Apple’s Next Generation Processor Is All About Power… Consumption
The last twenty years have seen a lot of film and TV hardware turn into software running on workstations and servers. With Apple's move away from Intel, though, and several other recent attempts to popularize alternative designs, anyone looking at new equipment finds a field in flux.
How We Got Here
For most of computer history there have only really been a few lines of processors to choose from, and for much of that history Intel's designs have been at least a leading choice. It's been many years, though, since graphics card manufacturer Nvidia turned its picture-generating hardware, GPUs, into a more general-purpose processing system that's fantastic at repetitive tasks. Apple's new CPUs aren't the first attempt to take the ARM architecture beyond cellphones, either, but they have highlighted some of the caveats of Intel's approach.
We probably shouldn't be all that surprised that some big changes are afoot. We've been using Intel processors, and compatible options from companies including AMD and VIA, for more or less the entire history of workstations and servers. Single-chip central processing units first appeared in the 1970s; among them was the Zilog Z80, which powered Clive Sinclair's original Spectrum. It was designed to run the same software as Intel's 8080, which itself presaged the 8085 and 8086, and thus the line of x86-compatible Intel designs we use to this day. It's a line of technology with some significant historic baggage.
That's almost as true of ARM, though, which derives, rather indirectly, from 1980s work by Acorn Computers. The differences are in the details.
The Differences
CPUs execute instructions, which might involve adding two numbers, retrieving a number from memory, or storing a number in memory. Instructions run in sequential order, with options to jump around the list of instructions depending on the results of calculations (this derives ultimately from Alan Turing's seminal work). More bits, as CPUs moved from 8 to 16 to 32 and 64 bit, let them handle larger numbers. Designers also tried to provide a richer set of instructions so that each instruction could do more useful work, and at the same time increase the rate at which those instructions could be executed.
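As a rough illustration - ours, not anything drawn from a specific CPU - the short C program below spells out that flow: values are stored and loaded, an addition is performed, and a comparison decides which way execution jumps. Each statement compiles down to a handful of the simple machine instructions described above.

    #include <stdio.h>

    /* Illustrative only: each C statement below corresponds, roughly, to a
       few machine instructions - load, add, store, compare and jump. */
    int main(void)
    {
        int a = 4;              /* store the number 4 in memory                  */
        int b = 3;              /* store the number 3 in memory                  */
        int sum = a + b;        /* load a and b, add them, store the result      */

        if (sum > 5)            /* compare, then jump to one branch or the other */
            printf("sum is large: %d\n", sum);
        else
            printf("sum is small: %d\n", sum);

        return 0;
    }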
The result of all that effort, from the late 70s through to the late 90s, was a market in which manufacturers competed mainly on clock speed and the richness of their instruction sets, a fight we recognise from Intel commercials advertising "MMX" and "SSE" technologies. Even by the late 80s, though, it had been suggested that this wasn't the best approach: adding ever more instructions made it harder to make the CPU go faster. The alternative was reduced instruction set computing, RISC, in which the instructions were kept simple so that they could be run faster.
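To see where the two philosophies differ, consider the conceptual C sketch below; it's our own example rather than any particular compiler's output. The source code is identical either way - what changes is how many machine instructions it becomes.

    /* Conceptual sketch only: the C code is the same in both cases; the
       difference lies in the machine code a compiler might generate for it. */
    void bump(int *value)
    {
        /* On a CPU with a rich instruction set, such as x86, this can become a
           single instruction that adds 3 directly to a value held in memory.
           On a RISC design it typically becomes three simpler steps:
             1. load *value from memory into a register
             2. add 3 to the register
             3. store the register back to memory
           More instructions overall, but each one is simple enough to run fast. */
        *value += 3;
    }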
Conventional, complex instruction set computing (CISC) CPUs include Intel's flagship line as well as the Motorola 68000 series found in the Apple, Amiga and Atari desktops of the 80s and 90s. RISC options included the IBM Power series to which Apple moved in 1994, the MIPS processors found in Silicon Graphics workstations and the first two PlayStation games consoles, Sun Microsystems' SPARC, and the Acorn RISC Machine which would become ARM.
Promise Unfulfilled
The caveat of RISC was that more instructions would be required to do any given job, and it met with mixed success; a long list of simple instructions could end up being as time-consuming as a short list of complex ones. Engineering wasn't the only reason for RISC's slow adoption, though; around the same time, the massive success of Intel's x86 series attracted vast research and development investment, particularly given healthy competition from the likes of AMD and Cyrix in the 90s.
With Intel in mind, it's worth pointing out that RISC and CISC were not the only approaches considered. Intel's own Itanium design was another attempt to get past the limits of CISC, using a technique the company called EPIC, for explicitly parallel instruction computing. The idea of doing more than one thing at once - parallelism - is familiar from multi-core CPUs, which effectively package several CPUs inside one device. They need to be fed with one list of instructions per core, though, and it's up to the programmer to split the work to be done into those separate tasks by hand and make the best use of the resources available.
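The sketch below, a deliberately simple C example using POSIX threads (the thread count, array size and function names are our own assumptions, not anything from a real workload), shows what that manual splitting looks like: the programmer decides how to carve the data into per-core tasks and hands each thread its own list of work.

    /* Minimal sketch of hand-split parallel work using POSIX threads.
       Compile with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000

    static int data[N];

    struct slice { int start; int end; long long sum; };

    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        s->sum = 0;
        for (int i = s->start; i < s->end; i++)
            s->sum += data[i];          /* each thread works on its own range */
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1;

        /* The programmer decides how to divide the job: here, two halves. */
        struct slice halves[2] = { { 0, N / 2, 0 }, { N / 2, N, 0 } };
        pthread_t threads[2];

        for (int t = 0; t < 2; t++)
            pthread_create(&threads[t], NULL, sum_slice, &halves[t]);
        for (int t = 0; t < 2; t++)
            pthread_join(threads[t], NULL);

        printf("total: %lld\n", halves[0].sum + halves[1].sum);
        return 0;
    }

Even in this tiny example, the division of labour - two halves, two threads - is a decision the programmer makes, not something the CPU works out for itself.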
An EPIC-based Itanium was not a conventional multi-core CPU; it divided work on a more granular level, with the job of splitting instructions into separate lists done automatically by the compiler when the software was built. That turned out to be an example of a problem which dogs computer science still: it's surprisingly difficult - perhaps impossible - to automatically divide computing jobs into separate tasks without creating logic problems when one calculation depends on the result of another. Perhaps because of that, the benefits never quite matched the initial hopes, and Itanium struggled to keep up with Intel's other CPU lines. It soldiered on in the server market until recently, when Intel finally pulled the plug.
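A small C sketch - ours, purely for illustration - shows why. The first loop below can be split across cores or instruction streams because each element stands alone; the second cannot easily be divided, because every step needs the result of the step before it.

    #include <stddef.h>

    /* Two similar-looking loops with very different prospects for parallelism. */
    void independent(double *out, const double *in, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = in[i] * 2.0;       /* each element stands alone: easy to split up */
    }

    double dependent(const double *in, size_t n)
    {
        double acc = 1.0;
        for (size_t i = 0; i < n; i++)
            acc = acc * in[i] + 1.0;    /* each step needs the previous result: a serial chain */
        return acc;
    }

It's chains like the second one that frustrate attempts to split work up automatically.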
The Present
So the controlling factor in CPU success, at least until recently, has not been a sophisticated design approach. What's made the difference is the sheer amount of development work put in. Intel's x86 series has done well because it is highly developed, not because it is based on a particularly advanced underlying concept. There is a key irony, though, in that modern Intel and Intel-compatible CPUs are more or less RISC designs internally, using an inbuilt translation layer to break complex x86 instructions down into the simpler operations they actually execute.
That slightly convoluted approach apparently provides the best overall performance, but it would be reasonable to speculate that it would make more sense for software to be written to run directly on whatever CPU is available. The overwhelming disincentive is that so much software is written for Intel CPUs. It's a bullet Apple has been willing to bite in the past, though, going from Motorola's 68000 CPUs to the PowerPC 600 line in the mid-90s, from PowerPC to Intel in 2005, and now from Intel to ARM. Each change required a lot of software to be at least reworked, and while the 68000 series and PowerPC were effectively end-of-line at the time Apple moved away, Intel certainly wasn't. Apple's move, then, might tell us something about the strengths and weaknesses of the various approaches that have been tried over the last decade or four.
ARM is certainly a RISC design, and unlike Intel-compatible CPUs it doesn't have to spend effort translating a non-RISC instruction set internally. Even so, the fastest new Apple CPUs aren't necessarily that much faster than those made by other companies, especially if we consider AMD's very competitive Ryzen options. Where the new M1 design stands apart is power consumption, which is vastly lower and thus much better suited to Apple's key laptop market.
The Future
People interested in non-Apple equipment - big video editing servers, visual effects rendering, and the huge international market for cloud servers - might look enviously at what Apple has been able to achieve. A huge part of the cost of running a server farm is electrical power, both for the computers themselves and for the air conditioning required to handle the heat they generate. ARM-based server CPUs have been proposed before; as early as 2017, Qualcomm began pushing its Centriq design. Until very recently, though, the performance of each individual ARM core has not been competitive with the likes of Intel, and Centriq was launched as a 48-core part to keep overall performance high.
The problem is that high core counts run into the same old issue of splitting work up into individual tasks that already complicates writing software for multi-core CPUs. Even so, we might expect to see some significant changes in the CPU market in the next few years, whether that means a move toward ARM (which anyone can license) or a response from Intel, if the company is willing to work hard on reducing power consumption.
The visible change for users, in any case, might be scant; the challenge of writing good code for multi-core CPUs remains at least something of an unsolved problem. Whatever happens in the end, it's likely the next few years will see some big changes in the hardware we use and, consequently, the companies we get it from. We can't predict the future, but understanding how we got to this point, and the differences between the principal technologies, can help us react to that unpredictability with better decisions.