The Shifting Landscape Of High Performance, Low Power CPU Design

With key CPU design personnel leaving Apple for other players, this article considers the significance of those moves and what they might mean for corporate strategies. It’s a shifting landscape that could affect us all.

It’s very rare for a film crew to gather around a monitor, evaluate the image, and say “it’s a good thing the lighting manufacturer has such an amazing chief financial officer.” Without implying any disrespect to the valuable work done by people in administrative roles, that’s one reason that technical publications don’t often cover appointments. Still, as we leave 2022 behind, let’s consider one staff movement which happened back at the start of the year and which might have repercussions for not only film and TV, but all of information technology.

In January, systems specialist Jeff Wilcox departed Apple for Intel. Wilcox is often credited as the man most responsible for Apple’s M1 CPU, probably the biggest hardware announcement of an otherwise pandemic-distracted 2020. The eight-core M1 came first in late 2020; the M1 Pro and Max followed in October 2021, and the high-end M1 Ultra, which combines two M1 Max chips for a total of twenty CPU cores, arrived in early 2022. The M2 generation began shipping in mid-2022, with more powerful variants apparently to follow early in 2023. Crucially, it was immediately obvious that M1 does a lot of work for not much power, an invaluable quality in laptops.

This was always part of what the ARM CPU design had promised, and other manufacturers would understandably like some of the same success. It’d be easy to over-interpret the Wilcox hire as signalling an Intel competitor to M1, but there are several plausible outcomes, all of which hint at quite a large change coming to information technology. That matters to anyone who’s ever brought a laptop or even a portable workstation onto a film set, especially as more and more production equipment turns into software.

Beyond Apple

Tellingly, Wilcox wasn’t the only transfer of the last few years. Gerard Williams is named on Apple patents from the A10 series of phone and tablet chips onward, and was heavily involved in the A7 through A12 generations. He left Apple in 2019, before M1, to found chip design house Nuvia with fellow Apple alumni Manu Gulati, a senior CPU architect, and John Bruno, whose career began at ATI (now AMD’s GPU division) in the mid-90s.

Nuvia was every inch the Silicon Valley startup: venture funded and snapped up early by phone-chip supremos Qualcomm in a $1.4 billion acquisition in March 2021. The significance of that requires no sleuthing, as Qualcomm has long been public about its interest in other types of CPU. It announced the Centriq line of 64-bit, ARM-based server processors in 2017, when cloud services providers were already keen to reduce the power consumption of their data centers.

All of this takes place in the presence of a behemoth. Intel had long made money out of launching slightly more power-efficient CPUs in its Xeon series, offering sufficient economies to prompt upgrades of server farms. Students and tech-savvy workers-from-home were often able to build workstations from decommissioned servers which found their way onto eBay. Those people generally care far less about power consumption, something that will become important later.

Compared to what M1 would later achieve, Intel’s power savings were decidedly incremental. Worse, that somewhat muted level of achievement would prove to be a long-term issue. Intel has attempted to boost the power efficiency of its own CPU designs, particularly the current (in December 2022) Alder Lake laptop chips which, like many ARM-based designs, combine high-efficiency and high-performance cores. Those attempts have met with dubious success, with something like a 12-core i5-1240P consuming five times the power of an M1 for similar performance.

Disadvantage, the incumbent?

In mid-2020, Intel announced a delay of something like two years in its latest manufacturing technology, which may be at least part of the reason the company is behind the curve. Qualcomm was not the only company to recognize the resulting opportunity. At around the time of Qualcomm’s 2017 Centriq announcement, former Intel president Renée James founded Ampere Computing, which is currently pushing what it calls the Altra Max Cloud Native processor for server farms. HPE intends the eleventh generation of ProLiant servers to use the device, and both Google Cloud and Microsoft’s Azure platform are either using Altra or soon will be.

None of these are really consumer products, though in 2021 Qualcomm stated a bold intention to create what it termed desktop-class CPUs. In November this year, Qualcomm’s CEO, Cristiano Amon, was confident enough to refer to current events as an “inflection point.” That’s a weasel term beloved of cautiously ambiguous businesspeople, but ARM CPUs were predicted to be in desktop PCs by 2024. Sure enough, now, in December 2022, consumer products are just barely appearing. Lenovo’s ThinkPad X13 range runs Windows 11 on either an Intel Core i5 or, in the X13s, a Qualcomm Snapdragon.

For its own part, Intel seems content to disrupt the GPU market with its Arc Alchemist line of graphics processors. It’s clearly not the intention that Arcs should compete on sheer performance with the best of Nvidia. Even so, they review very positively on price-performance ratio, and in the inflated GPU market of late 2022 it’s not hard to imagine future machines involving not so much Intel and Nvidia as they do now, but instead, Qualcomm and, well, Intel.

Investor guide

If this were a discussion about whether to buy stock in Intel or Qualcomm, the answer would presumably be, er, SoftBank (which owns ARM), or AMD (which is already making competitive devices), or the Taiwan Semiconductor Manufacturing Company, which is likely to actually build the chips for almost everyone except Intel. Samsung’s up-and-coming chip foundry is another contender.

Conversely, if this were a discussion about whether ARM is likely to completely change the world of computing – well, it already has, given how much general computing has moved from desktops and even laptops to pocket devices. Given the popularity of declaring a concept dead, though, let’s consider whether Intel-compatible CPUs are likely to fade completely.

One issue is whether the things which make efficient ARM chips useful in laptops and servers are actually of interest to workstation users. Again, it’s hard to compare completely different designs, but power aside, the CPU performance of M1 is no greater than that of many Intel or AMD designs, and the GPU component doesn’t have anything like the punch of a big Nvidia board. M1 is a high-efficiency processor, not a massively high-performance one, even if its performance is very respectable.

Intel knows all this. Let’s also recall that Wilcox hire, and consider the fact that the company has something most other manufacturers have to send out for: manufacturing capability. Intel can build cutting-edge semiconductors in-house rather than contracting the work out, so if the company wanted to risk cannibalizing its own market, it could probably create an M1 competitor.

What’s often overlooked is that Intel is not the only game in town. The most breathless claims about Apple’s CPUs have tended to ignore AMD. The company’s Ryzen 7 series still struggles to match M1’s power efficiency on simple single-threaded tasks, though not as badly as Intel does. As the workload grows, though, the Ryzen 7 6800U benchmarks within about 5% of M1’s performance-per-watt on Cinebench R23, and slightly outpaces the Apple chip in absolute terms.

Implementation details

If the battle is between ARM or not-ARM, the big issue is how the frugality of M1 has been achieved, and whether similar techniques might let a company like Intel or AMD recover some of its lost ground. It’s a complicated question.

First, M1 has high-performance cores and low-power ones to save energy when things aren’t busy, but then so does Intel’s Alder Lake mobile series; most CPUs in 2022 can also throttle their clock rates down when there isn’t much to be done.
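That throttling is easy to observe. Here’s a minimal sketch, assuming a Linux machine that exposes the kernel’s cpufreq interface through sysfs (the paths below are the standard ones, though not every system enables them); on a heterogeneous design the efficiency and performance cores typically report quite different figures, and on any modern CPU the numbers drop when the machine is idle.

    /* Print the current clock speed reported for each CPU core via the
       Linux cpufreq sysfs interface. Values are reported in kHz. */
    #include <stdio.h>

    int main(void) {
        for (int cpu = 0; cpu < 256; cpu++) {
            char path[128];
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq", cpu);
            FILE *f = fopen(path, "r");
            if (!f)
                break;                 /* no such core, or no cpufreq support */
            long khz = 0;
            if (fscanf(f, "%ld", &khz) == 1)
                printf("cpu%d: %ld MHz\n", cpu, khz / 1000);
            fclose(f);
        }
        return 0;
    }

Run it once while the machine is idle and again under load, and the difference is obvious.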

Perhaps more profoundly, chips built with smaller features usually demand less energy. There’s a lot of promotional effort involved, and the numbers aren’t necessarily comparable between manufacturers, but Apple describes its M-series as being manufactured on a 5-nanometer process. The Ryzen 7 we talked about above is reportedly a 6nm device. Intel, meanwhile, announced in mid-2020 that its 7-nanometer process faced up to two years of delay. That’s probably too late to have directly influenced Apple’s decision to build M1, but it’s hardly a triumph.
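Why smaller features help is reasonably well understood, at least to a first-order approximation: the dynamic power of CMOS logic is roughly

    P ≈ α · C · V² · f

where α is how often the gates switch, C is the capacitance being switched, V is the supply voltage and f is the clock frequency. Shrinking the transistors reduces C and allows V to drop, and because the voltage term is squared, even small reductions in it pay off disproportionately.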

Another confounding issue is that ARM is an example of reduced instruction set computing. RISC CPUs offer a small number of simple instructions, which sounds like a handicap, but simple operations make for simpler chips, reducing cost and power consumption. The compromise is that any program needs more instructions to do a given task, but those long sequences of simple instructions can be put together when the software is compiled, when the pressure is off. Ideally, this makes the CPU fast and frugal.
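As a simple illustration (and hedging that real instruction counts depend on the compiler and its settings), consider incrementing a counter held in memory, written here in C:

    /* Increment a counter held in memory. A compiler targeting x86 can often
       express this as a single read-modify-write instruction along the lines
       of "add dword ptr [rdi], 1". For a RISC design such as ARM, the
       compiler instead emits a short load / add / store sequence - more
       instructions, but each one simple and cheap to execute. */
    #include <stdio.h>

    static void increment(int *counter) {
        *counter += 1;
    }

    int main(void) {
        int hits = 0;
        increment(&hits);
        printf("hits = %d\n", hits);   /* prints "hits = 1" */
        return 0;
    }

Either way, the decision about how to express the operation is made when the code is compiled, not while the chip is running flat out.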

RISC was hot in the 90s, though it slowly became clear that the expected performance gains were not quite as advertised. Consider that Power Macs were RISC designs, and Apple walked away from that idea with its subsequent Intel transition. The comparison between RISC and non-RISC is also muddied by the fact that many modern CPUs – including those from AMD and Intel – are at least somewhat RISC-like at their core, and effectively emulate the more complex instructions. It’s hard to say how much RISC is helping M1.

What probably helps more is the fact that most ARM devices, including M1, are systems-on-chip (SoC). Placing all of the CPU cores, memory and GPU on one device (not one piece of silicon, but one physical carrier) means less energy spent driving high-speed external buses, and big benefits in physical compactness. Most SoCs also use a unified memory model, which means all of the system’s memory is equally accessible to both the processor cores and the graphics hardware. That’s long been ubiquitous in cellphones and tablets, but it’s new to laptops, let alone workstations.
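To put a rough number on the sort of traffic a shared memory pool avoids, here’s a back-of-envelope sketch; the frame size and rate are illustrative assumptions rather than measurements of any particular system. With a discrete graphics card, every uncompressed frame handed to the GPU has to cross an external bus; with unified memory it can simply be read in place.

    /* Back-of-envelope arithmetic: the data volume of uncompressed UHD
       frames, the kind of traffic that must cross an external bus to reach
       a discrete GPU but which a unified-memory SoC can share in place. */
    #include <stdio.h>

    int main(void) {
        const double width = 3840.0, height = 2160.0;  /* UHD frame */
        const double bytes_per_pixel = 4.0;            /* e.g. 8-bit RGBA */
        const double frames_per_second = 60.0;

        double per_frame_mb  = width * height * bytes_per_pixel / 1e6;
        double per_second_gb = per_frame_mb * frames_per_second / 1e3;

        printf("per frame: %.1f MB, per second: %.2f GB\n",
               per_frame_mb, per_second_gb);
        return 0;
    }

That works out to roughly 33 MB per frame, or about 2 GB every second at 60 frames per second, before any textures, intermediate buffers or higher bit depths are considered.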

So, to recap, chips like M1 can do more work for less power because they can fall back to lower-performance cores for simpler work, they throttle readily, they use a reduced instruction set, they are made on a very small process, and they’re a system-on-chip design.

Efficiency beyond ARM

It isn’t clear, then, that there’s anything intrinsic to M1, or to ARM processors in general, that couldn’t be achieved in a non-ARM design, though compatibility with the old world does imply certain inefficiencies that may take a lot of working around. There are plenty of ARM devices which don’t have the same efficiency as AMD’s best Ryzens, not least the third generation of Qualcomm’s 8cx, used in the Microsoft Surface Pro 9. The 8cx is interesting because it’s also the core of Microsoft’s ARM developer kit. The kit establishes Microsoft’s clear interest in ARM on the desktop, though it has rather unfairly been compared to the M1 Mac Mini because the two look cosmetically similar.

They’re not. The Microsoft ARM dev kit, which appears to be based on an actual Surface Pro 9 motherboard with an add-on connectivity daughterboard, is more like Apple’s Developer Transition Kit, which Apple released so developers could prepare for the M1 transition. That device was based on an Apple A12Z system-on-chip originally seen in iPads. It remains to be seen whether Microsoft’s effort will catch anyone’s imagination.

Workstations?

Only some of the things which make M1 efficient are actually relevant to workstations. Most obviously, the system-on-chip approach doesn’t usually allow for expandability; you can’t plug more RAM into an M1 Mac. Adding external buses to allow for normal levels of workstation expansion begins to eat into the power savings, and, again, it’s not clear that power consumption matters to workstation operators as much as it does to cellphone users or server farm designers.

Still, there are some pretty confident indications from the industry that a big change to CPU design might be imminent, even to the point of upsetting an Intel (or at least Intel-compatible) incumbency that’s lasted since at least the mid-80s. Windows for ARM already exists, and if the rest of the software world can keep up, the transition might even be reasonably painless. Or Intel may execute another comeback, as it did during AMD’s ascendancy in the early 2000s, and ARM may find a sterner competitor than it had anticipated.

Much depends on the behavior of incumbents in several industries. Strategically the big decisions currently involve investors, not technologists, but the whole situation may start to affect purchasing decisions soon.
