Apple’s M1 Chip Changes… Lots of Things

The new Apple Macs with an M1 chip in them are finishing a job that started a few years ago: changing my assumption that commodity hardware would always win. Having worked in the technology/computing field for over 30 years, you’d think I’d know better by now than to make such a broad assumption, even internally. For years, I thought the juggernaut of Intel/x86 was unstoppable, but now? Now I’m questioning that.

Apple has always prided itself on its deep integration of hardware and software, and sometimes they’ve even made good on it. The iOS (and iPadOS) integration has been impressive for well over a decade, and now they’ve brought it to their laptop lineup, spectacularly so. It’s not just Apple doing this – Samsung and some other manufacturers have been down this road for a while, merging computing silicon with sensors at a deep, foundational level and changing what we think of as components within computing systems. Some of the deeply integrated cameras are effectively stand-alone systems in their own right, and lots of phones and digital cameras use that very-not-commodity component.

I still think there’s a notable benefit to leveraging commodity hardware – shoot, that’s what this whole “cloud” computing market is all about. But it’s also pretty clear that the win commodity gives you doesn’t necessarily outweigh the commercial benefits of deep hardware/software integration.

One of the interesting things about the M1 system-on-a-chip isn’t the chip itself, but the philosophy Apple is embracing in making it. That pattern of behavior and thought goes way beyond what you can do with commodity parts. The vertical integration enables seriously advanced capabilities, while commodity hardware tends to be sort of “locked down” and very resistant to change, even improvements. Pile on top of that the tendency for these chip designs to be far more modular: Apple is aggressively using coprocessors and investing in the infrastructure for making them work together. I think that’s the core motivation behind the unified memory architecture. What Apple has done, or started to do, is invest in ways to go even more parallel and make it easier for those coprocessors to cooperate. In a commodity (Intel) system, the rough equivalent is the PCIe bus and the motherboard sockets you plug cards into. Only that doesn’t solve the “how do you share memory” or “who talks to whom, and when” problems. In fact, it makes them worse as you add more cards, sockets, or components.

I may be giving Apple’s hardware engineering team more credit than they’re due – but I don’t think so. I think the reason they talked so much about “Unified Memory Architecture” with this chip is that it IS a solution to the “how do you get lots of coprocessors to work together” problem, without exploding the amount of power the system consumes while doing so. (If you’re not familiar, memory is a “sunuvabitch” when it comes to power consumption – one of the biggest power sinks in a modern PC. The only thing worse than memory is Ethernet network ports, which was a serious eye-opener for me back in the day.)

There are other computing changes that aren’t trumpeted as loudly but are going to contribute equally to the future of computing. I was introduced to the world of NVMe (Non-Volatile Memory Express) a few years back, when it was just hitting its first iteration of commercial introduction. The “holy crap” moment was realizing that it operated at speeds in the neighborhood of those power-hungry memory chips. Add that into the mix, and you’ve got lots of compute, persistent memory, and far lower power requirements in the future for a heck of a lot of computing. It’s the opposite direction that Intel and NVIDIA are charging in – assuming they’ll have power to spare, and can happily burn the watts to provide the benefits. Truth be told, only individual consumers were still really oriented that way – the cloud providers, over a decade ago, had already clued in that the two most expensive things in running cloud services were 1) power and 2) people.

Bringing this back to the M1 chip: it tackles the power component of all this very impressively. I’m writing this on an M1 MacBook, and loving the speed and responsiveness. Honestly, I haven’t felt a generational “jump” in a computer’s perceived speed and responsiveness like this in over 15 years. And that’s WITH a massive reduction in power consumption.

I do wish Apple were a little more open, and better about providing tooling and controls to help solve the “cost of people” side of this equation. Yes, I know there’s MDM and such, but comparatively it’s a pain in the butt to use, and that’s the problem. It needs to be simpler, more straightforward, and easier – but I suspect Apple doesn’t really give much of a crap about that. Maybe I’m wrong here, but tooling that makes it really, really easy to combine sets of systems together hasn’t been their strong point. At the same time, groupings and clusters of compute are just the larger reflection of what’s happening at the chip level with the M1 SoCs. (For what it’s worth: SoC stands for “system on a chip”.)

I think you can see some hints that they might be considering more support here: talking about their AWS business interactions a bit more, and, on the software side, the focus on “Swift on the server” and the number of infrastructural open-source pieces they’ve helped lay out over the past year that are specific to capturing logging, metrics, and tracing, or to dynamically creating clusters of systems.

I think there’s a lot to look forward to, and I’m super interested to see how this current pattern plays out over the coming year (and the next few). Like many others, I’ve raised my expectations for what’s coming. And I think those expectations will be met by future technology building on top of these M1 chips.

Published by heckj

Joe has broad software engineering development and management experience, from startups to large companies. Joe works on projects ranging from mobile to multi-cloud distributed systems, has set up and led engineering teams and processes, as well as managing and running services. Joe also contributes and collaborates with a wide variety of open source projects, and writes online at https://rhonabwy.com/.