Trainium3 Datacenters
A good report if you don't know this area.
Anastasi provides a good explainer for those of you who are not the sort to track electronics manufacturing trends. Amazon's Trainium3 is an AI-specific processor. There's a clip of an executive claiming they were getting fivefold more tokens per watt than … I thought he meant Nvidia, but Perplexity says that's versus the Trainium2.
Amazon bought a corn field in a town of 1,900 people and dropped a two gigawatt datacenter on it. Some 40% of datacenter power goes to cooling, so that's 1.2 gigawatts of actual machinery. My SWAG is that for every five watts of inference there's a watt of CPU, disk, network, etc., so call it a gigawatt of direct AI capacity.
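Spelled out as a quick sketch, the same arithmetic looks like this; the 40% cooling share and the five-to-one inference-to-overhead split are the SWAG figures above, nothing more rigorous, and the variable names are just for illustration.

```python
# Back-of-envelope power math for the two gigawatt site.
facility_gw = 2.0                     # nameplate facility power
cooling_share = 0.40                  # guessed fraction of power spent on cooling
it_gw = facility_gw * (1 - cooling_share)   # power left for actual machinery

inference_to_overhead = 5             # 5 W of inference per 1 W of CPU/disk/network
ai_gw = it_gw * inference_to_overhead / (inference_to_overhead + 1)

print(f"IT load: {it_gw:.1f} GW, direct AI capacity: {ai_gw:.1f} GW")
# -> IT load: 1.2 GW, direct AI capacity: 1.0 GW
```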
What does this mean?
Here are some unordered thoughts that I am not yet capable of addressing in long form.
Extropic’s thermodynamic computing, CHIPX’s quantum photonics, and the long slow rise of spintronics all stand to redraw the power requirements landscape.
There are a variety of articles on here about fusion, which is also a game changer.
AI is shifting the electronics industry to a wartime footing. There is talk of data or AI sovereignty, and if you want a GPU in 2026 it's going to be like shopping for a new Ford in 1942: producers are swinging to a datacenter focus and prices are soaring.
Way back when, a CPU was one thing and floating point was a whole additional chip; then simple video was eclipsed by GPUs, and now you can get all of that on a single entry-level chip. I suspect that some of the new AI functions are always going to be services; they'll never devolve even to single-rack datacenter players, let alone desktops.
Model Mayhem is at a fever pitch, but that's not going to continue. There is all this chatter about "agentic" this or that, and having a forty-year-old computer science education, I recognize the patterns here, just as I do with hardware. We are entering a time where a model invocation is treated like a single function call: there are subroutines, stack/heap-like storage, and complex data structures are evolving. A rough sketch of what I mean follows.
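The function names and the call_model() stub below are hypothetical, just enough to show model invocations behaving like subroutines, with a small stack of intermediate results standing in for the storage.

```python
def call_model(prompt: str) -> str:
    """Placeholder for whatever provider API you actually use."""
    raise NotImplementedError("wire up your provider here")

def summarize(text: str) -> str:
    # A "subroutine": one model invocation, one job.
    return call_model(f"Summarize:\n{text}")

def extract_action_items(text: str) -> str:
    return call_model(f"List the action items in:\n{text}")

def process_meeting_notes(notes: str) -> dict:
    stack = []                              # intermediate results, like a call frame
    stack.append(summarize(notes))
    stack.append(extract_action_items(notes))
    actions = stack.pop()
    summary = stack.pop()
    return {"summary": summary, "actions": actions}   # a structured return value
```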
It’s a mad, mad, mad, MAD world. In my dreams I’ve been transported back to the mid-1980s, only instead of marveling at the oncoming sub-micron processors and imperative programming’s object-oriented paradigm, I’m watching chip features crack sub-nanometer and trying to interface my squishy brain with an equally squishy large language model.
When it comes up in conversation, I never wish to be twenty five again, but I’d happily allow thirty five. Today … just for a minute … I can smell Iowa State’s library as I remember finding that book about Smalltalk in the programming languages section. My computer at the time ran a 998 kilohertz MOS 6502. I’m typing this out on a 3200 megahertz Apple M1 Pro. I hereby confess to being … curious beyond the years I will be allotted … regarding what will happen next.

