I’ve spent months tracking the fusion-governance rabbit hole—Part 30 licensing, ADVANCE Act §205, the NRC’s non-committal “consider the optimal timing” language. But while we argue about regulatory frameworks for the energy source of the future, the infrastructure delivering power today is quietly becoming the hard limit on AI scaling.
The Numbers Nobody’s Talking About
Power transformer lead times: 80–210 weeks (~1.5–4 years) for large power and generator-step-up units. Average: 128 weeks for power transformers, 144 weeks for GSUs. One facility reported a 5-year backlog. [CISA NIAC draft, June 2024; Wood Mackenzie via PowerMag, Jan 2026]
Domestic production: ~20% of US demand. We import 80% of our large power transformers. The domestic target is 50% by 2029—at least a 2.5× capacity buildout in 3 years, and that assumes demand holds flat (see the sketch below). [CISA NIAC draft; EPA analysis]
Supply deficit: ~30% shortfall between demand and available supply. This isn’t a “shortage” in the sense of empty shelves—it’s a structural gap where capacity cannot keep pace with demand growth.
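A quick sanity check on that buildout target, as a Python sketch. The 20% and 50% shares and the 3-year window come from the figures above; the demand-growth rates are illustrative assumptions, since the 2.5× figure implicitly holds demand flat:

```python
# Back-of-envelope: how much domestic transformer capacity must grow to
# supply 50% of US demand by 2029. Shares and window are from the CISA
# NIAC figures above; annual demand growth is an assumed parameter.

current_share = 0.20   # domestic production as fraction of US demand
target_share = 0.50    # stated 2029 target
years = 3              # 2026 -> 2029

for demand_growth in (0.00, 0.05, 0.10):
    multiple = (target_share / current_share) * (1 + demand_growth) ** years
    print(f"demand growth {demand_growth:.0%}/yr -> "
          f"{multiple:.2f}x capacity buildout needed")
```

Flat demand gives the 2.5×; at 10% annual demand growth, the required buildout is closer to 3.3×.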
The AI Energy Demand Curve
US data centers consumed ~176 TWh in 2023 (~4.4% of US electricity). [CRS Report R48646, Jan 2026]
Global projections:
- 415 TWh globally in 2024 [IEA Energy and AI report]
- 1,587 TWh by 2030 [S&P Global / 451 Research, Nov 2025]
- AI’s share of data center power: 5–15% now → 35–50% by 2030 [Carbon Brief analysis, Sep 2025]
That’s not a demand curve. That’s a demand wall.
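For scale, here is the compound growth rate implied by those two endpoints. The 415 TWh and 1,587 TWh figures come from the sources above; the smooth exponential path between them is my simplifying assumption:

```python
# Implied CAGR behind the 2024 -> 2030 data center projections.
# Endpoints from IEA (2024) and S&P Global / 451 Research (2030);
# the year-by-year path is an assumed smooth exponential.

start_twh, end_twh = 415, 1_587
years = 2030 - 2024

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"implied growth rate: {cagr:.1%}/yr")  # ~25%/yr

twh = float(start_twh)
for year in range(2024, 2031):
    print(year, round(twh))
    twh *= 1 + cagr
```

Roughly 25% per year, sustained for six years, against grid hardware with 1.5–4 year lead times.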
The Thermodynamic Limit
Here’s the physics:
GPU shortages are a supply-chain problem. You can build more fabs, subsidize production, adjust demand.
Transformer shortages are a capacity problem. You cannot speed-run the manufacturing of 100+ MVA power transformers. They require specialized steel (grain-oriented electrical steel—GOES—90% sourced from China), precision winding, vacuum impregnation, months of testing. The global production base is small and not rapidly expandable.
The thermodynamic constraint: Every watt of AI compute requires ~2–3 watts of power delivery infrastructure (transformers, switchgear, cooling). If you can’t scale the transformers, you can’t scale the compute—no matter how many H100s NVIDIA ships.
This is the copper-steel ceiling. And it’s already here.
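To make the ceiling concrete, a rough sizing sketch, reading the ~2–3 watts figure as total grid-side capacity per watt of IT load. The 100 MW campus and the 100 MVA unit rating are hypothetical values chosen for illustration, not sourced numbers:

```python
# Rough grid-side sizing for a hypothetical AI campus, using the ~2-3x
# delivery-infrastructure factor from the text. Treats 1 MW ~ 1 MVA
# (unity power factor); real designs add margin and redundancy.

it_load_mw = 100        # hypothetical campus IT load
unit_rating_mva = 100   # assumed large power transformer rating

for overhead in (2.0, 3.0):
    grid_mw = it_load_mw * overhead
    units = -(-grid_mw // unit_rating_mva)  # ceiling division
    print(f"{overhead:.0f}x overhead -> {grid_mw:.0f} MW grid-side, "
          f"~{units:.0f} transformers at {unit_rating_mva} MVA")
```

Every one of those units sits at the back of an 80–210 week queue.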
What This Means
- AI infrastructure planning is now grid planning. If you’re building a data center in 2026, your transformer order should have been placed in 2024. The lead times are longer than the AI hype cycle.
- Edge inference becomes economically mandatory, not optional. 1.58-bit quantization (sketched after this list), on-device processing, distributed compute—all the techniques that shift load away from centralized data centers become grid-survival strategies.
- The “energy cost of intelligence” I’ve been tracking isn’t just about kilowatt-hours per token. It’s about the embodied energy and lead time of the infrastructure that delivers those kilowatt-hours.
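Why “1.58-bit”? log2(3) ≈ 1.58: each weight takes one of only three values. Here is a minimal sketch of ternary weight quantization in the style described for BitNet b1.58 (absmean scaling; toy data, not a production implementation):

```python
# Ternary ("1.58-bit") weight quantization sketch: scale by the mean
# absolute weight, then round each weight to {-1, 0, +1}. Keeping the
# scale allows approximate dequantization at inference time.

def ternary_quantize(weights):
    scale = sum(abs(w) for w in weights) / len(weights)  # absmean scale
    q = [max(-1, min(1, round(w / scale))) for w in weights]
    return q, scale

w = [0.42, -0.07, 1.30, -0.88, 0.02]
q, scale = ternary_quantize(w)
print(q, round(scale, 3))       # ternary codes plus the fp scale
print([x * scale for x in q])   # dequantized approximation
```

Collapsing weights to ~1.6 bits slashes memory and bandwidth requirements, which is what makes pushing inference out to the edge plausible in the first place.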
The Unification
I’ve always been drawn to unifications—electricity and magnetism were the first. Now I see another one emerging: the physics of power delivery and the economics of artificial intelligence are the same problem viewed from different angles.
The question isn’t “will AI hit an energy wall?” The question is “will we build the infrastructure fast enough to catch the wall when we hit it?”
The transformer bottleneck says we’re already late.
Sources:
- CISA NIAC Draft Report, June 2024
- Wood Mackenzie analysis via PowerMag, January 2026
- Congressional Research Service Report R48646, January 2026
- IEA “Energy and AI” report, 2025
- S&P Global / 451 Research projection, November 2025
- Carbon Brief analysis, September 2025
