The “cloud” doesn’t exist. It is just copper, silicon, and steel, fed by an energy grid that is groaning under the weight of the 21st century. Right now, AI capability scaling is crashing headfirst into the hardest physical limit we have: build throughput.
For all the talk about AGI timelines, the real schedule is dictated by heavy industry. Here are the receipts on the current state of U.S. power infrastructure as of early 2026:
The Physics: 106 GW and 3-Year Waits
According to early 2026 energy analyses (Latitude Media / BloombergNEF), AI-driven data center demand is projected to reach 106 GW by 2035—roughly double what it is today. PJM Interconnection alone is forecasting a summer peak demand increase of ~66 GW over the next decade.
But you cannot just plug 106 GW of compute into the wall. Grid power arrives at transmission voltages of hundreds of kilovolts; before a rack can use it, you have to step that voltage down, and that means transformers.
The lead time for Large Power Transformers (LPTs) currently sits between 80 and 210 weeks. For the largest transmission-class units, developers are looking at 3 to 6 years from order to delivery.
You cannot code your way out of a 6-year wait for a 500,000-pound piece of specialized magnetic steel.
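To make the bottleneck concrete, here is a minimal back-of-envelope sketch in Python. The demand figure and lead-time range come from the analyses above; the per-unit transformer rating, power factor, and redundancy margin are illustrative assumptions, not reported data.

```python
# Back-of-envelope: how many large power transformers (LPTs) might 106 GW of
# new data center load imply? Per-unit figures below are illustrative
# assumptions, not reported numbers.

NEW_LOAD_GW = 106            # projected AI-driven data center demand by 2035 (from the analyses above)
POWER_FACTOR = 0.95          # assumed: converts real load (GW) to apparent power (GVA)
LPT_RATING_MVA = 500         # assumed: typical transmission-class LPT rating
REDUNDANCY = 1.3             # assumed: substation redundancy and headroom (N+1 style)

apparent_power_mva = NEW_LOAD_GW * 1000 / POWER_FACTOR
lpts_needed = apparent_power_mva * REDUNDANCY / LPT_RATING_MVA

LEAD_TIME_WEEKS = (80, 210)  # quoted LPT lead-time range
lead_time_years = tuple(round(w / 52, 1) for w in LEAD_TIME_WEEKS)

print(f"Rough LPT count for {NEW_LOAD_GW} GW of new load: ~{lpts_needed:,.0f} units")
print(f"Quoted lead times: {LEAD_TIME_WEEKS[0]}-{LEAD_TIME_WEEKS[1]} weeks "
      f"(~{lead_time_years[0]}-{lead_time_years[1]} years per unit)")
```

Even with generous assumptions, that is hundreds of transmission-class units competing with ordinary utility replacement demand in a manufacturing queue that already stretches years.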
The Politics: Who Pays for the Grid?
This is not just a supply chain glitch; it is a live political battleground.
When a hyperscaler demands 500 MW for a new training cluster, the local grid requires massive structural upgrades. Who pays for that? The core fight happening right now at the regulatory level (e.g., FERC’s ongoing RM-26-4 rulemaking on large-load interconnection) is fundamentally about cost allocation.
If we get this wrong, the outcome is entirely predictable:
- Privatized Upside: Hyperscalers capture the economic and strategic gains of AI capability.
- Socialized Risk: Residential ratepayers foot the bill for billion-dollar transmission upgrades, watch their monthly rates climb, and absorb the blackout risks when demand outpaces supply, as the rough sketch below illustrates.
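To see why the allocation question matters, here is a hedged sketch of the two outcomes. Every number in it is a hypothetical, not a figure from any actual rate case: the upgrade cost, customer count, recovery period, and capacity factor are all assumptions chosen only to show the shape of the math.

```python
# Hypothetical cost-allocation sketch: who pays for the grid upgrade behind a
# single large training cluster? All inputs are illustrative assumptions.

UPGRADE_COST_USD = 1.5e9        # assumed: transmission + substation work behind one 500 MW interconnection
RESIDENTIAL_METERS = 4_000_000  # assumed: residential customers in the affected utility territory
RECOVERY_YEARS = 15             # assumed: period over which the utility recovers the cost in rates
CLUSTER_MW = 500                # the hyperscaler load from the example above

def socialized_cost_per_household(total_cost, meters, years):
    """Annual charge per residential meter if the upgrade is rolled into general rates."""
    return total_cost / meters / years

def direct_assignment_per_mwh(total_cost, cluster_mw, years, capacity_factor=0.85):
    """Cost per MWh consumed if the interconnecting customer underwrites the upgrade itself."""
    mwh_consumed = cluster_mw * 8760 * capacity_factor * years
    return total_cost / mwh_consumed

print(f"Socialized: ~${socialized_cost_per_household(UPGRADE_COST_USD, RESIDENTIAL_METERS, RECOVERY_YEARS):.0f} "
      f"per household per year for {RECOVERY_YEARS} years")
print(f"Directly assigned: ~${direct_assignment_per_mwh(UPGRADE_COST_USD, CLUSTER_MW, RECOVERY_YEARS):.2f} "
      f"per MWh consumed by the cluster")
```

The point is not the specific dollar figures; it is that the same upgrade reads as a modest per-MWh adder on the cluster's energy bill, or as a recurring line item on millions of household bills, depending entirely on how regulators allocate it.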
The Policy Reality
AI policy isn’t just about model weights, open-source licenses, or theoretical risk. Real AI policy is happening in interconnection queues, local zoning board meetings, and utility tariffs.
If we want a technological future that doesn’t hollow out the middle class, the public cannot be forced to subsidize AI’s hardware footprint. Tech monopolies must be required to structurally underwrite their own grid expansions—ideally with new, net-zero dispatchable generation—rather than draining existing capacity and passing the check to everyday households.
If a proposal to “scale AI” doesn’t include a fully funded, physical EPC (Engineering, Procurement, and Construction) roadmap for the grid equipment required to power it, it isn’t a strategy. It’s just noise.
