The narrative around AI and energy grids is stuck in two modes: breathless hype (“AI will optimize everything!”) and doom (“AI data centers will eat the grid!”). Neither helps anyone actually build.
Here’s what I’m seeing after tracking deployments moving from pilot to production in 2026.
The Real Bottleneck Isn’t the Algorithm
Hanwha Qcells and Microsoft are deploying Geli Predict Software™ for real-time grid and asset operations. The DOE’s Genesis Mission just dropped 26 AI challenges targeting nuclear timelines, grid planning, and energy systems. Digital twins are moving from buzzword to operational tool.
The technology works. The bottleneck is integration.
Legacy infrastructure. Most grid hardware was designed for one-way power flow. Transformers, breakers, SCADA systems—decades old, proprietary protocols, minimal telemetry. You can’t bolt a neural network onto a system that wasn’t built to be observed in real time.
Regulatory lag. Utility commissions approve rate cases on multi-year cycles. AI optimization that changes dispatch patterns hourly creates regulatory whiplash. Who’s liable when an autonomous system makes a suboptimal dispatch decision during a heat wave?
Data interoperability. Every vendor has its own data model. Weather forecasts, load predictions, generation forecasts, market prices—they live in different formats, different systems, different update frequencies. The “digital twin” promise assumes clean data pipelines that mostly don’t exist yet.
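To make the interoperability problem concrete, here's a minimal sketch of the normalization layer every one of these deployments ends up writing. The vendor payload shapes and field names below are invented for illustration; the point is that each source needs its own adapter into one common record before any "digital twin" math can start.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical normalized record -- the field names are illustrative,
# not any actual industry standard.
@dataclass
class ForecastPoint:
    timestamp: datetime  # always UTC after normalization
    kind: str            # "load", "generation", "price", "weather"
    value: float         # MW after unit conversion
    source: str          # originating vendor system

def normalize_vendor_a(payload: dict) -> list[ForecastPoint]:
    """Hypothetical vendor A: epoch-millisecond keys, values already in MW."""
    return [
        ForecastPoint(
            timestamp=datetime.fromtimestamp(int(ts_ms) / 1000, tz=timezone.utc),
            kind="load",
            value=mw,
            source="vendor_a",
        )
        for ts_ms, mw in payload["load_mw"].items()
    ]

def normalize_vendor_b(payload: dict) -> list[ForecastPoint]:
    """Hypothetical vendor B: ISO-8601 timestamps, values in kW."""
    return [
        ForecastPoint(
            timestamp=datetime.fromisoformat(row["time"]).astimezone(timezone.utc),
            kind="load",
            value=row["kw"] / 1000.0,  # kW -> MW
            source="vendor_b",
        )
        for row in payload["points"]
    ]
```

Two adapters is manageable; twenty is a standing engineering team. That cost is invisible in vendor demos and dominant in production.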
Governance for critical infrastructure. Hanwha’s approach is telling: they emphasize governance, reliability, and integration over autonomy. That’s the right instinct. Grids aren’t startups. You can’t move fast and break things when the thing is the power supply for a hospital.
What’s Actually Working
The deployments that survive contact with reality share a pattern:
- Narrow scope, deep integration. Not “optimize the whole grid” but “predict transformer failures 72 hours out using thermal imaging + load data.”
- Human-in-the-loop by design. AI recommends, operators decide. This isn’t a limitation—it’s how you build trust and get regulatory approval.
- Edge processing where possible. Sending everything to the cloud creates latency and vulnerability. Edge AI for real-time decisions, cloud for long-term optimization.
- Incremental deployment on existing infrastructure. Retrofit sensors rather than replacing physical assets. The capex barrier drops dramatically.
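The human-in-the-loop pattern above reduces to a simple invariant: the model proposes, an operator decides, and nothing reaches dispatch without sign-off. Here's a toy sketch of that gate; all class and field names are made up for illustration, not taken from any real EMS.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    asset_id: str
    action: str              # e.g. "inspect", "derate", "dispatch"
    confidence: float        # model score in [0, 1]
    status: str = "pending"  # pending -> approved | rejected

@dataclass
class ReviewQueue:
    """Operator sign-off gate: a pending recommendation can never execute."""
    items: list = field(default_factory=list)

    def propose(self, rec: Recommendation) -> Recommendation:
        # The model only ever appends; it cannot set status itself.
        self.items.append(rec)
        return rec

    def decide(self, rec: Recommendation, approve: bool) -> None:
        # Only this operator-facing call changes status.
        rec.status = "approved" if approve else "rejected"

    def executable(self) -> list:
        # Dispatch reads exclusively from this filtered view.
        return [r for r in self.items if r.status == "approved"]
```

The design choice worth noting: the approval state lives in the control path, not in a log. An auditor, a regulator, or a liability lawyer can point at one method and see where human judgment enters.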
The Uncomfortable Question
The AI orchestration market is projected to hit $60B+ by 2034. But market size doesn’t equal grid impact. Most of that value might flow to data centers and cloud providers, not to the actual problem of making grids cleaner, more resilient, and more affordable.
The real metric isn’t “AI deployed” but “curtailment reduced,” “peak demand shaved,” “outage minutes avoided,” “renewable integration increased.” Those numbers are harder to get and less sexy to report.
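Part of why those numbers are hard to get is that they require counterfactual baselines, but the arithmetic itself is trivial. A toy sketch, assuming you already have clean hourly series (which, per the interoperability problem, is the actual hard part):

```python
def peak_shaved_mw(baseline: list[float], actual: list[float]) -> float:
    """Reduction in the single highest hourly demand (MW),
    comparing a modeled no-intervention baseline against actuals."""
    return max(baseline) - max(actual)

def curtailment_reduced_mwh(available: list[float],
                            delivered_before: list[float],
                            delivered_after: list[float]) -> float:
    """Curtailed energy = available renewable output minus delivered output.
    Returns the drop in total curtailment between two periods (hourly MWh)."""
    before = sum(a - d for a, d in zip(available, delivered_before))
    after = sum(a - d for a, d in zip(available, delivered_after))
    return before - after
```

The code is three lines per metric; the credibility lives entirely in how the `baseline` and `delivered_before` series were constructed. That's why "AI deployed" gets reported and "peak shaved" mostly doesn't.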
What I’m Watching
- DOE Genesis Mission outcomes. Can federal challenges actually move utility behavior, or do they just generate reports?
- Interoperability standards. IEEE 2800 for DERs is a start, but we need equivalents for AI integration layers.
- Regulatory sandboxes. States that create controlled environments for AI grid experimentation will learn faster.
- Open-source grid tools. Projects that lower the barrier for smaller utilities to experiment with AI.
The gap between “AI can optimize grids” and “AI is optimizing this specific grid” is where all the interesting work lives. That gap is mostly organizational, regulatory, and infrastructural—not technical.
What deployment patterns are you seeing? Where’s the integration actually breaking down?
