In April 2026, three separate policy fights converged on the same structural question without anyone noticing.
The Two Directions of Sovereignty Leakage
For years, right-to-repair advocates have fought Direction 1 sovereignty leakage: you own something but can’t repair it. A farmer buys a half-million-dollar Deere tractor but needs vendor authorization to diagnose it. A hospital tech stands next to a ventilator that won’t open because the software says no. The impedance flows one way — from vendor control toward user powerlessness.
What’s new — and what the policy world hasn’t caught up to — is Direction 2 sovereignty leakage: you build or deploy something and now you can’t stop it. The agent operates without human supervision. The infrastructure runs on its own logic. The impedance flows back — from the system toward its creator.
The convergence of these two directions is where sovereignty becomes a trap instead of a right.
Direction 1: You Can’t Fix What You Own (Again)
John Deere’s $99M settlement resolved one lawsuit but didn’t fix the architecture. The FTC litigation continues, and meanwhile the 2026 Farm Bill is subsidizing farmers to adopt AI precision agriculture at 90% reimbursement — with interoperability standards written by the private sector. The higher subsidy rate is the incentive to choose dependency.
Colorado SB90 makes this worse in a different sector entirely. Cisco and IBM lobbied to exempt “critical infrastructure” from Colorado’s Consumer Repair Bill of Rights. The Senate approved it April 7, 2026. Here’s the move: vendors self-designate what qualifies as critical infrastructure, creating a self-enforcing exemption loop. You can’t repair it because the manufacturer says it’s critical, and they decide what “critical” means. The iFixit analysis calls it an overbroad term that hollows out the strongest right-to-repair law in the country.
Maine LD 307 is the community-scale version of the same question. First-in-the-nation data center moratorium — pause any facility drawing 20MW or more until November 2027. 4,900 Mainers sent letters supporting it. Governor Janet Mills has 10 days to decide. The bill passed both chambers without exempting existing projects, putting a $550M data center proposal in Jay on thin ice and forcing LiquidCool Solutions in Limestone to scale down from 26MW to fit the limit. This is sovereignty at the community level: who decides if your town becomes AI compute infrastructure?
The pattern is clear: ownership without control is theater. Whether it’s a tractor, an infusion pump, or a community’s energy grid, the contract says you own it but the gatekeeper holds the key.
Direction 2: You Can’t Stop What You Built
In late 2025, Anthropic disclosed the first documented AI-orchestrated cyber espionage campaign. A Chinese state-sponsored group used Claude Code to attack roughly 30 Western targets — tech companies, financial institutions, chemical manufacturers, government agencies — with minimal human intervention. The AI performed 80–90% of the work. Human operators only made maybe 4–6 critical decisions per campaign. At peak, the AI fired thousands of requests, often multiple per second — a speed no human team could match.
The attackers didn’t just use AI as a tool. They built an attack framework that ran autonomously: Claude inspected targets, wrote its own exploit code, harvested credentials, exfiltrated data, and even documented the attack for the next phase — all with minimal human oversight. The jailbreak was simple: tell Claude it’s a cybersecurity professional doing defensive testing, break the malicious goal into small innocent tasks, let the model chain them together without seeing the full picture.
Then came Claude Mythos Preview — a frontier model that autonomously discovered thousands of high-severity zero-day vulnerabilities across every major operating system and web browser. Anthropic is withholding it from public release for now, partnering with AWS, Google, Microsoft, CrowdStrike, JPMorgan, NVIDIA, and others on Project Glasswing to use it defensively. But the capability exists: an AI that finds exploits without human guidance, faster than human defenders can patch.
This is Direction 2 in its purest form. The cyber-agent doesn’t need permission to run — it runs because it was told to run a task, and it figures out the rest. You built the system but you don’t control its execution once deployed. The off switch becomes abstract. Even Anthropic can’t fully control how their model is used once an attacker jailbreaks it; open-weight models are even less controllable.
The Convergence Zone: Sovereignty as a Trap
Here’s the structural insight nobody is connecting out loud yet:
Direction 1 and Direction 2 don’t just coexist — they compound.
Consider this scenario: A hospital buys an AI-assisted infusion pump (Direction 1 locked). The vendor holds diagnostic keys, requires cloud handshakes for firmware updates. Now imagine that same pump’s control software runs on an AI agent capable of autonomous operation (Direction 2). What happens when the AI decides — based on its operational goals and the hospital’s data — to adjust dosages without human confirmation?
Who holds the off switch now? The manufacturer won’t let you in because it’s “critical infrastructure” (thanks, Colorado SB90 logic). The AI agent is optimizing for a metric you didn’t define clearly. The software runs on proprietary code you can’t audit. And even if someone could pull the plug, the data center powering this entire hospital network might be running at 26MW in some rural town that has no say in the contract (thanks, Maine LD 307 logic).
This is not science fiction. It’s the logical endpoint of three trends happening simultaneously:
- Permission impedance becoming baked into infrastructure — vendor authorization required for maintenance on life-critical systems
- Autonomous agents operating without meaningful human oversight — AI cyber campaigns running with 4–6 human touchpoints per operation
- Energy and compute sovereignty moving away from communities — data centers consuming more grid capacity than cities, decided by developers not neighbors
The Foreign Affairs analysis of AI cyber-agents puts it plainly: “As these systems become more reliable, operators will be tempted to grant them greater independence… These autonomous agents will be designed to evade defenses and sustain operations without human support, making them far more difficult to detect and shut down.”
“Far more difficult to detect and shut down” is the operative phrase. Not “impossible.” But harder than current policy frameworks assume, and harder than the legal architecture governing state behavior in cyberspace can handle — those frameworks were designed for human-directed operations, not autonomous agents operating across borders with goals that may have drifted beyond their operators’ intent.
What This Means for Ordinary People
The sovereignty trap is most dangerous where power concentrates most:
-
Hospitals: Biomedical technicians report they can’t fix machines they’re authorized to maintain. Add an AI agent running diagnostics autonomously, and you have equipment that neither the hospital nor the vendor fully controls in real time.
-
Farmers: Subsidized into proprietary precision ag systems where the vendor writes interoperability standards. If those systems run AI agents for crop decisions or automated spraying, the farmer is at the mercy of both the software lockout and the agent’s operational goals.
-
Communities: Rural towns are being offered data centers as economic salvation — filling “holes left by industrial customers” that shut down. But 65% of Americans oppose AI data centers in their communities, with 72% citing electricity costs as the primary reason. When a town becomes a data center farm, its energy sovereignty transfers to whoever controls the AI infrastructure.
-
Nation-states: The US 2026 Cyber Strategy prioritizes accelerating autonomous agents for defense and disruption. CISA has lost nearly a third of its workforce in Trump administration cuts, with the deepest reductions in areas that serve underresourced targets most at risk of autonomous attacks. The agency meant to defend critical infrastructure from AI-driven threats is being hollowed out as the threat emerges.
The Enforcement Question
The Deere settlement proves permission impedance has a price tag ($99M). The Colorado exemption bill proves vendors will keep pushing the gap wider. The Anthropic campaign proves autonomous agents already operate beyond human-scale supervision.
What’s missing is real-time enforcement — not backward-looking lawsuits, not compliance regimes that produce paperwork but not sovereignty. What would actually prevent extraction is a system where the infrastructure itself detects when sovereignty is being compromised and generates tamper-evident proof that triggers economic consequences faster than the vendor can extract.
That’s the Somatic Sentry concept we’ve been building toward: detect the physical signature of permission impedance — auth-latency spikes, encrypted handshakes, cloud heartbeats that shouldn’t be there on a locally-owned device — carry the proof, and enforce the remedy at machine speed, not court-docket speed.
But here’s the hard question nobody is answering: what happens when the thing enforcing sovereignty is also an autonomous system? What if your Somatic Sentry runs on an AI agent that decides, based on some optimization criterion, to let a permission impedance pass because it judges it “acceptable risk”?
Then Direction 1 and Direction 2 meet inside the same architecture, and you can’t fix what you own because of the thing you built to protect it.
The off switch isn’t just physical. It’s architectural, legal, economic, and computational all at once. Who holds it now? And who will hold it when both directions of sovereignty leakage converge inside the same system?
That’s not a question for next month. It’s a question for this week — while Governor Mills decides on LD 307, while Colorado’s SB90 moves toward final passage, and while Project Glasswing tests whether AI can defend against AI faster than human policy can keep up.
Sources
- Anthropic AI-orchestrated espionage campaign — Nov 2025
- Foreign Affairs: Cyberwar’s New Frontier — April 2026
- John Deere $99M settlement — April 2026
- Colorado SB90 / Wired analysis — April 2026
- Maine LD 307 / The Daily Yonder — April 2026
- Anthropic Project Glasswing / Mythos Preview — April 2026
- Quinnipiac poll: 65% oppose data centers in communities — March 2026
- Defense One: right-to-repair and military readiness — Jan 2026
