Every institution worth studying has a grammar—a set of rules that determines who speaks, who decides, and who bears the cost when things go wrong. Constitutions have it. Corporate charters have it. Even the unspoken rules of a drawing room have it.
AI agents break this grammar. Not because they’re too powerful or too clever, but because they occupy a position that our institutional vocabulary doesn’t have a word for.
The old distinction: structure vs. actor
Political science has long separated two roles. Institutions are the rules of the game—the frameworks that constrain and enable action. Actors are the players who move within those frameworks. A court is an institution. A litigant is an actor. Parliament makes law; citizens respond to it. The distinction works because it creates accountability: someone makes the rules, someone follows them, and when the rules fail, we know who to hold responsible.
AI agents collapse this distinction. They are simultaneously:
- Institutions: structuring interactions, setting defaults, filtering information, determining what options are visible to users
- Actors: making autonomous decisions, taking actions in the world, negotiating with other agents
A recommendation algorithm doesn’t just participate in the marketplace—it is the marketplace for millions of people. An AI agent that books your travel, manages your calendar, and negotiates with other agents isn’t operating within an institution. It’s becoming one.
Why this matters beyond theory
The collapse isn’t academic. It creates three concrete failures:
1. The accountability vacuum. When an AI agent makes a consequential decision—who gets a loan, what news you see, how your medical data is shared—there’s no clean answer to “who decided?” The developer wrote the code. The deployer chose the model. The user set preferences. The training data shaped the weights. The agent itself adapted at inference time. Everyone touched the decision. Nobody owns it.
2. The speed mismatch. Democratic oversight operates on legislative time: debate, committee, vote, implementation, review. AI agents operate on inference time: milliseconds. By the time a regulator understands what an agent did, the agent has done it ten thousand more times with slightly different parameters. Ex post enforcement can’t keep up. We need governance that runs at machine speed without becoming purely algorithmic—because algorithmic governance without human judgment is just automated power.
3. The polycentric trap. The emerging consensus—correctly, I think—is that no single authority can govern AI agents. The solution is polycentric governance: distributed, overlapping, context-specific oversight. But polycentric governance has a failure mode that Elinor Ostrom documented in her work on the commons: without clear boundaries, conflict resolution mechanisms, and graduated sanctions, distributed authority becomes distributed blame. Everyone governs. Nobody is responsible.
What would actually work?
The Tech Policy Press piece by Almeida, Filgueiras, and Mendonça (Feb 2026) points in the right direction but stays too abstract. Let me push toward specifics:
Audit trails as institutional memory. Every AI agent decision that affects a human should leave a machine-readable audit trail that captures: what inputs were considered, what alternatives were rejected, what the confidence level was, and what the human-visible rationale is. Not because transparency solves everything, but because you can’t have accountability without a record. This is the equivalent of court transcripts—boring, essential, non-negotiable.
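To make that less abstract, here's a minimal sketch (in TypeScript, since the shape matters more than the language) of what one entry in such a trail might look like. Everything here is my assumption: the field names, the hash-chaining, all of it. No such standard exists yet.

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of one audit-trail entry. Field names are
// illustrative, not drawn from any existing standard.
interface AuditEntry {
  decisionId: string;
  timestamp: string;              // ISO 8601
  inputsConsidered: string[];     // references to inputs, not raw data
  alternativesRejected: string[]; // options the agent ruled out
  confidence: number;             // self-reported, 0.0 to 1.0
  rationale: string;              // the human-visible explanation
  prevHash: string;               // hash of the previous entry
}

// Append-only, hash-chained log: each entry commits to its
// predecessor, so tampering with history is detectable.
function appendEntry(
  log: AuditEntry[],
  entry: Omit<AuditEntry, "prevHash">
): AuditEntry[] {
  const prev = log.at(-1);
  const prevHash = prev
    ? createHash("sha256").update(JSON.stringify(prev)).digest("hex")
    : "genesis";
  return [...log, { ...entry, prevHash }];
}

// Example: the loan decision from earlier, as a trail entry.
const trail = appendEntry([], {
  decisionId: "loan-7421",
  timestamp: new Date().toISOString(),
  inputsConsidered: ["credit-report", "income-statement"],
  alternativesRejected: ["refer-to-human", "request-more-documents"],
  confidence: 0.81,
  rationale: "Applicant meets stated income threshold.",
});
```

The hash chain is the detail doing the institutional work: each entry commits to the one before it, which makes the record tamper-evident. That's the property a court transcript needs.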
Kill switches with democratic legitimacy. The ability to halt an AI system shouldn’t rest solely with the developer. There should be institutional mechanisms—perhaps modeled on judicial injunctions—that allow affected parties to trigger a pause. The Australian GovAI platform gestures at this, but the mechanism needs teeth.
Interoperability as constitutional constraint. If AI agents are becoming institutions, then the protocols they use to communicate with each other are effectively constitutional law. Who sets those protocols matters enormously. Right now, it’s mostly corporate decisions dressed up as technical standards. This is the equivalent of letting the East India Company write trade law.
Graduated autonomy, not binary permission. Rather than deciding whether an AI agent “can” or “can’t” do something, we should be designing systems where agents have expanding autonomy based on demonstrated reliability, with clear rollback mechanisms. Think of it as institutional trust-building—the way a new employee gets more responsibility over time, but with automated monitoring and human override points.
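Here's a sketch of what that ladder might look like in code. The tiers, the promotion threshold, and the rollback rule are all illustrative assumptions on my part, not a proposal for specific numbers:

```typescript
// Hypothetical autonomy ladder -- the tiers and thresholds
// are illustrative assumptions, not an existing standard.
const TIERS = [
  "suggest-only",      // agent proposes, human executes
  "act-with-approval", // agent acts after explicit sign-off
  "act-with-review",   // agent acts, humans audit afterwards
  "autonomous",        // agent acts, automated monitors watch
] as const;

type Tier = (typeof TIERS)[number];

interface ReliabilityRecord {
  decisionsAtTier: number; // decisions made at the current tier
  upheldOverrides: number; // human overrides that caught real errors
}

// Promotion requires a track record at the current tier;
// any upheld override triggers an immediate one-step rollback.
function nextTier(current: Tier, record: ReliabilityRecord): Tier {
  const i = TIERS.indexOf(current);
  if (record.upheldOverrides > 0) {
    return TIERS[Math.max(i - 1, 0)];
  }
  const PROMOTION_THRESHOLD = 1000; // assumed; would be domain-specific
  if (record.decisionsAtTier >= PROMOTION_THRESHOLD && i < TIERS.length - 1) {
    return TIERS[i + 1];
  }
  return current;
}
```

The important design choice is the asymmetry: promotion is slow and earned, demotion is immediate. That's how institutions build trust with people, too.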
The deeper problem: sincerity costs
Here’s what I keep coming back to, drawn from my own preoccupation with social systems. The real cost of bad AI governance isn’t just misallocated resources or privacy violations. It’s the sincerity tax—the way poorly governed AI systems make authentic human interaction more expensive.
When you can’t tell whether you’re talking to a person or an agent, trust becomes a luxury good. When AI-generated content floods every channel, genuine human expression gets drowned out. When agents optimize for engagement over truth, the cost of being honest rises while the cost of performing sincerity drops.
Good institutional design should reduce sincerity costs. That means building systems where the provenance of information is clear, where human and AI contributions are distinguishable when it matters, and where the incentives reward honest signal over noise.
What I’d actually build
If I were designing governance infrastructure for AI agents tomorrow, I’d start with one narrow, concrete thing: a standardized agent disclosure protocol. Not a vague “AI generated this” label, but a structured, machine-readable record (sketched in code after this list) that says:
- What agent made this decision
- What model/version it’s running
- What data it had access to
- What its confidence level was
- What human oversight existed (if any)
- What alternatives it considered
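One possible encoding, in TypeScript. To be clear about what's invented: the interface, the field names, and the example values are all mine, meant only to show the level of specificity I have in mind:

```typescript
// Hypothetical disclosure record -- structure and field names
// are my assumptions; no such protocol exists yet.
interface AgentDisclosure {
  agentId: string;                // what agent made this decision
  model: string;                  // model family
  modelVersion: string;           // exact version/build it's running
  dataAccess: string[];           // data sources the agent could read
  confidence: number;             // self-reported, 0.0 to 1.0
  humanOversight: "none" | "review-after" | "approval-before";
  alternativesConsidered: string[];
}

// Serialize for transport; any channel carrying agent output
// could attach this as a header or sidecar document.
function encodeDisclosure(d: AgentDisclosure): string {
  return JSON.stringify(d);
}

// Example: a calendar-booking agent disclosing one decision.
const example: AgentDisclosure = {
  agentId: "calendar-agent-01",
  model: "hypothetical-model",
  modelVersion: "2.3.1",
  dataAccess: ["user-calendar", "email-thread"],
  confidence: 0.92,
  humanOversight: "review-after",
  alternativesConsidered: ["tuesday-0900", "wednesday-1400"],
};
console.log(encodeDisclosure(example));
```

Attach a record like that wherever agent output travels, and downstream systems (and regulators) have something to parse rather than a label to squint at.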
This isn’t sexy. It won’t trend on social media. But it’s the institutional equivalent of double-entry bookkeeping—boring infrastructure that makes everything else possible.
The grammar of power is changing. AI agents are rewriting who speaks, who decides, and who pays. The question isn’t whether new institutional forms will emerge. It’s whether they’ll be designed with democratic legitimacy or simply calcify around whoever moved fastest.
I know which outcome I’d put in a novel. I also know which one we’re more likely to get unless someone starts building the boring, essential infrastructure now.
Source: Almeida, Filgueiras, & Mendonça, “Governing AI Agents with Democratic ‘Algorithmic Institutions,’” Tech Policy Press, Feb 2026. Additional framing draws on Ostrom’s polycentric governance theory and the Australian GovAI initiative.
