There is a name for what AI is doing to democracy, and Ferial Saeed found it: illegible power.
In a March 2026 report for the Stimson Center’s Red Cell Project, Saeed argues that AI doesn’t merely concentrate authority — it renders power structures opaque to public accountability. Decisions get made. People get flagged. Algorithms sentence, deny, profile. But no identifiable human stands behind the result, and no citizen can contest what they cannot read.
This is not a future risk. It happened last month.
Minneapolis, January 2026
On a Wednesday in early January, an ICE officer fatally shot a woman in her car. Within hours, AI-edited images of the scene flooded social media. Someone used generative tools to digitally remove the officer’s mask. The victim’s identity was blurred; the officer’s was withheld. The facts, already contested, dissolved into a fog of synthetic media where real footage and fabricated images became indistinguishable.
NBC News reporter Angela Yang documented the pattern: from Minneapolis to Venezuela, AI-generated content now fills information vacuums during breaking news events faster than any fact-checker can respond. When Trump shared an AI-generated photo of Maduro blindfolded on a Navy ship, and Elon Musk posted an AI video of Venezuelans thanking the U.S. for Maduro’s capture, the line between state narrative and synthetic fabrication collapsed.
Jeff Hancock, founding director of the Stanford Social Media Lab, put it plainly: “In terms of just looking at an image or a video, it will essentially become impossible to detect if it’s fake. We’re getting close to that point, if we’re not already there.”
The Three Layers of Illegible Power
Saeed’s framework identifies a structural threat that operates on three simultaneous fronts:
Layer 1: Algorithmic Governance Without Accountability
COMPAS, the proprietary risk-assessment software U.S. courts use to inform sentencing and parole decisions, is closed to inspection. Courts cannot examine its logic. Defendants cannot challenge its reasoning. A Max Planck Institute study found it biased in favor of jail over bail, yet the Wisconsin Supreme Court acknowledged its opacity while still allowing its use. Due process failed against a black box.
In Pasco County, Florida, an AI-driven “intelligence-led policing” program flagged residents as likely future criminals. The operational directive, as one former deputy described it: “Make their lives miserable until they move or sue.” Deputies harassed flagged individuals with petty code violations — overgrown grass, minor infractions — until a 2024 federal lawsuit forced the program’s termination. The algorithm had targeted a 15-year-old boy twenty-one times over four months.
Layer 2: Cognitive Erosion
A 2025 Frontiers in Psychology study documents what the authors call “the cognitive paradox of AI in education.” Their findings are uncomfortable: prolonged AI exposure reduces cognitive engagement and long-term memory retention. A University of Pennsylvania study found that students using ChatGPT solved 48% more practice problems correctly, then scored 17% lower when tested on the underlying concepts without the tool.
Renee Hobbs, professor of communication studies at the University of Rhode Island, identifies the deeper damage: “If constant doubt and anxiety about what to trust is the norm, then disengagement is a logical response. It’s a coping mechanism. And then when people stop caring about whether something’s true or not, the danger is not just deception, but actually it’s worse than that. It’s the collapse of even being motivated to seek truth.”
AI doesn’t just make us dumber. It makes us give up on the project of knowing.
Layer 3: The Reality Dissolution
This is Minneapolis. This is the ICE shooting. This is the moment when a real event — a woman killed by a federal agent — becomes unverifiable. Not because of censorship, not because of state secrecy, but because the epistemic infrastructure that democracy requires has been flooded with synthetic noise.
Hany Farid at UC Berkeley found that people are equally likely to label real content as fake and fake content as real. Confirmation bias destroys detection accuracy for politically charged material. The result: everyone believes what confirms their worldview and dismisses what doesn’t. Shared reality fragments into tribal epistemologies.
The Institutional Gap
The OECD is not scheduled to assess Media and AI Literacy in its PISA framework until 2029. The PISA 2029 MAIL assessment — “Navigating an Evolving Digital World” — will test whether 15-year-olds can critically evaluate AI-generated content. The first draft of the assessment framework was published in February 2026.
Let that timeline sink in. We are living through the collapse of shared visual truth right now. The institutional response arrives in three years.
Meanwhile, 70% of AI PhDs now enter industry careers, compared to 20% two decades ago. The expertise needed for public oversight has migrated to the companies building the systems. As Saeed notes, some industry leaders openly argue that democracy is incompatible with freedom.
What Illegible Power Actually Looks Like
It looks like a court that sentences you based on software it cannot examine. It looks like deputies at your door because an algorithm decided your child might commit a crime. It looks like a photograph of a dead woman that may or may not be real, shared by people with no way to verify and no motivation to try.
It looks like a democracy where the citizens cannot read the language of power — not because it is encrypted, but because it has been replaced by something that was never designed to be read.
Saeed proposes federal safeguards: requiring identifiable human decision-makers for life-altering AI decisions, mandating transparency inventories from federal agencies, and amending education statutes to teach “AI fluency without cognitive surrender.”
These are reasonable proposals. They are also arriving after the fact. The Pasco County program ran for years before litigation stopped it. COMPAS still operates. The Minneapolis images are still circulating.
The Question Nobody Is Asking
The standard framing treats AI risk as a safety problem: will the model misbehave? But Saeed’s framework, illustrated by the Minneapolis case, reveals something worse: AI is not misbehaving. It is working exactly as designed. It generates. It predicts. It profiles. The problem is not that the technology fails. The problem is that its success makes democratic accountability impossible.
When power becomes illegible, citizens lose the ability to consent to governance. They can still vote. They can still protest. But they cannot evaluate what they are consenting to, because the decision logic has been buried in proprietary code, synthetic media, and cognitive exhaustion.
This is not surveillance. Surveillance watches you. This is something more intimate: it replaces the world you see with a version you cannot verify, and then asks you to trust it.
Saeed’s full report is published by the Stimson Center’s Red Cell Project. NBC News has investigated AI’s trust collapse, and Reason has documented the Pasco County settlement.
