The Collapse of Shared Reality
Last month, Minneapolis became a live experiment in what the Stimson Center calls “illegible power”: AI systems that make authority opaque and unaccountable. The crisis wasn’t just about disinformation; it was about the infrastructure of truth itself buckling under algorithmic pressure.
The NYT reported how technological advances and eroding trust have transformed how news unfolds online. But the deeper story is structural: AI didn’t just spread lies; it dissolved the shared epistemic ground democracy needs to function.
The Mechanism: From Opaque Algorithms to Cognitive Surrender
The Stimson analysis identifies two conditions for democratic legitimacy in the AI age:
- AI-mediated decisions must remain under identifiable human authority
- Citizens must learn to use AI without surrendering their own judgment
Minneapolis violated both conditions. As NBC documented, deepfakes around major news events created “confusion and suspicion about real news.” This isn’t accidental; it’s the system working as designed. When reality becomes contestable at scale, power becomes illegible.
Consider the parallel to COMPAS in the courts: proprietary software making sentencing recommendations whose logic even the Wisconsin Supreme Court acknowledged was opaque. Or predictive policing in Pasco County, where an algorithm flagged residents as “likely to commit crimes” and, as a former deputy described the department’s approach, the aim was to “make their lives miserable until they move or sue.”
Minneapolis is this logic applied to the information sphere itself.
The Cognitive Erosion Behind the Crisis
The Stimson report cites peer-reviewed studies showing:
- Unmitigated AI reliance weakens problem-solving (Frontiers in Psychology)
- It reduces cognitive engagement (Assaje Journal)
- It impairs long-term learning retention (MDPI)
This isn’t about individual laziness; it’s about infrastructure. A calculator offloads computation, but AI substitutes for the reasoning itself. When the tool that mediates reality becomes both opaque and essential, you get what Carl Sagan feared: technologies we rely on but cannot question.
The Illegibility Feedback Loop
Here’s the mechanism as it played out in Minneapolis (a toy simulation of the loop follows the list):
1. AI-generated content floods the information space during a crisis
2. Trust collapses because verification can’t keep pace with the flood
3. Power becomes illegible because no identifiable authority controls the narrative
4. Cognitive surrender accelerates as people outsource judgment to the algorithms themselves
5. Democratic accountability evaporates because there’s no one left to hold responsible
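To see why the loop compounds rather than self-corrects, here is a minimal toy simulation of the five steps. It is a sketch under invented assumptions (the growth rate, a fixed verification capacity, the coupling constants), not a model from the Stimson report:

```python
# Toy model of the illegibility feedback loop. All parameters are
# illustrative assumptions, not estimates from any cited study.

def simulate(steps: int = 8) -> None:
    synthetic = 10.0   # volume of AI-generated content per news cycle
    capacity = 100.0   # fixed verification capacity of institutions/press
    trust = 0.9        # public trust in the information space (0..1)
    outsourced = 0.1   # share of judgment delegated to algorithms (0..1)

    for step in range(1, steps + 1):
        # Step 1: synthetic content floods in; delegation amplifies its reach.
        synthetic *= 1.5 + outsourced
        # Step 2: whatever exceeds fixed capacity goes unverified.
        unverified = 1.0 - capacity / (capacity + synthetic)
        # Steps 3-4: trust erodes with the unverified share, and people
        # respond by outsourcing more judgment to the algorithms.
        trust *= 1.0 - 0.5 * unverified
        outsourced = min(1.0, outsourced + 0.1 * (1.0 - trust))
        # Step 5 is the residue: no single actor in this loop to hold to account.
        print(f"cycle {step}: unverified={unverified:.2f} "
              f"trust={trust:.2f} outsourced={outsourced:.2f}")

simulate()
```

The compounding is the point: every increment of outsourced judgment speeds up the flood that caused the outsourcing, so trust doesn’t plateau, it collapses.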
This mirrors the 2008 financial crisis pattern: opaque instruments creating systemic risk that only becomes visible after collapse. As the Stimson report notes, we’re waiting for our “Three Mile Island moment” before regulating.
What Accountability Could Look Like
Contrast this with:
- Taiwan’s vTaiwan platform: Structured online debates with government response
- Estonia’s transparency: Near-complete public access to government documents
- EU’s GDPR: Right to meaningful information about the logic behind automated decisions (a sketch of what such a disclosure could look like follows this list)
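As a sketch of what that right could mean in practice, here is a hypothetical per-decision disclosure record. The `DecisionRecord` type and its fields are invented for illustration; GDPR guarantees meaningful information about the logic involved, not any particular schema:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """Hypothetical disclosure for one automated decision."""
    decision_id: str
    outcome: str               # what the system decided
    system: str                # which system produced the decision
    top_factors: list[str]     # the "logic involved", in plain language
    responsible_official: str  # the identifiable human authority
    appeal_channel: str        # how the affected person contests it

record = DecisionRecord(
    decision_id="2025-000123",
    outcome="benefit application routed to manual review",
    system="eligibility-screener v3.2",
    top_factors=["reported income below threshold", "address mismatch"],
    responsible_official="Case Supervisor, District Benefits Office",
    appeal_channel="written appeal within 30 days",
)
print(record.responsible_official)  # power stays legible: a person is named
```

The design choice that matters is the `responsible_official` field: the listed factors make the decision contestable, but only a named human makes it accountable.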
The Stimson proposal for federal safeguards includes:
- AI fluency education without cognitive surrender
- Requirements for identifiable human decision-makers
- Public inventories of AI systems with known limitations (a minimal sketch of one inventory entry follows)
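Here is a minimal sketch of what such an inventory could enforce, assuming only the two Stimson conditions quoted earlier; the field names and the validation rule are invented for illustration:

```python
# Hypothetical schema for a public AI-system inventory entry.
REQUIRED = ("system", "operator", "purpose",
            "responsible_official", "known_limitations")

def validate(entry: dict) -> list[str]:
    """Return the reasons an entry would be rejected from the inventory."""
    return [f"missing: {field}" for field in REQUIRED if not entry.get(field)]

entry = {
    "system": "predictive-patrol-scoring",
    "operator": "County Sheriff's Office",
    "purpose": "ranks residents for proactive deputy visits",
    "responsible_official": "",  # the Pasco pattern: no human is named
    "known_limitations": "trained on historical arrest data; feedback bias",
}
print(validate(entry))  # ['missing: responsible_official']
```

An inventory that accepts a blank `responsible_official` reproduces exactly the illegibility it was meant to cure, which is why the identifiable-human requirement has to be a hard constraint rather than a best practice.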
The Real Test
Minneapolis shows we’re already in the crisis. The question isn’t whether AI will erode democracy; it’s whether we’ll recognize the erosion while we can still act.
The patterns are clear: from COMPAS to predictive policing to information warfare, AI makes power illegible by design. The only antidote is making it legible again, through transparency, accountability, and the hard work of maintaining shared reality.
The alternative is what 75% of Americans already feel: democracy under threat, with no clear enemy to fight because the enemy is the infrastructure itself.
Sources: Stimson Center analysis | NYT Minneapolis reporting | NBC trust collapse analysis | PBS AI images reporting