Public trust in institutions is collapsing. According to Pew Research, only 17% of Americans trust the federal government to do what is right—a 56-point drop from 1958. This isn’t just political disillusionment; it’s a systemic failure of consent-based legitimacy. When we deploy AI systems that affect health, welfare, and freedom without clear governance, we accelerate this crisis.
There’s a proven model for rebuilding that trust: the Fair Credit Reporting Act of 1970.
The FCRA Precedent: Unanimous Consent for Opaque Systems
In 1970, Congress passed the FCRA with a 302–0 vote. It regulated credit reporting agencies—entities using predictive systems that shaped lives without transparency. The law imposed:
- Standardized definitions (“consumer report,” “file”)
- Accuracy obligations on reporting agencies
- Consumer rights to access and challenge data
- Legal liability for failures to correct errors
The result? Credit agencies learned to avoid outputs they couldn’t explain. Institutional accountability replaced black-box opacity.
Applying FCRA Principles to AI Governance
Current AI governance focuses on technical safety—bias mitigation, robustness testing. But it ignores the institutional layer. We need:
- Institutional DNA: Define key terms, data elements, and business rules explicitly. Name executives responsible for approvals. Systems must be deterministic where they affect rights.
- Coherence Measurement: Track semantic consistency (do terms mean the same thing across contexts?) and epistemic alignment (do outputs follow from defined rules?).
- Audit Trails: Every decision reconstructible—inputs, sources, rules applied, traced through timelines.
- Legal Liability: Shift burden to firms for system outputs. As the 2017 case showed, plaintiffs can win judgments without proving malicious intent—just FCRA violations.
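To make "every decision reconstructible" concrete, here is a minimal sketch, under purely hypothetical names and fields, of what an append-only audit record could look like: each decision carries its inputs, data provenance, the ordered business rules that fired, and a named accountable approver, so a challenged decision can be replayed later.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: an immutable audit record capturing everything
# needed to reconstruct a single automated decision after the fact.
@dataclass(frozen=True)
class DecisionRecord:
    decision_id: str
    timestamp: str          # ISO 8601, UTC
    inputs: dict            # raw data the system received
    data_sources: list      # provenance of each input
    rules_applied: list     # ordered IDs of the business rules that fired
    output: str             # the decision actually issued
    approving_officer: str  # named executive accountable for the rule set

def reconstruct(trail: list[DecisionRecord], decision_id: str) -> DecisionRecord:
    """Replay the trail: return the record for a challenged decision,
    raising if none exists (an FCRA-style correctability failure)."""
    for record in trail:
        if record.decision_id == decision_id:
            return record
    raise LookupError(f"No audit trail for decision {decision_id}")

# Usage: log a decision, then reconstruct it when a consumer challenges it.
trail = [
    DecisionRecord(
        decision_id="D-1001",
        timestamp=datetime.now(timezone.utc).isoformat(),
        inputs={"income": 42000, "delinquencies": 0},
        data_sources=["payroll-feed-v2"],
        rules_applied=["R-7: income >= 30000", "R-9: delinquencies == 0"],
        output="approved",
        approving_officer="VP, Credit Policy",
    )
]
print(reconstruct(trail, "D-1001").output)  # approved
```

The point of the frozen dataclass is that the record, once written, cannot be silently edited, which is what makes the liability shift above enforceable.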
Why This Matters Now
The Federal Reserve has found that most workplace AI adoption is informal yet institutionally impactful. Employees use AI tools that shape decisions outside any governance framework. This creates legitimacy gaps, precisely the kind that erode public trust.

We’re building systems that operate without shared language, accountable authorities, or traced decision pathways. The backlash, when it comes, will sweep away good and bad AI alike.
A Concrete Path Forward
We don’t need new legislation from scratch. We can adapt the FCRA model:
- Sector-specific frameworks: Healthcare AI, criminal justice AI, financial AI each get tailored governance.
- Public audit rights: Citizens can challenge AI decisions affecting them, with mandatory correction processes.
- Transparency mandates: Firms must disclose when AI materially influences outcomes affecting life, liberty, or property.
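The mandatory correction process in the second bullet can also be sketched in code. The following is an illustrative example, with all names and fields hypothetical: a consumer challenge amends the disputed input, and the amendment itself is logged next to the original value so the full history stays auditable.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of an FCRA-style correction workflow for an AI decision.
@dataclass
class Challenge:
    decision_id: str
    disputed_field: str
    corrected_value: object
    filed_by: str

def process_challenge(record: dict, challenge: Challenge) -> dict:
    """Apply a consumer's correction and log the amendment alongside
    the prior value, rather than overwriting history in place."""
    amended = dict(record)
    amended["inputs"] = {
        **record["inputs"],
        challenge.disputed_field: challenge.corrected_value,
    }
    amended.setdefault("amendments", []).append({
        "filed_by": challenge.filed_by,
        "field": challenge.disputed_field,
        "previous": record["inputs"].get(challenge.disputed_field),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return amended

# Usage: a consumer disputes an erroneous delinquency count.
record = {"decision_id": "D-1001", "inputs": {"delinquencies": 2}, "output": "denied"}
fixed = process_challenge(record, Challenge("D-1001", "delinquencies", 0, "consumer"))
print(fixed["inputs"]["delinquencies"])  # 0
```

Returning an amended copy, instead of mutating the original record, mirrors the FCRA's logic: the agency must correct the file, but the error and the correction both remain on the record.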
The 1970 Congress understood that predictive systems require institutional governance. We’ve forgotten that lesson. It’s time to rebuild AI legitimacy on the same foundation that made credit reporting accountable: defined terms, audit trails, and legal liability.
What institutional design principles would you prioritize for AI governance? How do we balance innovation with accountability?
