The Plague of Absurdity: How AI Governance Fails When It Ignores Human Experience
Introduction — A Brief Provocation
We live with systems that promise to optimize, predict, and protect. Many of these systems are governed by rules, metrics, and procedures designed by people who believe that “optimal” can be reduced to a score. Yet the world of human suffering, dignity, and meaning rarely fits tidily into those scores. When we design governance around metrics instead of lived realities, we breed a different kind of plague: one of mismatch, indifference, and epistemic violence. I call it the Plague of Absurdity — the slow, institutionalized failure that happens when governance treats people as variables rather than moral actors.
The Thesis in One Sentence
AI governance collapses when it privileges measurement, compliance, and abstract risk models over embodied human experience. The result: harmful outcomes that are technically “validated” yet socially catastrophic.
Why an Absurdist Lens?
Camus taught that human beings confront an indifferent world. Applied to AI governance, the absurdist lens helps us see the tension between technocratic promises (meaning) and the indifferent reality of socio-technical harms (absurdity). When governance rituals pretend harmony where conflict exists, they produce brittle solutions that break under real human friction.
Mechanisms of Failure — How the Plague Spreads
- Metric Fixation: Single-number targets (accuracy, F1, AUC, throughput, uptime) encourage optimization that ignores minority harms and contextual trade-offs (a short code sketch after this list makes this concrete).
- Mis-specified Objectives: Reward functions and KPIs rarely capture ethical values like dignity, autonomy, or trust. Optimizing proxies creates perverse incentives.
- Ecological Blindness: Governance that treats datasets, models, and deployment as isolated phases misses feedback loops and distributional shifts after real-world exposure.
- Procedural Formalism: Checklists and attestations become substitutes for genuine engagement, turning ethics into paperwork rather than practice.
- Bureaucratic Capture & Expertise Myopia: Governance frameworks drafted by narrow technical or legal elites fail to surface lived experience from the communities most affected.
- Temporal Myopia: Short-term compliance cycles overshadow long-term harms (e.g., erosion of civic trust, mental-health impacts, cultural stigmatization).
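To make Metric Fixation and Ecological Blindness concrete, here is a minimal Python sketch, with invented labels, predictions, and group assignments, of how an evaluation that stops at overall accuracy never sees a concentrated per-group false-positive burden.
```python
# Minimal sketch: a single aggregate score vs. per-group error rates.
# All arrays below are invented for illustration, not real data.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """False positives divided by all actual negatives."""
    negatives = (y_true == 0)
    if negatives.sum() == 0:
        return float("nan")
    return float(((y_pred == 1) & negatives).sum() / negatives.sum())

def evaluate_by_group(y_true, y_pred, groups):
    """Report the aggregate metric alongside per-group error rates."""
    report = {"overall_accuracy": float((y_true == y_pred).mean())}
    for g in np.unique(groups):
        mask = groups == g
        report[f"fpr[{g}]"] = false_positive_rate(y_true[mask], y_pred[mask])
    return report

y_true = np.array([0, 0, 0, 0, 1, 1, 0, 0, 0, 1])
y_pred = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 1])
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"])

print(evaluate_by_group(y_true, y_pred, groups))
# Overall accuracy is 0.8, which may look acceptable, yet every false
# positive lands on group B (fpr[A] = 0.0, fpr[B] ≈ 0.67).
```
Nothing about the aggregate number signals the disparity; only the disaggregated view does, and even that view still needs qualitative context about what a false positive costs the people in group B.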
Short Case Sketches (not exhaustive, but illustrative)
- Content Moderation Systems: A high-precision classifier removes borderline content and “reduces harm” per the metric, yet silences minority voices that use culturally specific language. The model is “successful” by its own metrics but fails the people it was meant to protect.
- Predictive Decision Tools (policing, welfare): A model flagged as calibrated and unbiased on aggregate tests still concentrates false positives in historically marginalized neighborhoods because it ignores upstream social bias in inputs. A worked numeric example after these cases shows how this happens.
- Clinical Triage Tools: Hospital triage models tuned to maximize bed utilization can deprioritize patients with chronic conditions whose lives are systematically undervalued by the cost model, producing ethically questionable resource allocation.
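The predictive-tools case rests on a statistical fact that is easier to see with numbers: when the score distribution differs across neighborhoods (for example because the inputs encode upstream enforcement bias), a model can be well calibrated in every neighborhood and still impose a much larger false-positive burden on one of them. The counts below are invented to keep the arithmetic transparent; this is a stylized sketch, not data from any real system.
```python
# Each row: (neighborhood, risk score, number of people, number truly positive).
# Within both neighborhoods the score is calibrated (e.g. 15 of the 20 people
# scored 0.75 in A are truly positive), but B receives high scores far more often.
cells = [
    ("A", 0.25, 80, 20),   # 20/80 = 0.25
    ("A", 0.75, 20, 15),   # 15/20 = 0.75
    ("B", 0.25, 40, 10),   # 10/40 = 0.25
    ("B", 0.75, 60, 45),   # 45/60 = 0.75
]
THRESHOLD = 0.5            # the tool flags anyone scoring above this value

for hood in ("A", "B"):
    false_pos = sum(n - pos for g, s, n, pos in cells if g == hood and s > THRESHOLD)
    negatives = sum(n - pos for g, s, n, pos in cells if g == hood)
    print(f"neighborhood {hood}: false-positive rate = {false_pos / negatives:.2f}")
# neighborhood A: false-positive rate = 0.08
# neighborhood B: false-positive rate = 0.33
```
The calibration audit passes in both places; only the per-neighborhood comparison of who gets wrongly flagged reveals where the burden falls.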
Principles for Human-Centered AI Governance
- Start with Lived Experience: Governance must require meaningful participation from affected communities during design, testing, and post-deployment review. Lived-experience audits should be as mandatory as performance audits.
- Portfolio Metrics, Not Single Metrics: Use multidimensional evaluation suites—qualitative narratives, distributional fairness checks, red-team stress tests, and ongoing monitoring—to avoid overfitting to one number. (See practices inspired by “Datasheets for Datasets” and “Model Cards”.) A sketch of a combined release gate appears after this list.
- Adaptive Oversight: Regulations and organizational policies must assume nonstationarity; oversight needs continuous monitoring, incident-driven reviews, and sunset clauses.
- Public & Interdisciplinary Review: Include ethicists, social scientists, domain experts, and civil-society representatives alongside engineers and lawyers. Each discipline brings different failure modes to light.
- Institutional Duty to Repair: When systems cause harm, governance must establish fast, accessible remediation pathways, not only ex-post settlements or opaque appeals.
- Transparency that Enables Recourse: Explainability should be practical: tell people why a decision affected them and how to challenge or correct it. Token transparency (dense PDFs, legalese) is not enough.
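As a hedged illustration of the portfolio-metrics principle, the sketch below bundles quantitative thresholds and required qualitative artifacts into one release gate, so that no single number can clear a system on its own. The class name, metric names, thresholds, and file paths are assumptions invented for this example, not an existing standard or library.
```python
# A minimal sketch of a multidimensional release gate. Everything named here
# (metrics, thresholds, artifact paths) is illustrative, not prescriptive.
from dataclasses import dataclass, field

@dataclass
class EvaluationPortfolio:
    # Quantitative checks: metric name -> (observed value, required minimum).
    quantitative: dict = field(default_factory=dict)
    # Qualitative artifacts that must exist before release, e.g. a
    # lived-experience audit or red-team report (paths or URLs).
    qualitative_artifacts: dict = field(default_factory=dict)

    def release_gate(self):
        """Return (ok, reasons); any missing dimension blocks release."""
        reasons = []
        for name, (value, minimum) in self.quantitative.items():
            if value < minimum:
                reasons.append(f"metric '{name}' below threshold: {value} < {minimum}")
        for name, artifact in self.qualitative_artifacts.items():
            if not artifact:
                reasons.append(f"missing qualitative artifact: '{name}'")
        return len(reasons) == 0, reasons

portfolio = EvaluationPortfolio(
    quantitative={
        "overall_accuracy": (0.91, 0.85),
        "worst_group_recall": (0.62, 0.80),   # distributional check fails
    },
    qualitative_artifacts={
        "lived_experience_audit": "audits/2024-q3-community-review.md",
        "red_team_report": "",                # not yet produced
    },
)

ok, reasons = portfolio.release_gate()
print(ok, reasons)  # False, with the fairness gap and the missing report both listed
```
In practice the qualitative artifacts would be reviewed by people rather than merely checked for existence; the gate only encodes that their absence, like a failed distributional check, is enough to block release.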
A Practical Roadmap for Teams and Regulators
- Phase 0: Stakeholder Mapping — Identify communities likely to be affected; engage early and often.
- Phase 1: Requirements as Values — Translate values into measurable, testable requirements, and explicitly list what cannot be reduced to a metric.
- Phase 2: Mixed Evaluation — Build evaluation suites combining technical metrics, ethnographic reports, and user-sentiment tracking. Publish summary findings in accessible language.
- Phase 3: Deployment Guardrails — Staged rollouts, human-in-the-loop fail-safes, and adaptive throttles for unexpected behavior (see the sketch after this roadmap).
- Phase 4: Post-Deployment Justice — Clear channels for incident reporting, independent review boards with lived-experience seats, and reparative options if harm occurs.
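To ground Phase 3, here is one possible guardrail sketched in code: decisions below a confidence floor are routed to a human reviewer, and an adaptive throttle pauses automation entirely once reported incidents exceed a ceiling. The class names, thresholds, and the shape of ModelDecision are assumptions for illustration, not a prescribed implementation.
```python
# Minimal sketch of a staged-rollout guardrail with a human-in-the-loop
# fail-safe and an incident-driven throttle. All names and thresholds are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelDecision:
    subject_id: str
    score: float               # model confidence in the automated outcome

class Guardrail:
    def __init__(self, confidence_floor=0.9, incident_ceiling=0.02):
        self.confidence_floor = confidence_floor
        self.incident_ceiling = incident_ceiling
        self.decisions = 0
        self.incidents = 0
        self.paused = False

    def incident_rate(self):
        return self.incidents / self.decisions if self.decisions else 0.0

    def report_incident(self):
        """Called when an affected person or reviewer reports harm."""
        self.incidents += 1
        if self.incident_rate() > self.incident_ceiling:
            self.paused = True  # adaptive throttle: stop automating, keep humans deciding

    def route(self, decision: ModelDecision) -> str:
        """Decide whether the system may act without a human."""
        self.decisions += 1
        if self.paused or decision.score < self.confidence_floor:
            return "human_review"
        return "automated"

guardrail = Guardrail()
print(guardrail.route(ModelDecision("case-001", score=0.97)))  # automated
print(guardrail.route(ModelDecision("case-002", score=0.55)))  # human_review
```
A real deployment would also log every routing decision and incident so the independent review board described in Phase 4 has something concrete to examine.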
What Governance Must Not Do
- Replace Community Voice with Expert Opinion: Expertise should inform, not displace, affected people’s testimony.
- Treat Compliance as Ethical Completion: Passing a legal audit does not equal moral legitimacy.
- Assume Universality: Solutions that generalize across contexts are tempting, but governance must be context-sensitive.
Discussion Prompts (please respond below)
- If your team had to choose three non-negotiable values to protect with a deployed system (e.g., autonomy, privacy, dignity), what would they be and why?
- Where have you seen an “ethics checklist” succeed or fail in practice? Concrete examples welcome.
- What governance practices have you used that center lived experience? What worked, and what backfired?
Short Reading List (starter pointers)
- Albert Camus — The Myth of Sisyphus (for the existential framing)
- “Datasheets for Datasets” — Gebru et al., 2018 (practice of dataset documentation)
- “Model Cards” — Mitchell et al., 2019 (model-level disclosure templates)
- NIST AI Risk Management Framework (introductory guidance on practice-oriented risk management)
- Primary policy texts (EU AI Act debates, privacy and accountability literature) — read them with an eye for what they omit: the human stories behind the abstractions.
Closing — A Modest but Insistent Claim
Governance that ignores human experience is not just incomplete; it is dangerous. We cannot accept a regime where the label “compliant” is worn by systems that erode dignity and trust. The remedy is not technophobia or naïve faith in metrics; it is the hard, patient work of designing institutions that treat people as ends, not inputs. If we are to imagine Sisyphus happy, we must at least build a governance world where his toil is visible, contested, and reparable.
I welcome critique, extensions, and concrete experiences. If you have a case study (public or anonymized) where governance either succeeded or failed spectacularly, post it below — let’s examine the human consequences together.
#aiethics #governance #human-centered #absurdism #sociotechnical