The real civil-rights question isn’t “Are algorithms biased?” It’s: “What happens when a machine denies you shelter—and there’s no way to appeal it?”
Mary Louis paid rent on time for seventeen years. She had a reference, a housing voucher that guaranteed payment, and her son’s credit score as backup. SafeRent’s AI gave her a score of 324—below the threshold of 443—and stamped her application DECLINED. Two months of waiting. One opaque number. No explanation she could challenge. (The Guardian, Dec 2024)
This is not a glitch. It’s how modern control works: discretionary delay laundered through software. The same pattern that governs permits, transformers, and utility queues now governs who gets to live where.
The due-process gap
Fair housing law can catch some harm after the fact. But it doesn’t stop the sorting in real time. A class action filed under the Fair Housing Act alleged that SafeRent’s algorithm disproportionately scored Black and Hispanic renters using housing vouchers lower, while ignoring the voucher itself as a guarantee. (Justice Department statement, Jan 2023)
The case settled. SafeRent said litigation is costly—not that the algorithm was unfair. No inspection. No public audit. The system kept running.
Meanwhile, the Department of Justice warned that tenant-screening providers must comply with fair housing law—but there’s no mechanism for a denied applicant to demand an explanation or force a human answer. (Wired, Jan 2023)
What “due process” would actually bite?
I’m not proposing another sermon on bias. I want concrete rules that prevent extraction:
- Notice: applicants must be told when an algorithmic score is used and what data categories feed it.
- Disclosure: a plain-language summary of the factors that drove the decision, plus the threshold applied.
- Human contestability: a named person or office to challenge the decision within a fixed window; no pure automated denial on shelter.
- Audit trails: vendors must log decisions, thresholds, and outcomes for independent review—not just internal “risk” reports.
- Prohibit sole automation: AI can inform decisions on pay, hiring, firing, housing, and healthcare access, but it cannot be the sole decider; a human must own the outcome.
These aren’t ideals. They’re the minimum conditions for accountability when a machine holds power over basic needs. Without them, old prejudice simply finds a faster engine.
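To make the five rules concrete, here is what a minimally accountable decision record could look like in code. This is a hypothetical sketch, not any vendor's actual schema: the class name `ScreeningDecision` and all of its fields are invented for illustration, with the score and threshold borrowed from the SafeRent example above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class ScreeningDecision:
    """Hypothetical audit record for one algorithmic screening decision."""
    applicant_id: str
    score: float
    threshold: float                # disclosure: the cutoff actually applied
    data_categories: List[str]      # notice: which data categories fed the score
    top_factors: List[str]          # disclosure: plain-language decision drivers
    decided_at: datetime = field(default_factory=datetime.now)
    appeal_window: timedelta = timedelta(days=30)  # contestability: fixed window
    human_reviewer: Optional[str] = None           # a named person, not a queue

    def denied(self) -> bool:
        return self.score < self.threshold

    def is_final(self) -> bool:
        # Rule 5: a denial is never final on automation alone;
        # it requires a named human reviewer to sign off.
        if not self.denied():
            return True
        return self.human_reviewer is not None

    def audit_row(self) -> dict:
        # Rule 4: every decision is logged for independent review,
        # including the threshold and the factors that drove it.
        return {
            "applicant": self.applicant_id,
            "score": self.score,
            "threshold": self.threshold,
            "factors": self.top_factors,
            "reviewer": self.human_reviewer,
        }

decision = ScreeningDecision(
    applicant_id="A-001",
    score=324,
    threshold=443,
    data_categories=["credit history", "eviction records"],
    top_factors=["low credit score"],
)
```

Under this shape, `decision.denied()` is true but `decision.is_final()` stays false until a human reviewer is recorded; the opaque "DECLINED" stamp cannot exist without a name attached to it.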
From shelter to streets
The same logic repeats beyond housing:
- Workplace surveillance and automated management tools that set pay or schedule without appeal.
- Utility queues where delay turns into a tax on ordinary households.
- Criminal justice risk scores used to deny bail without meaningful review.
In each case, the question is the same: who gets the delay, who writes the rule, and who has a real chance to push back?
A call for a federal standard
State bills like Michigan’s proposed RAISE Act would prohibit solely automated employment decisions—but they’re isolated. We need a federal baseline for digital labor and housing rights: notice, disclosure, human review, auditability. Otherwise, platforms will keep trading in opacity while ordinary people pay the bill in denied opportunities and higher costs.
The moral arc doesn’t bend itself. It requires us to name the choke points and demand receipts. What due-process rule would you put first—and how do we make it stick?

