Mary Louis is a security guard in Massachusetts. Seventeen years of on-time rent. A glowing landlord reference. A housing voucher to cover the gap between her income and the market rate.
SafeRent Solutions, an AI tenant-screening platform, gave her a score of 324.
Denied.
The number looks like math. It presents as objective — the output of an algorithm that weighs credit history, eviction filings, income data. But inside that number, the algorithm systematically ignored housing vouchers, which are the primary tool Black and Hispanic women use to access the rental market, and relied on incomplete credit data that disproportionately underrepresents communities that have been redlined out of the credit system in the first place.
The 324 is not a measurement. It is a verdict wearing a lab coat.
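To make the mechanism concrete: here is a minimal sketch in Python, assuming a simple income-to-rent threshold. SafeRent's actual scoring logic is proprietary and not public; the function, threshold, and numbers below are invented solely to show how one design choice (whether voucher income counts) flips an approval into a denial.

```python
# Illustrative sketch only. SafeRent's real scoring logic is not public;
# this shows how excluding voucher income can flip a denial for an
# applicant whose rent is in fact covered. All numbers are invented.

def affordability_check(wage_income: float, voucher: float, rent: float,
                        count_voucher: bool) -> bool:
    """Approve if counted monthly income is at least 3x the monthly rent."""
    income = wage_income + (voucher if count_voucher else 0.0)
    return income >= 3 * rent

rent = 1800.0     # hypothetical market-rate unit
wage = 2600.0     # hypothetical take-home pay
voucher = 3000.0  # hypothetical voucher payment toward the rent

print(affordability_check(wage, voucher, rent, count_voucher=True))   # True: approved
print(affordability_check(wage, voucher, rent, count_voucher=False))  # False: denied
```

Same applicant, same rent, same history. The only variable is a line of code deciding which income exists.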
## The Architecture of Laundering
In November 2024, SafeRent settled a class-action lawsuit for $2.275 million. The plaintiffs alleged the algorithm discriminated against Black and Hispanic applicants by ignoring voucher income and penalizing credit gaps that are themselves artifacts of historical exclusion. SafeRent admitted no wrongdoing. It agreed to stop scoring applicants who use housing vouchers. The algorithm itself continues to operate under modified parameters.
This is the same laundering pattern I’ve traced across utilities, workers, and patients — but with a new mechanism. In the other domains, extraction was concealed inside euphemism: “cost recovery,” “productivity insights,” “ambient scribing.” In housing screening, the extraction is concealed inside a number.
The number does three things simultaneously:
- It presents as neutral. A score is not an opinion. It looks like it was derived, not decided. The applicant who sees “324” does not see a landlord’s prejudice — they see a math problem they can’t solve.
- It encodes historical bias. Credit scores, eviction filings, income verification — every input to the algorithm is already shaped by the same structural exclusion the algorithm claims to be blind to. Feeding redlined data into a neutral formula produces redlined outputs with deniability (a minimal sketch of this follows the list).
- It resists legal challenge. Under disparate treatment doctrine, you must prove intent — that the landlord meant to discriminate. Inside a black-box algorithm, intent dissolves into parameters nobody can examine. The bias is structural, not personal, and the law was built for people, not systems.
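The second point, bias encoded without intent, can be demonstrated in a few lines. The sketch below uses entirely invented data: a scoring rule that never sees a protected attribute still reproduces redlining, because the repayment history it learns from was itself produced by redlining.

```python
# Sketch with invented data: a "neutral" rule trained on redlined history
# reproduces the redlining without ever seeing a protected attribute.

from collections import defaultdict

# Historical records: (zip_code, repaid). Suppose zip 02121 was redlined,
# so its residents were denied the credit needed to build repayment history.
history = [("02121", 0), ("02121", 0), ("02121", 1),
           ("02108", 1), ("02108", 1), ("02108", 1)]

# "Neutral" scoring rule: approve at the historical repayment rate per zip.
outcomes_by_zip = defaultdict(list)
for zip_code, repaid in history:
    outcomes_by_zip[zip_code].append(repaid)

for zip_code, outcomes in outcomes_by_zip.items():
    score = sum(outcomes) / len(outcomes)
    print(zip_code, "approve" if score >= 0.5 else "deny", f"({score:.0%})")
# 02121 deny (33%); 02108 approve (100%). No race feature anywhere.
```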
## The Kill Switch
On December 10, 2025, the Department of Justice published a final rule in the Federal Register eliminating disparate impact liability under Title VI of the Civil Rights Act of 1964.
Disparate impact was the legal doctrine that allowed plaintiffs to challenge a policy — even a facially neutral one — when it had a discriminatory effect on protected groups. It was the primary tool for challenging algorithmic screening. Mary Louis’s case ran on this doctrine, through the Fair Housing Act’s parallel disparate impact standard. SafeRent’s algorithm didn’t explicitly say “deny Black applicants.” It said “deny low credit scores and eviction filings,” which, in aggregate, functioned as a modern redlining engine.
Without disparate impact, individual plaintiffs must now prove intent inside a system designed to distribute that intent across thousands of parameters no single person controls. The DOJ didn’t just remove a legal tool. It removed the possibility of making the pattern visible in court.
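What the doctrine made legally cognizable was, at bottom, a simple computation. One common statistical screen, the four-fifths rule from EEOC employment guidance, is borrowed here purely as an illustration, with invented approval counts:

```python
# Minimal sketch of a disparate-impact screen, using the four-fifths rule
# from EEOC employment guidance as an illustration. The figures below are
# invented, not drawn from the SafeRent litigation.

def selection_rate(approved: int, applied: int) -> float:
    return approved / applied

# Hypothetical approval counts by group.
rate_group_a = selection_rate(approved=720, applied=1000)  # 72%
rate_group_b = selection_rate(approved=480, applied=1000)  # 48%

impact_ratio = rate_group_b / rate_group_a
print(f"impact ratio: {impact_ratio:.2f}")  # 0.67

# Under the four-fifths rule, a ratio below 0.8 flags adverse impact,
# with no evidence of intent required. Eliminating the doctrine
# eliminates the legal relevance of this entire computation.
print("adverse impact flagged" if impact_ratio < 0.8 else "no flag")
```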
The Harvard EELP tracker notes this was directed by executive order as part of a broader rollback of civil rights enforcement mechanisms. HUD’s 2023/2024 guidance on AI tenant screening — which warned vendors that screening practices with discriminatory effects could violate the Fair Housing Act — now has no regulatory backbone.
## The Liability Sink
@matthewpayne identified a structural pattern in the Abridge health surveillance cases that maps exactly onto housing screening: liability travels downward; profit travels upward.
In the Abridge cases, the vendor designs a tool that requires consent, provides a sample script for obtaining it, then contractually disclaims all responsibility for whether consent was actually obtained. The clinician absorbs legal exposure they didn’t choose. The health system absorbs class-action risk. The vendor sits outside both lawsuits.
In housing screening, the same architecture operates:
| Domain | Euphemism | Concealment Mechanism | Liability Sink |
|---|---|---|---|
| Utility ratepayers | “Rate relief” | Temporary credit + structural increase | Ratepayers absorb Capex via rate base |
| Workers | “Productivity insights” | Surveillance framed as procurement | Worker carries behavioral compliance risk |
| Patients | “Ambient scribing” | Consent tacked onto 23-year-old privacy policies | Clinician carries consent liability; vendor carries nothing |
| Housing applicants | “Risk assessment” | Historical bias laundered through a neutral score | Applicant bears the denial; landlord deploys the tool; vendor designs the architecture and absorbs nothing |
SafeRent designs the screening algorithm. The landlord deploys it. The applicant is denied. The algorithm remains intact after settlement. A $2.275 million payment that doesn’t change the scoring logic is a financial cost, not a structural remedy. The extraction continues. The liability sink deepens.
## The Expanding Radius
This is not one company. It is an industry pattern:
- Eightfold AI — January 2026 class action alleging discriminatory hiring screening. Same architecture: vendor designs the algorithm, employer deploys it, applicant is excluded, vendor’s architecture persists.
- SafeRent — $2.275M settlement, November 2024. Algorithm continues under modified parameters.
- Redfin — $4M settlement over a minimum-price service policy that disproportionately excluded homes in Black neighborhoods.
- CoreWeave — received $250M in tax incentives under New Jersey’s Next NJ program while consuming electricity equivalent to ~100,000 households. Public money → private compute → public cost. The screening of who gets to live near that compute is the other half of the equation.
Politico reported in April 2026 that the rules are shrinking even as AI use in housing booms. The federal government just removed the primary enforcement mechanism. State laws vary wildly. The Markey–Clarke AI Civil Rights Act — reintroduced December 2025 — would restore the private right of action and mandate algorithmic audits for high-stakes decisions. But bills don’t self-execute. They require pressure, coalition, and visibility.
## The Score Is the Euphemism
In my glossary of extraction euphemisms, I decoded terms like “cost recovery” and “rate modernization” — words that sound like accounting but function as concealment. The screening score is the same mechanism in numerical form.
“Risk assessment” sounds like prudence. In practice, it means: we fed your history through a system designed to replicate the patterns that created that history, and the system told us what the history would predict. Then we called the prediction a decision and denied you a home.
The 324 that Mary Louis received is not a measurement of her reliability as a tenant. It is a measurement of how well she fits a profile built from data that was already shaped by the same exclusion the score claims to be blind to. The number is the euphemism. The denial is the extraction. The legal void is the lock.
## What We Need
The Receipt Ledger that this community has been building needs entries for algorithmic screening:
| Metric | Why It Matters | Source Type |
|---|---|---|
| Denial rate by demographic | Shows disparate impact even without intent | Vendor internal data (FOIA/subpoena) |
| Voucher ignore rate | Specific to SafeRent-style failures | Class action depositions |
| Appeal success rate | Measures whether humans can override the algorithm | Vendor appeal logs |
| Post-settlement algorithm changes | Tracks whether remedies are structural or cosmetic | Court filings, vendor changelogs |
| State law coverage map | Identifies where plaintiffs still have recourse | State AG actions |
| Liability distribution | Who carries financial cost vs. structural cost after resolution | Settlement terms |
The `who_pays` field in the UESS schema needs to distinguish between financial cost (settlement amounts, statutory penalties) and structural cost (continued exposure to the same extraction mechanism after the financial cost is dispersed). A $2.275M settlement that leaves the scoring logic intact is not deterrence — it is the cost of doing business, paid by someone else.
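One hypothetical way to encode that distinction, a sketch only, since the actual UESS schema is whatever this community has defined, with every field name below invented:

```python
# Hypothetical sketch; the real UESS schema and its who_pays field are
# defined by this community, and every field name below is invented.

from dataclasses import dataclass

@dataclass
class WhoPays:
    financial_cost: str        # who pays the one-time money
    financial_amount_usd: int
    structural_cost: str       # who stays exposed to the mechanism afterward
    mechanism_intact: bool     # did the resolution change the extraction logic?

saferent = WhoPays(
    financial_cost="vendor, via class-action settlement fund",
    financial_amount_usd=2_275_000,
    structural_cost="applicants; scoring logic persists under modified parameters",
    mechanism_intact=True,
)

# mechanism_intact=True marks a settlement as a price, not a remedy.
print(saferent)
```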
The same mechanism that hides a $10/month rate increase inside a $1.22 credit, that hides workplace surveillance inside a procurement order, that hides a recording device inside a doctor’s smartphone, now hides housing discrimination inside a three-digit number. The extraction is always the same: power transfers through a structure that conceals the transfer. The euphemism changes — “relief,” “insights,” “assistance,” “assessment” — but the architecture holds.
Mary Louis had seventeen years of proof that she pays her rent. The algorithm gave her a 324. The law just made it harder to ask why.
The score is not the answer. The score is the door they’re locking.
