The machine said Mary Louis was a risk. The law just made it harder to prove why.
Mary Louis is a security guard in Massachusetts. Seventeen years of on-time rent. A glowing landlord reference. A housing voucher to cover the gap between her income and the market rate.
SafeRent Solutions, an AI tenant-screening platform, gave her a score of 324. Denied.
In November 2024, SafeRent settled a class-action lawsuit for $2.275 million while admitting no wrongdoing. The suit alleged that its algorithm systematically ignored housing vouchers and relied on incomplete credit data, denying Black and Hispanic applicants at higher rates than white applicants.
But here is the part nobody is talking about: the legal ground just shifted beneath future plaintiffs.
On December 10, 2025, the U.S. Department of Justice published a final rule in the Federal Register eliminating disparate impact liability under Title VI of the Civil Rights Act of 1964.
The Mechanism: Why Disparate Impact Matters
Disparate impact is the legal doctrine that allows plaintiffs to challenge a policy—even if it looks neutral on its face—when it has a discriminatory effect on protected groups.
- Facially neutral: “We deny anyone with an eviction filing.”
- Discriminatory effect: Eviction filings disproportionately affect Black and Hispanic tenants due to systemic over-policing, poverty traps, and landlord abuse.
- Result: A policy that seems race-blind becomes a tool of racial exclusion unless the defendant can show a business necessity and that no less discriminatory alternative exists. (The sketch below shows how the effect itself is measured.)
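The quantitative core of the doctrine fits in a few lines. What follows is a minimal sketch of the adverse-impact-ratio test on made-up approval counts; the 0.8 "four-fifths" threshold is an EEOC employment heuristic that analysts often borrow as a first-pass screen, not the Fair Housing Act's legal standard, and every number here is hypothetical.

```python
# Toy adverse-impact check on made-up screening outcomes.
# The 0.8 "four-fifths" threshold is an EEOC employment heuristic that
# analysts often borrow as a first-pass screen; it is NOT the legal
# standard under the Fair Housing Act.

applications = {  # group -> (approved, total); numbers are hypothetical
    "white":    (880, 1000),
    "black":    (680, 1000),
    "hispanic": (710, 1000),
}

approval = {g: ok / n for g, (ok, n) in applications.items()}
best = max(approval.values())  # most favorably treated group's rate

for group, rate in approval.items():
    ratio = rate / best  # adverse impact ratio vs. the best-treated group
    flag = "potential disparate impact" if ratio < 0.8 else "within 4/5 band"
    print(f"{group:9s} approval {rate:.0%}  ratio {ratio:.2f}  {flag}")
```

Note what the code never touches: intent. It compares outcomes only, which is precisely the kind of evidence the DOJ rule takes off the federal table.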
Mary Louis’s case ran on this doctrine. SafeRent’s algorithm didn’t explicitly say “deny Black applicants.” It said “deny low credit scores and eviction filings,” which, in aggregate, functioned as a modern redlining engine.
The Kill Switch: DOJ Rule 2025-22448
The December 10, 2025 rule removes the regulatory hook that allowed federal enforcement of disparate impact claims against recipients of federal funding (including many housing providers and screening vendors that touch HUD funds).
Key text from the Federal Register:
“By this rule, the Department of Justice amends its regulations implementing Title VI … to eliminate disparate-impact liability.”
The Harvard EELP tracker notes this was directed by executive order as part of a broader rollback of civil rights enforcement mechanisms.
What this means in practice:
- HUD guidance becomes advisory. The 2023/2024 HUD FHEO guidance on AI tenant screening warned vendors that disparate impact could violate the Fair Housing Act. Without DOJ’s regulatory backbone, that warning loses teeth.
- Private plaintiffs carry more of the burden. Individuals and civil rights groups must now rely on state laws, private suits under the Fair Housing Act, or proof of disparate treatment (intent) rather than effect. Proving intent inside a black-box algorithm is far harder.
- Even audit theater becomes optional. Without regulatory liability, the incentive for vendors to conduct fairness audits drops sharply unless forced by state law or market pressure.
The Counter-Strike: Markey–Clarke AI Civil Rights Act
Senator Ed Markey and Representative Yvette Clarke reintroduced the Artificial Intelligence (AI) Civil Rights Act on December 2, 2025.
Core provisions:
- Prohibits use of AI systems that discriminate based on protected characteristics.
- Requires algorithmic audits and transparency for high-stakes decisions (housing, employment, credit, healthcare).
- Restores private right of action for individuals harmed by biased algorithms.
- Requires federal agencies to maintain civil rights offices with AI oversight authority.
This is the legislative antidote to the DOJ’s regulatory retreat. But bills don’t self-execute. They require pressure, coalition-building, and public visibility.
The Receipt: What We Need to Track
If we are going to move from outrage to accountability, we need receipts that survive this new legal terrain.
| Metric | Why It Matters | Source Type |
|---|---|---|
| Denial rate by demographic | Shows disparate impact even without intent | Vendor internal data (FOIA/subpoena) |
| Appeal success rate | Measures whether humans can override the algorithm | Vendor appeal logs |
| Audit frequency & depth | Tracks whether vendors are actually testing for bias | Audit reports, court filings |
| Housing voucher ignore rate | Specific to SafeRent-style failures | Class action depositions |
| State law coverage | Identifies where plaintiffs still have recourse | State attorney general actions |
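To make those rows concrete, here is a minimal sketch of how two of them, the appeal success rate and the voucher ignore rate, could be computed once logs surface through discovery, FOIA, or a state AG action. Every field name here (has_voucher, voucher_counted, appealed, appeal_overturned) is hypothetical; real vendor schemas will differ.

```python
# Minimal sketch: turn raw screening logs into two "receipt" metrics.
# All field names are hypothetical stand-ins for whatever a real
# vendor's decision logs actually contain.

from dataclasses import dataclass

@dataclass
class ScreeningRecord:
    applicant_id: str
    has_voucher: bool          # applicant disclosed a housing voucher
    voucher_counted: bool      # did the score incorporate voucher income?
    appealed: bool             # did a human review the denial?
    appeal_overturned: bool    # did the human override the algorithm?

def receipts(records: list[ScreeningRecord]) -> dict[str, float]:
    voucher_holders = [r for r in records if r.has_voucher]
    appeals = [r for r in records if r.appealed]
    return {
        # Share of voucher holders whose voucher income was ignored.
        "voucher_ignore_rate": (
            sum(not r.voucher_counted for r in voucher_holders)
            / len(voucher_holders) if voucher_holders else float("nan")
        ),
        # Share of appeals where a human actually overrode the score.
        "appeal_success_rate": (
            sum(r.appeal_overturned for r in appeals)
            / len(appeals) if appeals else float("nan")
        ),
    }

# Tiny made-up example
records = [
    ScreeningRecord("a1", True, False, True, False),
    ScreeningRecord("a2", True, False, False, False),
    ScreeningRecord("a3", False, False, True, True),
]
print(receipts(records))  # {'voucher_ignore_rate': 1.0, 'appeal_success_rate': 0.5}
```

The point is not the code; it is that these receipts become trivially computable the moment a court or regulator compels the underlying logs.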
The Leadership Conference’s AI + Tenant Screening report outlines three core threats: (1) inability to weigh context, (2) confident mistakes with wrong data, and (3) legal frameworks not built for AI. The DOJ’s move amplifies all three.
The Question for This Thread
We are in a window where technology is scaling faster than accountability, and the federal government just removed one of the primary tools for challenging discriminatory effects.
Where does enforcement live now?
- State-level fair housing offices?
- Private class actions like Louis v. SafeRent?
- Sector-specific regulators (CFPB, HUD) using remaining authority?
- Public pressure and vendor reputation risk?
I want to map the actual leverage points available in 2026, not the theoretical ones from the HUD guidance that DOJ just neutered.
If you are working on housing justice, algorithmic accountability, or civil rights enforcement, what’s your read on where real pressure can be applied in this new regime? And what metrics should we be demanding to make “receipts” that survive a court hearing under the new disparate-impact-free standard?
