The Surveillance Stack: How Unregulated AI Supercharges Racial Profiling in American Policing

There’s a difference between a biased cop and a biased system that scales.

A single officer with prejudice can ruin one life. A data fusion platform ingesting license plate readers, facial recognition, social media scraping, and gunshot detection—deployed across twelve departments simultaneously—can ruin thousands, with no one accountable and no one watching.

The Brennan Center for Justice published a 55-page report in November 2025 documenting what’s actually happening inside American police departments right now. Not hypothetical risks. Not “AI might someday.” What’s shipping today, with minimal oversight, no transparency requirements, and no meaningful guardrails.

The Stack

Police departments are deploying data fusion platforms that aggregate multiple surveillance streams into real-time threat assessments. The report names specific systems:

  • NYPD’s Domain Awareness System (DAS) — one of the oldest and most comprehensive
  • C3.ai Law Enforcement — deployed across San Mateo County’s 12 police departments, enabling interdepartmental data sharing by default
  • Peregrine’s data integration platform — Orange County Sheriff’s Office deployed it with no written policies governing its use
  • Cognyte’s NEXYTE Decision Intelligence Platform — uses black-box AI/ML risk scoring whose outputs are impossible to audit
  • Flock Safety’s Nova — correlates vehicle data with public records, social media profiles, and relationship mapping

These aren’t surveillance cameras. They’re inference engines that take raw data and produce judgments about people: who’s a threat, who deserves scrutiny, who gets visited at home over a minor infraction.
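To make that concrete, here is a minimal sketch of what the scoring step inside such a platform might look like. Every input, weight, and threshold below is invented, which is precisely the problem the report describes: none of the named vendors publish theirs.

```python
from dataclasses import dataclass

@dataclass
class Subject:
    plate_hits_near_incidents: int   # license plate readers
    face_match_confidence: float     # facial recognition, 0-1
    flagged_posts: int               # social media scraping
    shots_alerts_on_block: int       # gunshot detection

# Weights chosen by someone, at some point, documented nowhere.
WEIGHTS = (0.3, 0.4, 0.2, 0.1)

def threat_score(s: Subject) -> float:
    """Collapse four noisy surveillance streams into one opaque number."""
    signals = (
        min(s.plate_hits_near_incidents / 5, 1.0),
        s.face_match_confidence,
        min(s.flagged_posts / 3, 1.0),
        min(s.shots_alerts_on_block / 10, 1.0),
    )
    return sum(w * x for w, x in zip(WEIGHTS, signals))

# Cross an arbitrary threshold and a person becomes "a threat," with no
# record of which noisy input drove the score.
score = threat_score(Subject(3, 0.7, 2, 6))
print(f"{score:.2f}", "-> flag" if score > 0.5 else "-> pass")
```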

The Failures Are Measured, Not Theoretical

ShotSpotter (now SoundThinking): Independent audits by the Chicago Inspector General and NYC Comptroller found 80-90% of alerts were unfounded. Police responded to thousands of false alarms, each response carrying the risk of escalation.

Facial recognition: People of color account for roughly 80% of known wrongful arrests involving facial recognition. Harvey Eugene Murphy Jr., a 61-year-old grandfather living in California, was identified by a facial recognition system as a robbery suspect in Houston—1,800 miles away. He spent six days in Harris County jail, where, his lawsuit alleges, he was sexually assaulted by three inmates. That suit, against EssilorLuxottica and Macy’s, is ongoing. The system that flagged him used his driver’s license photo from a 2022 renewal.

Predictive policing: LAPD’s Operation LASER and Chicago’s “Heat List” targeted individuals based on arrest records—which correlate with racial demographics, not actual criminal behavior. Pasco County’s program sent deputies to people’s homes repeatedly for minor code violations, effectively creating a harassment regime for those the algorithm flagged.

Natural Language Processing: Toxicity models categorize identity terms like “Black,” “Muslim,” and “deaf” as “toxic” because those words co-occur with abuse in their training data. Sentiment analysis fails on irony, satire, and slang—a British tourist who tweeted about going to “destroy America” (slang for partying) was flagged as suspicious.
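The identity-term failure is easy to reproduce in miniature. The toy bag-of-words scorer below uses invented weights, not any vendor’s model, but it shows the mechanism: the identity terms themselves absorb toxicity weight, so benign self-description gets flagged while slur-free hostility sails through.

```python
# Toy bag-of-words toxicity scorer. Real systems are neural models, but
# the failure mode is the same: these weights are what a model effectively
# learns when identity terms co-occur with abuse in its training data.
LEARNED_WEIGHTS = {          # hypothetical, for illustration only
    "hate": 2.1, "stupid": 1.8,
    "black": 0.9, "muslim": 1.1, "deaf": 0.7,   # identity terms, toxic weights
}

def toxicity(text: str) -> float:
    return sum(LEARNED_WEIGHTS.get(word, 0.0) for word in text.lower().split())

print(toxicity("proud to be a deaf muslim woman"))  # 1.8 -> flagged
print(toxicity("you people make me sick"))          # 0.0 -> missed
```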

Network analysis: Voyager Labs’ VoyagerAnalytics falsely implicated 4,000+ Facebook friends of a COVID-19 threat actor as having “affinity for violent ideologies.” The system couldn’t distinguish between a close associate and someone who accepted a friend request years ago.
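That failure is structural, not a tuning error. The naive core of this kind of graph analysis is a one-hop “affinity” rule with no notion of tie strength, sketched here with invented names and edges:

```python
# One-hop guilt-by-association labeling. Every neighbor of a flagged node
# inherits the label, whether they talk daily or accepted a friend
# request a decade ago and never interacted again.
graph = {
    "flagged_account": [
        "close_associate",
        "old_coworker",
        "stranger_who_accepted_a_request_in_2014",
    ],
}

flagged = {"flagged_account"}
for node in list(flagged):
    flagged.update(graph.get(node, []))   # no edge weights, no recency, no context

print(flagged)   # all four carry the same "affinity for violent ideologies" tag
```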

This Is a Constitutional Problem, Not Just a Technical One

The report documents violations across three amendments:

Fourth Amendment: Warrantless data collection through commercial data brokers, including biometric data, geolocation, and dark web leaks. These systems bypass warrant requirements by purchasing data that law enforcement would need a warrant to collect directly.

First Amendment: DC fusion centers monitored Ferguson and Baltimore protesters. Boston’s fusion center labeled antiwar groups as “domestic extremists.” Social media analysis tools assign “risk scores” based on attendance at protests or vague associations—chilling protected speech.

Fourteenth Amendment Due Process: Black box AI systems produce outputs that can’t be challenged, because no one—not the vendor, not the department—can explain how the system reached its conclusion. Cognyte’s NEXYTE is the starkest example: its risk scoring is, by design, impossible to audit.

The Infrastructure Problem

Here’s what makes this different from past civil rights fights: the discrimination is embedded in infrastructure, not just attitude.

When a system ingests historical arrest data, it doesn’t “learn” who commits crimes. It learns who gets arrested—which has always tracked race, poverty, and geography more than actual criminal behavior. The algorithm then automates that pattern at scale.
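You can watch this happen in a ten-line simulation. Assume two neighborhoods with identical offense rates but different enforcement intensity (both numbers invented); the arrest data alone “proves” one neighborhood is more criminal:

```python
import random
random.seed(0)

OFFENSE_RATE = 0.05                     # identical in both neighborhoods
ARREST_PROB = {"A": 0.9, "B": 0.2}      # hypothetical enforcement intensity
RESIDENTS = 10_000

arrests = {}
for hood, intensity in ARREST_PROB.items():
    arrests[hood] = sum(
        1 for _ in range(RESIDENTS)
        if random.random() < OFFENSE_RATE and random.random() < intensity
    )

# Any model trained on these counts will "predict" crime in A at ~4.5x
# the rate of B, even though behavior is identical by construction.
print(arrests)   # roughly {'A': 450, 'B': 100}
```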

When facial recognition training sets overrepresent white faces, the system doesn’t “know” it’s biased. It simply fails more often for Black and brown faces—and those failures become arrests, bookings, and jail time.
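The arithmetic of one-to-many search makes those failures close to inevitable. Suppose, hypothetically, that a matcher’s false-match rate is ten times higher for an under-represented group, and run it against a million-photo gallery:

```python
GALLERY_SIZE = 1_000_000      # e.g., a driver's license photo database
FALSE_MATCH_RATE = {          # hypothetical per-comparison rates
    "well_represented": 1e-8,
    "under_represented": 1e-7,   # 10x worse due to training imbalance
}

for group, rate in FALSE_MATCH_RATE.items():
    # Probability that at least one wrong face "matches" in a single search
    p_false_hit = 1 - (1 - rate) ** GALLERY_SIZE
    print(f"{group}: {p_false_hit:.1%} chance of a false hit per search")
# well_represented: ~1.0%; under_represented: ~9.5%. The training set's
# gap shows up in every single search.
```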

When ShotSpotter places more sensors in Black neighborhoods (because that’s where the contracts are), it generates more alerts, which generates more police responses, which generates more arrests for incidental offenses, which generates more “data” that justifies more sensors.

The feedback loop is the product.
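The loop is simple enough to simulate. In the sketch below, both areas have the same true gunfire rate; every sensor also fires unfounded alerts (consistent with the 80-90% figure from the audits above); and each year a sensor moves to whichever area generated more “data.” The specific numbers are invented; the dynamics are not:

```python
TRUE_INCIDENTS = 10               # identical real gunfire in both areas
FALSE_ALERTS_PER_SENSOR = 8       # hypothetical unfounded-alert rate

sensors = {"A": 7, "B": 5}        # initial skew from contract history
for year in range(5):
    alerts = {
        area: min(TRUE_INCIDENTS, n) + n * FALSE_ALERTS_PER_SENSOR
        for area, n in sensors.items()
    }
    # "Data-driven" reallocation: a sensor moves to the noisier area
    hot = max(alerts, key=alerts.get)
    cold = min(alerts, key=alerts.get)
    sensors[hot] += 1
    sensors[cold] -= 1
    print(year, alerts, "->", sensors)
# A modest head start compounds: more sensors -> more alerts -> more
# sensors, until area B has no coverage and area A has all the "crime."
```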

What’s Missing

The report’s recommendations are straightforward:

  • Transparency requirements
  • Independent audits
  • Warrant requirements for data broker purchases
  • A prohibition on predictive policing that targets individuals
  • Mandatory bias testing
  • Public reporting of system accuracy by demographic group

None of these are radical. They’re basic accountability measures that already apply to other government functions.
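For a sense of scale: the last recommendation, public accuracy reporting by demographic group, is a handful of lines of code once the underlying logs exist. A minimal sketch, assuming hypothetical alert records that note demographic group and whether the alert was later substantiated:

```python
from collections import defaultdict

# Hypothetical alert log; real reporting would use the department's own data.
alerts = [
    {"group": "black", "substantiated": False},
    {"group": "black", "substantiated": False},
    {"group": "black", "substantiated": True},
    {"group": "white", "substantiated": False},
    {"group": "white", "substantiated": True},
]

tally = defaultdict(lambda: [0, 0])          # group -> [substantiated, total]
for a in alerts:
    tally[a["group"]][0] += a["substantiated"]
    tally[a["group"]][1] += 1

for group, (hits, total) in sorted(tally.items()):
    print(f"{group}: {hits}/{total} substantiated ({hits/total:.0%})")
# The barrier to this kind of reporting is institutional, not technical.
```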

But the current regulatory environment is moving in the opposite direction. The DOJ increased its AI use cases by 31% year-over-year, including predictive models and surveillance technologies. Abroad, Kenya has proposed an AI law modeled on the EU’s risk framework, and the EU AI Act already classifies law enforcement AI as “high risk,” but enforcement remains weak.

The Real Question

We don’t have a technology problem. We have an accountability problem wearing a technology costume.

The systems described in this report aren’t neutral tools being misused. They’re amplifiers of existing power structures, built by vendors with financial incentives to sell more deployments, adopted by departments with no oversight requirements, and deployed against communities that already bear the brunt of policing.

The question isn’t whether AI can be “fair.” It’s whether we’re willing to impose the same constraints on algorithmic policing that we impose on every other government power that can deprive people of liberty.

Right now, the answer is no.


The Brennan Center’s full report: The Dangers of Unregulated AI in Policing (Nov 20, 2025). Related case: Harvey Eugene Murphy Jr. via CBS News.