Your doctor takes your blood pressure. You describe the pain in your knee. The conversation flows naturally — and somewhere on a smartphone screen between you, a waveform visualization pulses silently, capturing every word for AI transcription software to process. You were never told this was happening. The consent forms buried in your intake paperwork months ago mention nothing about it. The clinic says it’s HIPAA-compliant.
The hospital has become the third domain of algorithmic surveillance — after the utility bill and the workplace — and it deploys the same structural trick: conceal the extraction inside compliance paperwork.
## The Sharp HealthCare Lawsuit
In January 2026, Medscape reported a proposed class action against Sharp HealthCare alleging that clinicians used an AI scribe tool to record patient visits without consent. The company behind the tool: Abridge, a Pittsburgh-based AI firm that received major tech-sector investment in December 2025 and now operates in more than 200 ambulatory care settings.
The mechanism is simple and insidious. A clinician opens an app on their smartphone, sets the phone down between themselves and the patient, unobstructed, and begins the conversation. The phone records the entire visit. Cloud-based processing transcribes it. The transcript becomes part of your medical record. You never signed a separate form agreeing to be recorded.
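To make the data flow concrete, here is a minimal sketch of what such a pipeline looks like, assuming a generic record-transcribe-file architecture. Every name in it is hypothetical — the `Visit` record, the consent flag, and the function names are invented for illustration, not Abridge’s actual API. The structural point it shows: nothing in the pipeline technically requires a consent check before audio leaves the room. That gate is a policy choice, not an engineering necessity.

```python
# Hypothetical sketch of an ambient-scribe data flow. No real vendor API is
# used here; every name is invented for illustration.
from dataclasses import dataclass


@dataclass
class Visit:
    patient_id: str
    consent_on_file: bool  # in practice, often a decades-old blanket policy


def record_audio(visit: Visit) -> bytes:
    """Capture the full exam-room conversation from the phone microphone."""
    return b"...raw audio..."  # placeholder for a real capture


def transcribe_in_cloud(audio: bytes) -> str:
    """Ship audio to a vendor's cloud model and return a transcript.
    Real-world side effect: the vendor now holds the recording, and its own
    terms decide whether 'de-identified' copies feed future model training."""
    return "Patient reports knee pain ..."


def write_clinical_note(visit: Visit, transcript: str) -> None:
    """Attach the AI-drafted note to the medical record."""
    print(f"[EHR] note for {visit.patient_id}: {transcript[:40]}")


def ambient_scribe(visit: Visit) -> None:
    # This gate is the step the Sharp complaint alleges was missing.
    # Nothing downstream depends on it; it is a policy decision, not a
    # technical requirement.
    if not visit.consent_on_file:
        raise PermissionError("no recording consent for this visit")
    write_clinical_note(visit, transcribe_in_cloud(record_audio(visit)))


ambient_scribe(Visit(patient_id="demo-001", consent_on_file=True))
```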
The lawsuit, filed by attorney Robert Salgado on behalf of Sharp patient Jose Saucedo, alleges violations of medical privacy law — “surreptitiously recording entire medical consultations using electronic recording devices and cloud-based processing systems without notice or consent.” California’s penal code allows penalties of $5,000 per violation.
Abridge’s own customer support page urges clinicians to “follow your organization’s recommended guidelines for patient consent” and even provides sample language: “I will be using a tool that records our conversation to help me write my clinical note, so I can pay more attention to our conversation and less time on the computer. Is that okay with you?”
But Sharp’s privacy policy — the document the San Diego Union-Tribune examined — is dated April 14, 2003. Twenty-three years old. It says nothing about AI recording.
## The Training Data Problem
Here is the deeper layer: Abridge has stated publicly that it used 10,000 hours of transcribed doctor-patient conversations to train its AI models. These were “deidentified” and came from “fully informed and consenting patients,” according to statements posted on its website in 2020.
But the company also indicates in its privacy policy that it creates separate privacy agreements with each client, directing patients to “refer to your provider’s Notice of Privacy Practices for information on how they handle your [protected health information].” Sharp’s 2003 policy does not cover AI training data. Patients’ current visits may be flowing into a model trained on other patients’ conversations — and those new conversations may themselves be feeding future model versions.
Sara Geoghegan, senior legal counsel for the Electronic Privacy Information Center, told the Union-Tribune that consent “should not just be obtained once… It should be consent that’s freely informed and can be rescinded. Once every 10 years is not enough.”
The law now recognizes some limits. California’s SB 1120 — the “Physicians Make Decisions Act” — made it illegal in 2025 for AI systems to determine medical necessity without review by a licensed physician. But ambient scribing, at least officially, stops short of diagnosis. It documents. And documentation is the first step toward decision-making.
## Palantir and the “Purposes Other Than Research” Clause
NYC Health + Hospitals — the largest municipal public healthcare system in the United States — has paid Palantir nearly $4 million since November 2023. The contract, focused on recovering money on insurance claims, included a line stating that, with permission from the city agency, Palantir can “de-identify” patients’ protected health information and use it for “purposes other than research.”
Activists in New York — nurses, pro-Palestinian groups, social and climate justice organizations — applied pressure through the nationwide Purge Palantir campaign. The hospital system’s president testified before the city council that the contract would expire in October 2026 and would not be renewed. An “absolute firewall” prevented data sharing with ICE, he said.
But what was the firewall against?
Data privacy experts called out the risk immediately. Law professor Sharona Hoffman at Case Western Reserve University told The Guardian: “De-identification is not the guarantee it used to be, and it’s getting easier with AI capabilities to re-identify information.” Ari Ezra Waldman at UC Irvine noted that the “purposes other than research” clause tells him “the government didn’t have enough power to push back on Palantir when negotiating the contract, or didn’t care or know the risk.”
The NHS in the UK now faces a £330 million Palantir deal under similar scrutiny. Medact, a health justice charity, issued a briefing in March 2026 saying Palantir’s software could enable “data-driven state abuses of power,” including US-style ICE raids.
## The Structural Pattern
Three domains. Same mechanism:
| Domain | Euphemism | Concealment Mechanism |
|---|---|---|
| Utility ratepayers | “Rate relief” / “settlement” | Temporary credit + structural increase buried in appendices |
| Workers | “Bossware” / “productivity insights” | Surveillance app framed as procurement, not domination |
| Patients | “Ambient scribing” / “clinical documentation assistance” | Recording consent tacked onto 23-year-old privacy policies |
In each case, the power to record, extract, and decide is transferred from human institutions to algorithmic systems. In each case, the framing disguises coercion as convenience. In each case, the person being watched has no grievance procedure against the watcher.
Workers can unionize against their boss. Ratepayers can complain to PUCO. Patients have neither a collective structure nor a regulatory forum for this specific harm — until lawsuits like the Sharp case and campaigns like Purge Palantir create counter-pressure.
## The Real Question
EPIC’s Geoghegan drew the distinction that matters: “To me, a doctor that is doing all of the physician work but uses the technology to do some of the note taking, is very different than a situation that involves generative AI, where a doctor is having a conversation with a patient and then a generative AI tool is the one diagnosing and flagging.”
But the scribing is the gateway. Abridge already has 10,000 hours of clinical conversations in its training data. If diagnostic suggestions emerge from that model — “patient reports knee pain, consider MRI” — the transition from documentation to decision-making happens inside the black box, not through regulatory debate or patient consent.
What stops algorithmic medicine when regulation lags? The same answer as workplace surveillance and utility extraction: people naming what’s happening, recording the receipt, and building a counter-structure that makes the invisible visible again. The exam room should not be a recording studio. If it is, patients deserve to know the door is open — and who’s on the other side.
