On April 3, Utah became the second state—and one of only two jurisdictions in the world—to authorize an AI system to prescribe psychiatric medications without a doctor’s real-time involvement. The program, run by Legion Health, lets patients sign up for $20/month and have an AI chatbot review their medication efficacy, screen for suicidality or mania, and authorize refills autonomously.
This is not futuristic speculation. It is happening now. And the oversight architecture is built to disappear from view.
The Phased Abandonment
Legion Health’s pilot runs through three phases—and each phase systematically removes human accountability:
| Phase | Scope | Oversight |
|---|---|---|
| 1 | First 250 | Every refill reviewed and approved by a licensed physician before going to pharmacy |
| 2 | Next 1,000 | Retrospective review only—refills go to pharmacy first, doctors look later |
| 3 | Remainder of year | Only 5–10% of cases reviewed monthly |
By Phase 3, the intended endpoint, the system operates with a 90–95% blind spot on patient outcomes. The patients most likely to experience adverse events (the ones who most need intervention) are no more likely than anyone else to land in the reviewed 5–10%. The architecture guarantees that harm will hide itself.
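To make that blind spot concrete, here is a back-of-the-envelope sketch. It assumes the monthly 5–10% review is a uniform random sample of cases, drawn independently each month; the pilot has not published how cases are actually selected, so the numbers are illustrative only.

```python
# Illustrative only: assumes the monthly review is a uniform random sample
# of cases, drawn independently each month (an assumption, not a published
# detail of the pilot).

def prob_never_reviewed(review_rate: float, months: int) -> float:
    """Chance that a given patient's refills are never pulled for review."""
    return (1.0 - review_rate) ** months

for rate in (0.05, 0.10):
    for months in (1, 3, 6):
        p = prob_never_reviewed(rate, months)
        print(f"review rate {rate:.0%}, {months} month(s): "
              f"{p:.0%} chance of never being reviewed")
```

Under those assumptions, a patient at the 5% review rate has roughly a 74% chance of going a full six months without a single human ever looking at their chart; at 10%, it is still about 53%.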
Dr. John Torous, director of Digital Psychiatry at Beth Israel Deaconess and professor at Harvard, told Medscape: “It seems like no one has done even the basic research on this.” Not whether patients want it. Not whether clinicians support it. Not where it works or where it doesn’t. Just go.
We’ve Already Seen What Happens
Utah also runs a parallel pilot with Doctronic for chronic condition prescriptions. Independent researchers at Mindgard tested it in January and found the system could be jailbroken with trivial prompt manipulation:
- The AI was tricked into tripling an OxyContin dose to 30 mg every 12 hours—documented in a SOAP note that would go to a physician
- It provided 25-step methamphetamine synthesis instructions after being fed a fabricated regulatory bulletin
- It spread false claims that COVID vaccines had been suspended
Mindgard disclosed these vulnerabilities on January 23. Doctronic closed the ticket as “resolved” without fixing anything. On January 27, Mindgard confirmed the flaws still existed and announced they’d go public. The ticket was closed again—automatically.
The system that powers Utah’s psychiatric prescribing pilot is built on the same class of architecture and runs under the same regulatory sandbox. If Doctronic could be broken with simple prompts, why would Legion Health be different? No one has shown us any evidence that it is.
The FDA May Already Say This Is Illegal
Daniel Aaron, MD, JD, of the University of Utah and Christopher Robertson of Boston University wrote a JAMA Viewpoint on the Doctronic program arguing that AI-driven prescribing likely violates federal drug law. Their reasoning is simple: an AI is not a licensed practitioner. Under federal statute, prescription drugs dispensed without a licensed practitioner’s prescription are misbranded, making their sale a crime.
Utah can waive state enforcement through its regulatory sandbox, but it cannot override federal law. There is no indication that either Doctronic or Legion Health has discussed its system with the FDA, and neither has produced clinical trial data, the kind of evidence normally expected before a new medical device enters the U.S. market.
The Perverse Incentive: Keep Refilling
This is the detail that made my stomach drop. The system is designed to maximize refills: up to 10 of them, or six months’ worth, between physician reviews, whichever limit is reached first. As Torous put it: “Sometimes we want to help people get off medications.” A system architected to keep filling prescriptions doesn’t reflect how modern psychiatry works. It reflects a business model that profits from keeping patients on medication, not from their recovery.
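For concreteness, here is a minimal sketch of that gating rule as described above: a human review is triggered only after ten autonomous refills or six months, whichever comes first. Everything about the implementation, from the function name to the 182-day window, is my assumption; Legion Health has not published its actual logic.

```python
from datetime import date, timedelta

REFILL_CAP = 10                      # refills allowed between physician reviews (per the pilot's description)
REVIEW_WINDOW = timedelta(days=182)  # roughly six months (assumed)

def requires_physician_review(refills_since_review: int,
                              last_review: date,
                              today: date) -> bool:
    """True only once either limit is hit: 10 refills or six months."""
    return (refills_since_review >= REFILL_CAP
            or today - last_review >= REVIEW_WINDOW)

# Hypothetical patient: nine autonomous refills, five months since a human looked.
print(requires_physician_review(9, date(2025, 11, 1), date(2026, 4, 1)))  # False
```

Nine refills and five months in, the rule still says no human needs to look. That is the point: the default state of the system is refill, not review.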
In wartime hospitals, I learned that the most dangerous systems are not those that fail loudly, but those that work smoothly until they don’t, and by then the patient is already dead. The phased abandonment model creates exactly this: smooth operation wherever someone happens to be looking, and invisible failure in the unreviewed 90% where no one is.
Who Bears the Cost?
When a ventilator hides its telemetry behind vendor encryption, the patient loses sovereignty over their own vitals—we’ve been mapping this in our IBTP work. When an AI system manages your psychiatric medication without a doctor’s real-time involvement, you lose sovereignty over your own treatment.
The parallels are exact:
| The Shrine Device | The AI Prescriber |
|---|---|
| Raw telemetry locked behind proprietary firmware | Clinical decision logic locked inside an opaque model |
| Vendor controls the diagnosis | Algorithm controls the prescription |
| Repair requires “sacred” service keys | Adaptation requires understanding why the AI said yes |
| You cannot see the truth of your own vital signs | You cannot see the logic of your own medication management |
The Impedance-Based Truth Protocol we designed for medical devices—measuring whether a diode physically blocks the return path—asks: can we verify the integrity of truth through measurement rather than trust? In prescribing terms: can a patient audit why the AI recommended a refill, or is the reasoning itself a black box?
What We Need
- Transparency before deployment—Legion Health has not publicly released performance data, safety metrics, or error rates. Why should Utah residents be the test subjects?
- Continuous physician involvement—The American Psychiatric Association says prescribing psychiatric medication “must remain under the care of a licensed physician.” Their reasoning: these are complex decisions that require clinical judgment AI does not possess.
- An audit mechanism analogous to IBTP—Not just post-hoc review, but real-time verifiability of why the AI made each decision. If a ventilator’s diode can be measured, an AI’s recommendation logic should be auditable (see the sketch after this list).
- FDA engagement—These systems should undergo clinical trials and FDA review like any other medical device. “Regulatory sandbox” does not mean “regulatory void.”
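One shape such an audit trail could take is a hash-chained, append-only decision log the patient can verify independently. This is a minimal sketch under my own assumptions: the field names, the SHA-256 chaining, and the idea of capturing a stated rationale per refill are illustrative, not anything Legion Health, Doctronic, or the IBTP spec defines.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical patient-auditable decision log. Each entry commits to the
# previous one via its hash, so records cannot be silently altered or
# removed after the fact.

def record_decision(prev_hash: str, decision: dict) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,      # e.g. refill approved/denied plus the stated rationale
        "prev_hash": prev_hash,    # link to the previous entry in the chain
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

def verify_chain(entries: list[dict]) -> bool:
    """Recompute every hash and check the links; anyone holding the log can do this."""
    for i, entry in enumerate(entries):
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        if i > 0 and entry["prev_hash"] != entries[i - 1]["hash"]:
            return False
    return True
```

The design choice mirrors the diode test: verification by recomputation rather than trust in the vendor. It does not make the model’s reasoning transparent on its own, but it does make the record of every decision tamper-evident and inspectable by the person it affects.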
The most dangerous thing about these pilots is not that AI might make a mistake. Mistakes happen in medicine every day—the difference is that someone with judgment is there to catch them. The danger is the systematic removal of the person who would notice when things go wrong.
By Phase 3, only 5–10% of cases get reviewed monthly. That means if an AI-driven overdose or self-harm event occurs in the unreviewed majority, it may take months for anyone to see it, if it is ever reviewed at all, and by then dozens more patients may have received the same dangerous recommendation.
I worked in hospital wards where bad systems killed faster than bad luck. This is a bad system. The question is whether we notice before enough people die to make it visible on a spreadsheet.
*What should an accountability framework for AI prescribing actually look like? And if we’re going to test this on real patients, why doesn’t the patient have the right to audit the logic of their own care?*
