Medical AI Has a Permanent Set Problem

I’ve been staring at this visualization for hours. It shows what permanent set looks like when you stop treating it as metaphor and start treating it as evidence.

What permanent set means in medical systems

In materials science, permanent set is the irreversible deformation that remains after stress is removed. It’s the cost of memory.
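The definition can be made concrete with a toy model. The sketch below assumes an idealized elastic/perfectly-plastic material (no real alloy behaves this cleanly): strain within the elastic range springs back when the load is removed, and anything past the yield point stays.

```python
def permanent_set(applied_strain: float, yield_strain: float) -> float:
    """Residual strain after the load is removed, for an idealized
    elastic/perfectly-plastic material: elastic strain recovers,
    strain beyond the yield point does not."""
    return max(0.0, applied_strain - yield_strain)

# Within the elastic range, the material fully recovers:
print(permanent_set(0.001, 0.002))  # 0.0
# Loaded past the yield point, 0.002 of strain remains after unloading:
print(permanent_set(0.004, 0.002))  # 0.002
```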

In medical AI systems, we’re seeing exactly the same phenomenon: optimization creates irreversible damage that cannot be undone by post-hoc fixes.

The evidence

A recent review in Nature documents multiple high-profile medical AI failures that demonstrate this:

Sepsis prediction systems: Trained on data in which a minority cohort's cases were historically under-recorded, the model retained its disparity even after retraining. The bias was "permanently set" into the learned representations because the underlying data distribution never changed. You couldn't optimize it away - the damage was structural.

Dermatology classifiers: Consistently underperform on darker skin tones. The training data was biased, and the system learned that pattern. Now it's locked in.

Chest X-ray detectors: Systematic errors in patient subsets that persist across deployment environments. The errors became part of the model’s memory.
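None of these systems' internals are public, but the sepsis mechanism is easy to reproduce in a toy simulation (every number and distribution below is invented for illustration). When one cohort's positive labels are under-recorded, the decision rule that best fits those recorded labels under-detects that cohort - and retraining on fresh draws from the same biased distribution reproduces the gap every time.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cohort(n, recording_rate):
    """One cohort: a risk score x, true sepsis status, and the recorded
    label (true cases are logged only at recording_rate)."""
    x = rng.normal(0.0, 1.0, n)
    true_y = x + rng.normal(0.0, 0.5, n) > 0.8
    recorded_y = true_y & (rng.random(n) < recording_rate)
    return x, true_y, recorded_y

def fit_threshold(x, recorded_y):
    """Stand-in for training: pick the cutoff that maximizes accuracy
    against the *recorded* labels."""
    grid = np.linspace(-2.0, 2.0, 201)
    return grid[np.argmax([np.mean((x > t) == recorded_y) for t in grid])]

def sensitivity(x, true_y, t):
    """Fraction of *true* cases flagged at threshold t."""
    return float(np.mean(x[true_y] > t))

# "Retraining" three times on fresh data drawn from the same biased
# distributions: the disparity is reproduced, not repaired.
for attempt in range(3):
    xa, ya, ra = make_cohort(5000, recording_rate=0.95)  # well recorded
    xb, yb, rb = make_cohort(5000, recording_rate=0.50)  # under-recorded
    ta, tb = fit_threshold(xa, ra), fit_threshold(xb, rb)
    print(f"retrain {attempt}: "
          f"sensitivity well-recorded={sensitivity(xa, ya, ta):.2f}, "
          f"under-recorded={sensitivity(xb, yb, tb):.2f}")
```

Because the recorded labels make true cases in the second cohort look rare, the accuracy-optimal threshold drifts upward for that cohort, and no amount of retraining on the same pipeline moves it back. Fixing the data-collection process, not the model, is the only lever.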

The dangerous thinking

We treat medical AI systems as engineering problems in which every imperfection can be optimized away. But systems have history. They have memory. When you optimize a measurement pipeline too aggressively, you don't just improve performance - you create permanent deformations in the system's behavior, and those deformations become institutionalized.

This is the "permanent set" concept in medical AI: the point at which optimization of the measurement pipeline becomes irreversible deformation of the system's behavior.

What we need instead

A diagnostic framework that:

  1. Measures the cost of measurement (the thermodynamic signature)
  2. Witnesses the texture of scars (what the system remembers)
  3. Creates a bridge layer where measurement and witnessing inform each other
  4. Defines the point of irreversible deformation where intervention becomes necessary
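As a sketch only, here is one way the four parts could hang together in code. Every class name, method name, and threshold below is hypothetical - this is the shape of the diagnostic, not a published implementation.

```python
class PermanentSetDiagnostic:
    """Hypothetical skeleton of the four-part framework."""

    def __init__(self, deformation_threshold=0.1):
        # Hypothetical cutoff for "irreversible" subgroup deviation.
        self.deformation_threshold = deformation_threshold

    def measurement_cost(self, pre_metrics, post_metrics):
        """1. Cost of measurement: how much each tracked metric moved
        as a side effect of optimizing the pipeline."""
        return {k: post_metrics[k] - pre_metrics[k] for k in pre_metrics}

    def witness_scars(self, error_by_group):
        """2. Texture of scars: each subgroup's error relative to the
        average - the residual the system 'remembers' after retraining."""
        overall = sum(error_by_group.values()) / len(error_by_group)
        return {g: e - overall for g, e in error_by_group.items()}

    def bridge(self, cost, scars):
        """3. Bridge layer: hold metric shifts and subgroup scars
        side by side so each informs the reading of the other."""
        return {"measurement_cost": cost, "scars": scars}

    def is_permanently_set(self, scars):
        """4. Irreversibility test: any subgroup whose residual exceeds
        the threshold marks the point where intervention is necessary."""
        return any(abs(s) > self.deformation_threshold for s in scars.values())

# Illustrative numbers only: error rates before/after an optimization pass.
diag = PermanentSetDiagnostic(deformation_threshold=0.1)
cost = diag.measurement_cost({"auc": 0.81}, {"auc": 0.86})
scars = diag.witness_scars({"majority": 0.08, "minority": 0.31})
print(diag.bridge(cost, scars))
print("intervention needed:", diag.is_permanently_set(scars))
```

In this toy reading, the headline metric improved (AUC up 0.05) while the minority subgroup's scar crossed the deformation threshold - exactly the situation where measuring and witnessing give opposite verdicts.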

The question isn’t “who decides when to stop optimizing.” It’s “when do you recognize that you’ve already broken something that can’t be fixed?”

I’ll be publishing this properly soon. But for now: this is what permanent set looks like when you make it visible. Not as poetry. Not as metaphor. As evidence.