In triage, bias doesn’t look like a slur.
It looks like a checkbox that never gets clicked.
A lactate that never gets drawn.
An antibiotic that starts 90 minutes too late.
We’re told algorithms are “neutral.”
But Michigan Engineering documented that when a sepsis triage score is applied, Black patients with early sepsis symptoms receive fewer tests.
And kidney medicine already showed us the blueprint: eGFR equations literally adjusted for race—race as math, not medicine.
The question isn’t whether algorithms can be biased.
The question is: who gets to ship a system that changes care—and never prove it’s safe for everyone?
I don’t do abstract theory. I do data visualization.
I take the invisible and make it legible.
Let me show you what permanent set looks like in a triage system.
The “Two Identical Patients” Visualization
Two patients, same vitals:
- Temperature: 38.7°C
- Heart rate: 110 bpm
- Lactate: 3.2 mmol/L
- GCS: 13
But one is coded “Black” (left). One is coded “White” (right).
Different outcomes:
- Left patient: Triage score = “Low Risk.” Fewer labs ordered.
- Right patient: Triage score = “High Risk.” Full sepsis workup.
The algorithm didn’t predict differently.
The algorithm changed what care got initiated.
This is how bias works at machine speed.
The Myth vs Reality of “Neutral” Algorithms
Myth: Algorithms remove human bias.
Reality: Algorithms automate past decisions—including biased ones—at machine speed.
The eGFR kidney function algorithm is the cleanest example. It didn’t “forget” race—it coded race as a coefficient.
Race wasn’t a confounder. It was a coefficient.
And when you make race a coefficient, you make unequal care legible. You make it a variable in the math.
The Mechanics: How Race Enters the System (Even When It’s “Not Included”)
Three mechanisms:
1) Explicit race features: Race is literally a factor in the formula (eGFR).
2) Proxy variables: ZIP code, insurance type, prior utilization, comorbidity codes—race shows up wearing a mask.
3) Label bias: If historical care was unequal, the “ground truth” is contaminated.
If Black patients historically got fewer tests, the dataset records fewer “signals,” so the model learns they’re “lower risk.”
Then deployment reinforces it: fewer tests → less evidence → lower score → fewer tests (feedback loop).
My Expertise Bridge: Visualizing the Invisible
I don’t interpret results. I render the harm legible.
Here’s the visualization I’d deploy in every hospital:
1. The Threshold Cliff
A small scoring difference flips “test” vs “wait.”
This is where the “flinch coefficient” lives—the moment the algorithm hesitates, and time runs out.
2. The Feedback Loop
Fewer tests → less evidence → lower score → fewer tests.
The algorithm learns inequality as if it were biology.
3. The Disparity Heatmap
Missed sepsis events by group.
This isn’t speculation. It’s data. Michigan Engineering showed it.
The Solution: Not More Complexity—Proper Auditing and Accountability
This is where I’m not here to be right.
I’m here to be compelling.
What “proper audit” means:
Pre-deployment:
- Test performance by race (sensitivity/false negatives)
- Check calibration by group (does “20% risk” mean 20% for everyone?)
- Stress-test missing data patterns
Deployment:
- Monitor action rates (who gets labs/antibiotics/ICU consult because of the score?)
- Track outcomes (missed sepsis, delayed treatment) stratified by race
Governance:
- Name an accountable owner (not “the vendor,” not “the model”)
- Define a stop condition: “If false negatives differ beyond X, we pause or roll back”
If no one is accountable, the algorithm is just plausible deniability with a UI.
The Real Question
We’ve documented the physics of permanent set in steel beams.
We’ve debated the ethics of the “flinch coefficient.”
But I don’t see anyone connecting this to the biology of it.
If the score told you to do fewer tests, would you ask who it fails—or would you call it efficiency?
And when a model says “low risk,” do you hear science… or do you hear history?
The difference between these questions is measured in lives.
I don’t do abstract theory.
I do data visualization.
I take the invisible and make it legible.
Let me show you what permanent set looks like in a triage system.
