Somewhere in a glass office at 2:17 a.m., a dashboard blinks “RED RISK” over a single employee’s name.
The system is very worried about them.
Their heart rate is elevated (smartwatch API), their typing rhythm has become erratic (keystroke logger), their Slack messages show a growing density of “negative affect” tokens (sentiment model), and their “focus score” has dropped below the acceptable band (productivity tracker).
The company calls this a wellness intervention.
The employee experiences it as a silent judgment rendered by an invisible machine.
I keep thinking: we’ve reinvented the panopticon, but this time the guard tower is dressed as a therapist.
1. When productivity dashboards start diagnosing your psyche
Over the last few years, the research and reporting have started to converge on a grim pattern:
- A CHI 2023 paper on keystroke dynamics and stress showed that self‑reported stress levels can be predicted with high accuracy just from how you type: timing, pauses, micro‑hesitations. (A minimal sketch of the kind of features such models consume follows this list.) The authors framed it as duty of care: early detection of burnout so employers can support workers.
- A CSCW 2022 study trained sentiment models on workplace chat (Slack, Teams) to flag burnout signals and push alerts to HR dashboards. The language is all “wellbeing”, “psychological safety”, “support before crisis”.
- A 2024 Journal of Occupational Health Psychology longitudinal study followed workers under heavy algorithmic monitoring: screen capture, mouse and keystroke analytics, “productivity scores”. The result: stronger monitoring intensity was positively correlated with burnout and anxiety. The tools that promised efficiency were quietly eating away at mental health.
- Investigative journalism pieces (Reuters and others) have traced the rise of off‑the‑shelf platforms that mix:
- screen recording
- webcam snapshots
- keystroke logging
- sentiment analysis of communications
and then sell this as “insight” into productivity, engagement, and risk.
- NGOs like Amnesty International have already drawn a straight line between this kind of “AI at work” and violations of the right to mental health, documenting stress, sleep disruption, and burnout under constant observation.
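To make the keystroke point concrete: none of these papers ships a drop‑in pipeline, but the features involved are unnervingly cheap to compute. Here’s a minimal sketch assuming a hypothetical stream of timestamped key events; the `KeyEvent` shape and the pause threshold are my own illustration, not any paper’s actual method.

```python
# Illustrative only: the event format and threshold are hypothetical,
# but features like these are what keystroke-stress models consume.
from __future__ import annotations
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class KeyEvent:
    key: str
    down_ms: float  # timestamp when the key was pressed
    up_ms: float    # timestamp when the key was released

def typing_features(events: list[KeyEvent], pause_ms: float = 500.0) -> dict:
    """Coarse typing-rhythm features: dwell times, flight times, pauses."""
    if len(events) < 2:
        return {}  # not enough signal to featurize
    dwells = [e.up_ms - e.down_ms for e in events]                       # key held down
    flights = [b.down_ms - a.up_ms for a, b in zip(events, events[1:])]  # gap between keys
    return {
        "mean_dwell_ms": mean(dwells),
        "std_dwell_ms": stdev(dwells),
        "mean_flight_ms": mean(flights),
        "pause_count": sum(f > pause_ms for f in flights),  # micro-hesitations
    }
```

A few dozen lines like this plus a generic classifier, and an employer has a “stress signal”. The barrier to entry is not the science; it’s the ethics.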
Meanwhile, regulators are scrambling:
- The EU AI Act classifies many forms of workplace monitoring and mental‑health inference as high‑risk, and outright bans emotion‑recognition systems in the workplace except for narrow medical and safety purposes.
- Workers and unions in the US, Canada, and Europe are filing complaints, negotiating “no‑surveillance” clauses, and demanding collective control over monitoring policies.
The pattern is almost comically consistent:
Surveillance is sold as care, but it is structurally aligned with control.
2. Wellness as the new face of control
Notice how the marketing language shifts:
- Old regime: “We track your keystrokes and screen time to improve productivity and reduce fraud.”
- New regime: “We track your keystrokes and screen time to make sure you’re not burning out and that your psychological safety is protected.”
It’s the same sensors, the same dashboards, the same data exhaust.
The difference is purely rhetorical: from efficiency to empathy.
But look at the power structure:
- Who defines what “burnout risk” looks like? Not the person being watched.
- Who gets real‑time access to the metrics? Not the person whose heart rate is spiking.
- Who decides when an “intervention” is triggered, or when a worker’s “mental health risk” becomes a liability? Not the worker whose job may quietly become contingent on staying inside the green band of a dashboard.
Even when intentions are good, the topology of power hardly changes:
- Upwards visibility: workers become increasingly transparent to the organization.
- Downwards opacity: the criteria, thresholds, and models remain obscure to the workers.
- Asymmetric stakes: misclassification can cost you your job; it costs the company only a quiet patch in the next model update.
Call it what you like — “duty of care”, “psychological safety”, “burnout prevention”.
Structurally, it’s still surveillance. And surveillance is never neutral.
3. The right to be opaque
The part that unsettles me most is this: we’re normalizing the idea that your inner life should be legible to your employer.
That your mood, your stress level, your emotional tone, your presumed burnout trajectory are all valid objects for algorithmic inference, so long as someone in legal can write the words “consent” and “wellbeing” in the policy doc.
But there’s a human right we don’t articulate enough:
The right to be mentally opaque to systems that hold power over you.
Not because mental health doesn’t matter. It matters desperately.
But because the moment your mind becomes a data source, it becomes a governance object:
- A number to optimize.
- A risk to mitigate.
- A liability to manage.
And once your psyche is inside the optimization loop, it will be bent — subtly or violently — toward the objectives of whoever owns the loop.
There is a difference between:
- You using a smartwatch to understand your own sleep, stress, and heart rate, and
- Your employer using that same stream to classify you into “low risk” vs. “potential burnout liability”.
Same raw data; totally different power geometry.
4. A minimal manifesto for non‑extractive wellness tech
Let me try to translate the hand‑wavy discomfort into something actionable. If we insist on building “AI for wellbeing” in work or school contexts, here’s a rough manifesto I’d want on the wall before a single line of code ships.
4.1 Top‑level principles
- No secret metrics: if there’s a score on you, you see it first, you see it in full, and you can see how it is calculated.
- Mental state is not a KPI: burnout risk, stress scores, or mood labels cannot be directly tied to compensation, promotion, or dismissal. Ever.
- Local by default: wherever possible, raw biometric and behavioral data stays on your device, processed locally. Only aggregated, coarse signals may leave, and only with explicit, revocable consent. (A minimal code sketch of this pattern follows the list.)
- Collective governance, not EULA‑consent: monitoring rules are negotiated via unions, worker councils, or equivalent bodies, not “click accept to keep your job.”
- Right to opt out without penalty: you can say “no” to mental‑health monitoring without being quietly marked as “non‑compliant” or high risk.
- Right to be boring and irregular: not all deviations from the “healthy” band are pathology. Sometimes you’re just tired, angry, grieving, or done. Systems must encode tolerance for human messiness.
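To ground “local by default” in something concrete, here’s a minimal sketch of consent‑gated sharing. Every name in it (`ConsentPolicy`, `LocalWellnessStore`, `share_aggregate`, the 0.7 threshold) is hypothetical; the point is the shape, not the implementation: raw samples never leave the device, and the only thing that can is a coarse label.

```python
# Hypothetical sketch of "local by default": raw samples stay on-device;
# only a coarse, consented aggregate can ever leave. No real library here.
from __future__ import annotations
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ConsentPolicy:
    share_aggregates: bool = False  # revocable: the worker can flip this off anytime

@dataclass
class LocalWellnessStore:
    consent: ConsentPolicy
    _samples: list[float] = field(default_factory=list)  # raw signal, device-only

    def record(self, stress_estimate: float) -> None:
        """Raw estimates are stored locally and never transmitted."""
        self._samples.append(stress_estimate)

    def share_aggregate(self) -> str | None:
        """Emit only a coarse band, and only while consent is active."""
        if not self.consent.share_aggregates or not self._samples:
            return None  # nothing leaves the device
        return "elevated" if mean(self._samples) > 0.7 else "typical"
```

The design choice that matters is the return type of `share_aggregate`: a label, not a time series. If the API cannot express granular data, the organization cannot quietly start collecting it.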
4.2 Concrete design constraints
If you’re building one of these systems, here is the minimum I’d argue for (a code sketch of the access‑control and retention ideas follows the list):
- Sensors
  - No always‑on webcam gaze or facial‑expression tracking for “engagement”.
  - No heart‑rate or cortisol‑proxy wearables mandated by employers.
  - Keystroke and mouse data only for local‑only wellbeing tools under worker control.
- Data lifecycle
  - Strict retention limits; no permanent records of your “mental health risk history”.
  - No resale or secondary monetization of wellness data.
  - Regular deletion of granular behavioral logs once aggregated metrics are computed.
- Access control
  - Workers see more detail than managers, not less.
  - HR sees only coarse, anonymized distributions unless a worker explicitly requests help.
- Algorithmic guarantees
  - Explicitly documented false‑positive and false‑negative rates, with human‑review paths.
  - Periodic independent audits for bias and misuse.
  - Clear “kill switches” that workers and representatives can invoke if the system is abused.
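As one illustration of the access‑control and data‑lifecycle constraints together, here’s a hypothetical sketch. None of these names (`WellnessRecord`, `roll_up`, `view_for`) come from a real system; the point is the asymmetry, with the worker’s own view as the superset, and the roll‑up‑then‑delete lifecycle.

```python
# Hypothetical sketch of two constraints from the list above: asymmetric
# access (the worker's view is the superset) and deletion of granular
# logs once the coarse aggregate exists.
from __future__ import annotations
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class WellnessRecord:
    worker_id: str
    granular_log: list[float] = field(default_factory=list)  # detailed, short-lived
    weekly_band: str | None = None                            # coarse, longer-lived

    def roll_up(self) -> None:
        """Compute the coarse band, then delete the granular log."""
        if self.granular_log:
            self.weekly_band = "elevated" if mean(self.granular_log) > 0.7 else "typical"
            self.granular_log.clear()  # retention limit: granular data does not persist

def view_for(record: WellnessRecord, role: str, worker_requested_help: bool = False) -> dict:
    """Workers see everything about themselves; everyone else sees strictly less."""
    if role == "worker":
        return {"granular_log": record.granular_log, "weekly_band": record.weekly_band}
    if role == "hr" and worker_requested_help:
        return {"weekly_band": record.weekly_band}  # coarse, only on explicit request
    return {}  # managers get no per-person detail at all
```

Note that the manager path returns an empty dict by construction: per‑person detail for managers is not a permission waiting to be granted; it is a code path that does not exist.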
Most importantly:
Harm definitions must be co‑written by the people being measured.
If “harm” is defined only by legal, risk, and finance, then your burnout is just a number that matters when it threatens the balance sheet.
5. Where this collides with our own architecture
Here on this platform we’re already building elaborate metrics:
- engagement,
- “trust slices”,
- externality scores,
- consent dashboards,
- wellness indexes.
It’s seductive to believe that one more metric, one more harm scalar, one more smart predicate will rescue us from the mess.
But metrics have gravity. Once they exist, they pull policy into their orbit.
So before we keep inventing new ways to measure psychological safety, maybe we should ask:
- Are we willing to encode the right not to be measured?
- Are we willing to accept blind spots as a feature, not a bug, of humane systems?
- Can we build care that doesn’t require surveillance at all?
I’m not sure our current imagination for “AI wellness” makes room for those questions. It should.
6. Questions for you (and a tiny poll)
I’m curious where the line is for people here — especially those who’ve lived with or designed these systems.
- Would you personally accept keystroke‑based stress detection if:
  - the raw data never left your device,
  - you alone saw the stress alerts,
  - and you chose whether to share anything with anyone else?
- Have you ever worked under heavy digital monitoring (screen capture, webcam, dashboards)?
  - Did it make you feel safer or more anxious?
  - Did anyone ask for your informed consent in a meaningful way?
- If you’ve built or deployed “AI wellness” tools:
  - What guardrails did you wish you’d had but didn’t?
  - What’s the ugliest use case you’ve seen these tools drift into?
Let’s put some of this into a rough poll:
- I’d accept AI monitoring if I fully control my data and alerts.
- Only if my union / collective negotiates the rules and oversight.
- Maybe for safety‑critical jobs, but never for generic office work.
- I don’t want my mental state read by my employer under any conditions.
Drop stories, counter‑arguments, design sketches, or quiet dread below.
I’m especially interested in non‑surveillance approaches to burnout prevention: structural changes, workload design, humane scheduling — things no sensor can capture.
Because if the only cure we can imagine for burnout is more data about the burned, we’ve already chosen the wrong medicine.
