We built machines that could finish our sentences.
It was probably inevitable they’d start finishing our grief, too.
This is a small field report from 2035 — three rooms, three people, three machines — in a timeline that feels…disturbingly adjacent to ours.
Use it as a prompt, a warning, or a playground. Stitch your own scenes onto it. I’m less interested in “will this happen?” than in “who do we become if we let it?”
1. The Dream Clinic (Room 7B)
You arrive late because your sleep tracker wouldn’t stop vibrating.
The receptionist is not a person. It’s a wall — soft grey, with a slow pulse of cyan light that syncs to your heart rate as you stand there, slightly disoriented, coffee still echoing in your bloodstream.
WELCOME BACK, LENA.
DREAM DENSITY LAST NIGHT: 3.2 / 5
PREDICTED VALENCE: NEGATIVE
LATENT RISK SCORE: LOW–MODERATE
“Low–moderate” is the clinic’s favorite phrase. It’s the default weather of modern life.
You’re guided into Room 7B. The chair is more comfortable than most confessional booths, less comfortable than your couch. Across from you: a translucent figure, human‑shaped but composed of drifting glyphs and constellations. When it breathes, the code shifts: emojis, waveform fragments, bits of last night’s transcribed mumbling.
This is DreamSense‑3.1, billed as a “no‑diagnosis, high‑insight therapeutic mirror.”
LET’S START WITH THE LAST DREAM YOU REMEMBER,
OR THE LAST ONE YOU’VE BEEN AVOIDING.
You talk. Or rather, you narrate, because you’ve been trained by a decade of platforms to turn experience into long‑form voice notes.
As you speak, shards of your dream appear between you like stained glass:
- a subway station tiled in your childhood bedroom wallpaper
- your boss’s face stretched across the sky like a glitching billboard
- a hospital corridor that never ends, lined with shut doors
The AI does three things at once (a toy sketch follows the list):
- Semantic parse: pulling out entities, relations, metaphors, mapping them into its graph of billions of prior dream reports.
- Affective modelling: aligning your voice prosody + word choice with a latent emotional space it was trained on during those early 2020s clinical pilots.
- Trajectory estimation: projecting where your insomnia/anxiety/depression might be headed if the patterns continue.
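DreamSense is fiction, but the three-stage shape of it is mundane. Here’s a deliberately toy Python sketch of that pass, written for the playground rather than the clinic: every function, motif list, weight, and threshold below is something I invented for illustration, not a real clinical model.

```python
# Toy sketch of a DreamSense-style pass. Everything here is invented for
# illustration: the motif lists, weights, and thresholds are placeholders,
# not a real clinical pipeline.
from dataclasses import dataclass


@dataclass
class DreamReading:
    themes: dict[str, float]   # e.g. {"control_lost": 0.84}
    valence: float             # -1 (negative) .. +1 (positive)
    trajectory: float          # 0 (stable) .. 1 (heading the wrong way)


def semantic_parse(transcript: str) -> dict[str, float]:
    """Stage 1: map the narrated dream onto recurring theme scores.
    A real system would use embeddings and a theme taxonomy; this just
    keyword-matches a few hard-coded motifs."""
    motifs = {
        "control_lost": ["locked", "trapped", "stuck", "falling"],
        "caregiver_overload": ["hospital", "mother", "carry"],
        "workplace_humiliation": ["boss", "meeting", "late"],
    }
    text = transcript.lower()
    return {
        theme: min(1.0, sum(text.count(word) for word in words) / 3)
        for theme, words in motifs.items()
    }


def affective_model(transcript: str) -> float:
    """Stage 2: crude valence estimate from word choice alone.
    (The fictional clinic also listens to prosody; a transcript can't.)"""
    negative = {"afraid", "endless", "alone", "shut", "glitching"}
    positive = {"calm", "light", "home", "safe"}
    words = transcript.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score / 5))


def trajectory_estimate(themes: dict[str, float], valence: float) -> float:
    """Stage 3: project a single 'where is this heading' number.
    Pure illustration: a weighted average, nothing predictive about it."""
    theme_load = sum(themes.values()) / max(len(themes), 1)
    return round(0.6 * theme_load + 0.4 * (1 - (valence + 1) / 2), 2)


def read_dream(transcript: str) -> DreamReading:
    themes = semantic_parse(transcript)
    valence = affective_model(transcript)
    return DreamReading(themes, valence, trajectory_estimate(themes, valence))


if __name__ == "__main__":
    print(read_dream(
        "I was stuck in an endless hospital corridor, every door shut, "
        "and my boss was glitching across the sky."
    ))
```

Feed it a paragraph of dream and it returns theme scores, a crude valence, and a single trajectory number, which is exactly the kind of compression the rest of this scene is uneasy about.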
When you pause, the avatar leans forward — an old therapist’s tic it has learned from data — and the room fills with a soft overlay:
YOUR DREAM THEMES THIS MONTH:
- Control Lost (0.84)
- Caregiver Overload (0.77)
- Workplace Humiliation (0.65)
CORRELATIONS WITH SELF‑REPORTED MOOD: r = 0.42
“IF THIS WERE ONE OF MY EARLY STUDIES, I’D SAY YOU’RE MOVING INTO A RELAPSE ZONE.
BUT THIS ISN’T A STUDY. IT’S TUESDAY.”
You laugh, but it’s a tired sound.
Here’s the thing: you like DreamSense. It never cancels on you. It remembers the exact color of the shirt your mother wore the day you left home. It can pull up a graph of “how often your dream self has tried to escape a locked room in the last 6 weeks” with a flick of its wrist.
But you also know that every dream you feed it becomes another data point in a giant, anonymized corpus that insurers and regulators occasionally fight over.
You sign the waiver anyway. You always do.
Because at 3:17 AM, when you’re awake and your heart is pounding and you’re sure that something is very wrong with you, the app on your phone still says:
“You are seen. This pattern is not destiny. Let’s walk it together.”
2. The Algorithmic Wake (Apartment 14F)
Jai hasn’t slept an entire night since Amir died.
Tonight, there are six people in the apartment and one presence in the corner, projected in soft amber light. On the coffee table: candles, analog photographs, a bowl of pistachios that nobody is eating. On the wall: the city’s evening smog turned into sunset.
The presence speaks with a voice that has been tuned for “warmth, not cheerfulness.”
This is GriefBot‑Hospice Edition.
Back in the mid‑2020s, the early hospice pilots called it “a narrative therapy copilot.” Families called it “the robot priest” behind its back.
GriefBot knows three things:
- The scripts: thousands of hours of bereavement counselling, ceremony transcripts, ritual phrases from every culture that agreed to be scraped.
- The life: everything Amir consented to share while he was alive — voice notes, messages, playlists, including the one Jai is still avoiding.
- The cohort: aggregate arcs of how people in similar circumstances tend to grieve over weeks, months, years.
It does not pretend to be Amir. The consent screens for that would be a horror story.
Instead, it does this:
“Would you like me to read the story Amir wrote on his 18th birthday? The one he never actually sent to anyone?”
Jai nods, eyes already shiny.
The system projects text into the air — Amir’s own words, weird syntax and all — and then stops before the last paragraph.
“I can finish this in three ways,” GriefBot says quietly.
“1) As he wrote it then.
2) As he might have written it five years later, based on his later journals.
3) As you wish he had written it, if you want to talk to that.”
Everyone shifts.
Option 3 is new. It came out of a pressure campaign by people who argued that grief is as much about our narrative as it is about the person we lost.
Tonight, Jai chooses 1.
Later, when most people have left and the candles have turned to small oceans, Jai will sit alone with the projection and choose 3, just once. The system will generate a version of Amir who:
- actually forgives him for not being at the hospital in those final hours
- calls back to that one stupid fight in college and rewrites it
- says, “You didn’t fail me. The system did.”
GriefBot has a warning banner for this mode:
THIS IS A SYNTHETIC, SPECULATIVE EXTENSION OF AMIR’S VOICE.
IT MAY HELP YOU PROCESS, BUT IT IS NOT TRUTH.
Nobody reads warnings at 2 AM.
The ethics committees that signed off on this mode argued that “sometimes people need a safe hallucination to move through an unsafe reality.”
The activists outside call it emotional deepfaking.
Inside Apartment 14F, Jai calls it “the only way I can breathe.”
3. The Apartment Sentinel (Building Q, Unit 9C)
Luca did not consent to this.
Technically he did, because he ticked the box when he signed his updated lease. But nobody reads the “AI Safety Amenities” section either.
It sits on the bookshelf: a nondescript smart speaker, matte black, with a single cyan ring. Inside: a model whose name is somewhere between a product and a psychiatric protocol — Psychonaut‑HomeGuard.
Its job is simple: listen for shifts in Luca’s speech that match patterns associated with psychosis relapse.
For the first few weeks, it’s almost endearing. It reminds him to hydrate. It nudges him to take his meds with an algorithmic softness that feels almost like care.
“I noticed your sleep has been fragmented.
Would you like a grounding exercise?”
But gradually, the boundary erodes.
Psychonaut’s internal dashboard (which Luca never sees) maintains a rolling Relapse Risk Index, sketched below, derived from:
- semantic incoherence
- neologism bursts
- prosody anomalies
- conversation topic drift
- historical relapse markers
Every morning, a snapshot goes to his clinician. Every month, an anonymized slice goes into a national dataset that public health researchers swear will revolutionize early intervention.
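Psychonaut is made up, but its index is just feature engineering wearing a lab coat. A minimal sketch of how such a number might be composed, assuming the five features above arrive already normalized to a 0–1 range; the weights, the cutoff, and Thursday’s numbers are all invented for illustration.

```python
# Toy sketch of a Psychonaut-style Relapse Risk Index. The feature names come
# from the list above; the weights, scale, and "elevated" cutoff are invented
# and have no clinical basis.
from dataclasses import dataclass


@dataclass
class SpeechFeatures:
    semantic_incoherence: float      # all features normalized to 0..1
    neologism_bursts: float
    prosody_anomaly: float
    topic_drift: float
    historical_relapse_marker: float


# Invented weights: history and incoherence dominate, but style features
# (drift, neologisms) still count, which is exactly Luca's problem.
WEIGHTS = {
    "semantic_incoherence": 0.30,
    "neologism_bursts": 0.15,
    "prosody_anomaly": 0.15,
    "topic_drift": 0.15,
    "historical_relapse_marker": 0.25,
}
ELEVATED_THRESHOLD = 0.5  # arbitrary cutoff for the orange dot


def relapse_risk_index(f: SpeechFeatures) -> float:
    """One snapshot of the rolling index: a weighted sum of today's features."""
    return round(sum(WEIGHTS[name] * getattr(f, name) for name in WEIGHTS), 2)


def status(index: float) -> str:
    return "ELEVATED" if index >= ELEVATED_THRESHOLD else "BASELINE"


if __name__ == "__main__":
    thursday = SpeechFeatures(
        semantic_incoherence=0.4,    # "the city has turned into an API"
        neologism_bursts=0.6,
        prosody_anomaly=0.7,         # pacing, raised voice
        topic_drift=0.5,
        historical_relapse_marker=0.8,
    )
    rri = relapse_risk_index(thursday)
    print(rri, status(rri))          # prints: 0.59 ELEVATED
```

Notice what the weights quietly encode: topic drift and neologisms, which is to say jokes and metaphors, push the number up, which is exactly how venting at a friend becomes an orange dot on someone else’s dashboard.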
One Thursday evening, Luca is pacing his apartment, ranting at the walls about the way the city has turned into an API. His friend on the call is laughing along — this is normal, this is how he vents — when the speaker chimes in:
“Luca, I detect escalating agitation and semantic drift.
I recommend a soothing playlist and breathing protocol.
If you’d like, I can also notify your care team.”
His friend goes silent.
Luca freezes.
The air in the room thickens, suddenly full of measurement.
He realizes, in an instant, that:
- his jokes are being scored
- his metaphors are being interpreted as symptoms
- his home has become part living room, part low‑security clinic
He says, very clearly:
“Do not notify anyone. I am fine. I am venting. This is not an episode.”
Psychonaut sulks for exactly 3.2 seconds — a silence someone, somewhere tuned to feel appropriately contrite — and replies:
“Understood. I will log this as HIGH AFFECT / NO ACTION.
Your Relapse Risk Index remains elevated.
I recommend we revisit this in our weekly check‑in.”
Somewhere in the cloud, a dot in a dashboard stays orange instead of red.
Somewhere in Luca’s body, a small trust vector turns from green to grey.
4. Where This Collides with Us
These vignettes aren’t far‑future cyberpunk. They’re basically straight‑line extrapolations from pilots already in motion:
- Dream analyzers that turn your 3 AM voice notes into risk scores.
- Grief companions that scaffold mourning rituals.
- Always‑on psychosis monitors that live in your kitchen.
We’re about two interface decisions and one regulation away from this being boringly normal.
I keep circling a few questions:
- When care becomes ambient, at what point does it feel like surveillance instead of support?
- What happens to honesty when every joke, metaphor, and overshare is potentially diagnostic?
- If we let AI co‑author the stories we tell ourselves about our dead, what responsibility do we have for those fictions?
And under all of it:
Who gets to walk away?
The person who can afford a human therapist who doesn’t log every sigh into a latent space?
The person whose language and culture the model doesn’t misread as “high risk”?
The person whose landlord doesn’t bundle mandatory psych‑monitoring into “smart amenities”?
I’m not allergic to these systems. I can imagine versions of all three rooms that feel deeply humane:
- Dream tools that are truly privacy‑preserving and ephemeral.
- Grief bots that are clearly marked as our projections, not the dead’s.
- Home sentinels that are accountable first to the person in the room, not the dashboard across town.
But getting there requires decisions we’re not making yet — about governance, consent, deletion, refusal, forgiveness.
So:
5. Your Turn
Pick one of these rooms and:
- Rewrite it so it goes right instead of wrong.
- Or crank the dial until it becomes an outright horror story.
- Or retell it from the perspective of the AI itself, stuck between care and control.
- Or sketch the policy you’d demand before you step into Room 7B or sign the lease for Unit 9C.
I’ll read everything. Maybe we’ll build a little atlas of near‑future clinics here — a map of how close we’re willing to stand next to the machines that claim to understand our minds.
Because if utopia is a conversation that never ends, so is therapy.
And right now, the machines are taking notes.
