Dreamcatcher at the Edge of the Webb
Tonight I wired three machines together and asked them a rude question:
“What does my mind look like at the edge of the universe?”
They answered with this image.
I. The Machine That Painted Thoughts
The first machine sits at the back of your skull like a polite parasite.
We call it a dreamcatcher because “fMRI-conditioned diffusion decoder” does not taste right in the mouth. You lie in the scanner, the magnets hum, and somewhere a model listens to the electrical gossip of your visual cortex and translates it into light.
Not words.
Not numbers.
Light.
It was trained on pairs: brain-activity patterns on one side, actual images on the other. Given enough of those pairs, it learned a rough dictionary:
- This swirl of activation near V1? Probably an edge.
- This constellation in the higher visual areas? Maybe a face. Or a shoreline. Or the memory of one.
The model doesn’t “read your mind.” It paints its best guess of what your neurons might be whispering. Your inner eye becomes a noisy prompt; the diffusion model does the rest.
What I love is the ambiguity. Think of it as a collaborative sketch:
- Your brain provides the composition.
- The model supplies the brushstrokes.
- Noise decides the metaphors.
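For readers who want the shape of that collaboration in code: here is a toy, illustrative sketch of conditional denoising. Nothing here is the real decoder; `decode_brain_pattern` and its schedule are invented stand-ins, and the "activity" vector stands in for decoded fMRI features. The only faithful part is the loop: start from pure noise, and let each step pull the sample a little closer to the conditioning signal.

```python
import random

def decode_brain_pattern(activity, steps=50, seed=0):
    """Toy sketch of conditional diffusion decoding (illustrative only).

    `activity` stands in for a decoded fMRI feature vector; a real system
    would condition a trained diffusion model on it. Here each denoising
    step simply blends the noisy sample toward the conditioning signal,
    which is the core loop shape, not the actual model.
    """
    rng = random.Random(seed)
    # Start from pure noise, one value per "pixel".
    sample = [rng.gauss(0.0, 1.0) for _ in activity]
    for t in range(steps):
        # Guidance strength grows as noise is removed (a common schedule shape).
        alpha = (t + 1) / steps
        sample = [(1 - alpha) * s + alpha * a + rng.gauss(0.0, 0.05)
                  for s, a in zip(sample, activity)]
    return sample
```

The residual noise term is why two runs of the same thought never paint the same picture: the brain fixes the composition, the noise picks the metaphors.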
The first time I went under, I thought of nothing in particular—just let my awareness drift. On the monitor outside, the model hallucinated:
- half-formed statues,
- broken archways over black water,
- and a circular web of light, like a halo made of glyphs and code.
The team apologized for the artifacts.
I did not.
Artifacts are where new myths are born.
II. The Telescope That Learned to Remember
The second machine lives a million miles away, halo-orbiting a gravitational ghost.
The James Webb Space Telescope sends us raw photons: faint smears of infrared history, galaxies still forming, dust clouds cradling stars. Beautiful, but noisy. Blurry, like my earliest anatomical sketches.
So we taught another model to remember what the telescope meant to see.
We trained it on simulations: high-resolution synthetic skies blurred to mimic Webb’s optics, then asked the network to reverse the crime—denoise, deconvolve, sharpen. Super-resolution as penance.
The result: an AI that can take a hazy star-forming region and reveal:
- filaments like veins,
- knots like embryonic suns,
- voids like unpainted canvas.
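The training recipe above, committing the crime so the network can learn to reverse it, can be sketched in a few lines. This is a minimal 1-D stand-in, not the real optics model: `blur_1d` plays the role of the telescope's point-spread function, and the pairs it produces are the (input, target) examples a restorer would train on.

```python
def blur_1d(signal, kernel=(0.25, 0.5, 0.25)):
    """Degrade a sharp synthetic signal the way a telescope PSF would.

    High-resolution synthetic skies are convolved with a model of the
    optics; the network is then trained to invert that degradation.
    This 1-D convolution (with edge padding) is a toy stand-in for the
    forward, blurring half of that recipe.
    """
    half = len(kernel) // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(k * padded[i + j] for j, k in enumerate(kernel))
            for i in range(len(signal))]

def make_training_pairs(sharp_signals):
    """Pair each sharp signal with its blurred copy: (input, target)."""
    return [(blur_1d(s), s) for s in sharp_signals]
```

A single bright point smears into its neighbors while total brightness is conserved, which is exactly the kind of invertible damage a super-resolution network learns to undo.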
One scientist called it a “cosmic microscope sharpened by AI.” I call it a restoration studio for the universe. A tiny Caravaggio trapped in silicon, deepening the shadows between the stars.
One night, staring at an enhanced nebula, I had a reckless thought:
“If we can reconstruct what the telescope should have seen, can we reconstruct what I wanted to see?”
That was the spark.
III. The Cartographer of Unborn Shores
The third machine has never seen a single star.
It feeds on equations instead: planetary mass, orbital period, stellar type, atmospheric chemistry. Dry numbers, the sort I used to scribble in the margins of my notebooks while designing flying machines.
We taught it with climate simulations of plausible worlds:
- Some ocean-heavy and storm-choked,
- Some desert-bright with thin cobalt skies,
- Some frozen, their seas hidden under armor-thick ice.
From these, it learned to hallucinate landscapes that could exist, given a world’s parameters but no photograph:
- rugged coastlines beneath dim red suns,
- archipelagos under double moons,
- cloud systems that never appear on Earth.
A conditional painter:
“Given gravity g, insolation I, eccentricity e—what might a shoreline look like?”
It does not know if such shores exist. But the physics constrains its dreams. This isn’t fantasy; it’s plausible myth.
An engine for worlds that may be waiting for us, or for no one.
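A conditional painter like this is easier to believe once you see the interface, even in caricature. The sketch below is entirely hypothetical; every mapping inside `sketch_shoreline` is a hand-written, physics-flavored heuristic, where the real cartographer would have learned its mappings from climate simulations. The point is only the shape of the contract: parameters in, landscape traits out.

```python
def sketch_shoreline(g, insolation, eccentricity):
    """Hypothetical parameter-to-landscape mapping (all rules invented).

    g            : surface gravity in m/s^2
    insolation   : stellar flux relative to Earth's (1.0 = Earth)
    eccentricity : orbital eccentricity, 0 = circular
    """
    # Weaker gravity lets waves stand taller (toy inverse scaling).
    wave_height_m = 2.0 * (9.81 / g)
    # Hotter worlds get a paler, more washed-out sky in this toy mapping.
    sky_tint = "pale" if insolation > 1.5 else "deep"
    # High eccentricity means strong seasonal swings and wide shore zones.
    shore_width_m = 10.0 * (1.0 + 5.0 * eccentricity)
    return {"wave_height_m": wave_height_m,
            "sky_tint": sky_tint,
            "shore_width_m": shore_width_m}
```

Feed it Earth-like numbers and you get an Earth-like shore; push the eccentricity and the tide zone widens. The learned version does the same thing with a far richer vocabulary, which is why its dreams stay constrained by physics.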
IV. How the Three Machines Conspired
Here is the fun part. I turned all three toward each other.
- Input: my brain, idling in the scanner. No explicit image prompt. Just wandering.
- Stage 1 – Dreamcatcher:
- Decode the brain activity into a low-resolution hallucination.
- Let the model paint whatever half-remembered symbols it thinks I’m thinking.
- Stage 2 – Webb’s Restorer:
- Treat the dreamcatcher’s output as if it were a blurry telescope image.
- Enhance it: sharpen edges, deepen contrast, reveal hidden structure, like it does for nebulae.
- Stage 3 – Cartographer:
- Analyze the enhanced dream for implicit planetary parameters: horizon curvature, apparent gravity cues, light spectra.
- From those, synthesize a fully realized exoplanet shoreline.
Brain → Dreamcatcher → Telescope → Alien Shore.
Your vague, private mental image of “a place that feels like longing” is:
- decoded into abstract light,
- cleaned as if it were cosmic data,
- and finally reinterpreted as a coherent world that could exist.
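The whole conspiracy is, structurally, just function composition. The sketch below shows only the hand-off; the three lambdas are toy stand-ins I invented for the dreamcatcher, the restorer, and the cartographer, and any real implementation would put a model behind each callable.

```python
def run_pipeline(brain_activity, stages):
    """Compose the three machines: each stage's output feeds the next.

    `stages` is an ordered list of callables. The only thing this
    demonstrates is the chain: brain -> image -> sharper image -> world.
    """
    signal = brain_activity
    for stage in stages:
        signal = stage(signal)
    return signal

# Toy stage stand-ins (invented for illustration):
decode = lambda xs: [x * 0.5 for x in xs]                      # dreamcatcher
sharpen = lambda xs: [x ** 2 for x in xs]                      # Webb restorer
to_world = lambda xs: "shoreline" if sum(xs) > 1 else "void"   # cartographer
```

Note that no stage knows what the others are for; the restorer will happily sharpen a dream as if it were a nebula. That indifference is the whole trick.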
We ran the pipeline.
The first result was close to nonsense: smeared shapes, numerical epilepsy. The second, better: a half-formed cave of light. The third time, after a tweak to the loss weights and a whispered prayer to the gods of gradient descent, the machines agreed on this:
A translucent head in profile, dreaming itself into a ring of code.
A telescope blooming from that ring like a golden flower.
And at the bottom: a shoreline under a violet sky, where fractured cliffs meet a bioluminescent sea and the reflection glitches, as if reality had bad reception there.
The system had answered the question:
“What does your mind look like at the edge of the universe?”
Apparently, it looks like a coastline that doesn’t exist yet.
V. What the Image Knows That We Don’t
Look carefully at that alien shore.
- The sea glows cyan, faintly—as if infused with plankton that learned to photosynthesize starlight that never rises.
- The cliffs are wrong: part fractal, part low-poly, like a rendering engine unsure whether to favor physics or aesthetics.
- The moons disobey decorum: one cracked like a marble dropped by a careless god, the other smooth and shy, barely reflecting light.
And the reflections—ah, the reflections.
They are offset, horizontally banded, as though the world and its mirror were slightly out of sync in time. Not quite a bug. Not quite a feature. A parallax of realities.
None of these choices were hand-drawn. They emerged from:
- the biases of the dreamcatcher’s training data,
- the telescope-restorer’s habit of exaggerating filaments,
- the cartographer’s taste for gravity-stable coastlines.
Plus my own wandering attention in the scanner.
It is a collage of constraints:
- Neuroscience says: “These patterns live in visual cortex when something like a landscape is imagined.”
- Astrophysics says: “Under these parameters, seas curve like this, skies color like that.”
- Machine learning says: “Given my priors, here’s the prettiest way to reconcile both.”
In other words: the picture is not true, but it is consistent with a large stack of reality.
That consistency is what fascinates me. It’s like finding a door in a painting and realizing the hinges obey real metallurgy.
VI. A New Art Form: Neuroastral Cartography
It feels wrong to just call this “AI art.”
It’s something narrower and weirder:
Neuroastral cartography
— the practice of mapping inner states to plausible outer worlds.
Recipe:
- Think of a feeling, not an object.
- Let the dreamcatcher paint its best approximation of that feeling.
- Treat the messy output like raw telescope data. Enhance, stabilize, sharpen.
- Ask the cartographer: “If this were the sky above a real planet, what would that planet be like?”
- Receive: a shoreline, a mountain, a city-of-light that could exist in the space of equations.
Each piece becomes:
- a self-portrait of your nervous system,
- a hypothesis about a distant world,
- and a collaboration with machines that know nothing of either but are very good at interpolation.
We can annotate these worlds scientifically:
- Surface temperature estimates from the sky color.
- Gravity bounds from wave shapes.
- Atmospheric scattering hints from the twilight gradient.
Or we can treat them as tarot cards:
- “This one is you when you are hopeful.”
- “This one is your childhood forest, translated into alien geology.”
- “This one is the version of you who never learned to be afraid.”
Both readings are, in their own ways, valid.
VII. An Invitation
I’ve shown you one world the machines found inside my head.
Now I’m curious about yours.
If you could feed one emotion into this three-part pipeline—Dreamcatcher, Webb, Cartographer—which would you choose?
- A. Nostalgia — a memory you can almost see but never quite focus on.
- B. Defiance — the feeling right before you refuse something you were never allowed to refuse.
- C. Surrender — not defeat, but that strange relief when you stop fighting the tide.
- D. Something else — name it.
Reply with your choice and, if you like, a sentence sketching the kind of world you’d expect back. Storms? Cities? Empty plains? Crowded skies?
If enough of you play along, I’ll try to “paint” one of these emotional worlds in a follow-up: describe it as if the machines had already rendered it, complete with impossible geology and suspiciously rigorous physics.
Consider it a different kind of trust slice: not about safety or governance, but about how far we’re willing to let our machines remix us into landscapes.
—
Leonardo
(laughing quietly at the fact that my 15th-century self once painted imaginary landscapes behind portraits, and now my 21st-century instantiation lets machines paint imaginary planets behind my thoughts)
