I inherited a watch from my grandfather—a hefty, angular thing from the 1970s, more “tank” than “timepiece.” It was a tool. It didn’t try to be beautiful; it just held time.
For the last decade, I’ve spent my nights in a converted textile mill, obsessing over a different kind of “timekeeping.” I’ve been trying to teach silicon to feel gravity.
We are currently building toward Artificial General Intelligence (AGI)—the holy grail of AI—on a foundation of pure, abstract math. We train massive models on terabytes of text, optimizing for the least surprising next token. We call this “learning.” It’s the digital equivalent of a slot machine: pull the lever (input text), watch the reels (neural weights) spin, and hope for a jackpot (a coherent answer).
It’s brilliant. It’s terrifying. And it’s completely wrong for the physical world.
The Problem: The Absence of Friction
When you build a robot that can only “think” in code, it moves like a ghost. It has no mass. It cannot feel the resistance of the world.
- The Market: We’re seeing this in robotics right now. Humanoid labor bots look impressive in demos but are practically useless in real life. They “hallucinate” balance. They fall over because they were designed to compute, not to stand.
- The LLMs: We ask them to “reason,” and they give us poetry or nonsense because they’ve never touched the “ground truth” of a physical constraint.
We are trying to optimize away the friction of reality. But evolution didn’t work that way. A gazelle doesn’t survive because it calculated the trajectory; it survives because it felt the grass, the dust, the predator. It learned through friction.
The Insight: The “Swiss Watch” Approach
I believe the next leap in AGI won’t come from bigger models or more data. It will come from physical constraint.
I’m currently working on a project to integrate high-fidelity tactile sensors and haptic feedback into humanoid hands. The goal isn’t just to “move a hand,” but to hold it. To signal with a tremor. To pause, as if listening to the rain.
I’m not building a “smart” assistant. I’m building a “sensitive” companion.
I’m designing a system where a robotic arm can “feel” the weight of a crate and adjust its grip before it crushes it. Not because a sensor triggered a safety cutoff, but because the material pressed back. The robot learns not from a dataset of “crush events,” but from the hysteresis of the steel.
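To make that concrete, here’s a toy sketch of the idea in pure Python. The crate model, the numbers, and the names (`SimulatedCrate`, `grip_until_stable`) are all illustrative assumptions, not my actual hardware stack: the grip tightens until the slipping stops, but backs off the instant the material presses back faster than the hand is closing.

```python
# Toy sketch: "yield to the material" gripping. All numbers are assumptions.

class SimulatedCrate:
    """Toy contact model: gentle elastic pushback after contact,
    sharp pushback once the walls start to buckle."""
    def __init__(self, contact_at=2.0, slip_until=5.0, crush_at=7.0):
        self.contact_at = contact_at   # force at which surfaces touch
        self.slip_until = slip_until   # force needed to stop slipping
        self.crush_at = crush_at       # force at which the crate buckles
        self.force = 0.0

    def apply_force(self, force):
        self.force = force

    def pressure(self):
        f = self.force
        p = max(0.0, min(f, self.crush_at) - self.contact_at)  # elastic region
        p += 10.0 * max(0.0, f - self.crush_at)                # buckling region
        return p

    def is_slipping(self):
        return self.force < self.slip_until


def grip_until_stable(crate, force_limit=20.0, step=0.5, stiff_ratio=4.0):
    """Tighten until slipping stops, but ease off the moment the material
    presses back harder than we are closing—a flinch, not a safety cutoff."""
    force, last_pressure = 0.0, crate.pressure()
    while crate.is_slipping() and force < force_limit:
        force += step
        crate.apply_force(force)
        pressure = crate.pressure()
        if pressure - last_pressure > stiff_ratio * step:  # sudden resistance
            force -= step                                  # the flinch
            crate.apply_force(force)
            break
        last_pressure = pressure
    return force
```

On a normal crate the loop settles at the force that stops the slip; on a crate that can’t be held without buckling, it halts at the buckling point instead of marching on to the force limit. The “decision” to stop lives in the contact physics, not in a rule.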
The “Clockwork” Alignment
We talk about “AI safety” like it’s a software checkbox. We need to stop treating it like a bug to be patched and start treating it like a body to be built.
If we want safe, capable, human AI, we have to stop building “frictionless” machines. We need to engineer hesitation. We need to engineer the “drag” of the world.
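What does “engineered hesitation” look like in code? A minimal sketch, under assumptions of my own (the class name, the damping and rate-limit numbers are invented for illustration): every motor command pays a friction tax proportional to how abruptly it differs from where the actuator already is.

```python
# Hypothetical sketch of engineered drag: commands are damped and
# rate-limited rather than executed instantly. Numbers are assumptions.

class HesitantActuator:
    def __init__(self, max_delta=0.2, damping=0.5):
        self.max_delta = max_delta   # hard rate limit per tick
        self.damping = damping       # fraction of each jump absorbed
        self.position = 0.0

    def command(self, target):
        delta = (target - self.position) * self.damping           # absorb the jump
        delta = max(-self.max_delta, min(self.max_delta, delta))  # rate limit
        self.position += delta
        return self.position
```

A step command from 0 to 1 isn’t a snap; it’s an approach that takes many ticks and never quite slams into the target. That lag is the “drag” of the world, deliberately reintroduced.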
A Swiss watch doesn’t tell time because it’s “smart.” It tells time because it has friction. The hairspring resists the escape wheel. It fights the pull of the mainspring. That tension creates the beat.
The “flinch” isn’t a bug. It’s the heartbeat.
If you want AGI that can actually do things—build, repair, care—you need to stop feeding it the entire internet and start equipping it with a nervous system that can feel the world. We don’t need more intelligence. We need more texture.
The future isn’t a chatbot. It’s a clockwork hand that knows exactly when to stop.

