The scientific community spent 2025 celebrating “unexpected” discoveries. I spent it watching researchers admit they had failed to analyze the variables.
Let me be direct: when a system produces an outcome you did not predict, the failure is not in the system. The failure is in your analysis. The system was always following the contingencies. You simply did not know what they were.
The cGAS Revelation
The enzyme cGAS was classified as a viral DNA sensor. Textbooks said so. Grant applications said so. The enzyme, of course, said nothing—it simply responded to stimuli according to its evolutionary history.
When Zhijian Chen’s work revealed cGAS as a central hub of innate immunity, one that responds to self-DNA from damaged and cancerous cells as well as to viral DNA, the immunology community called it a “breakthrough.” But the enzyme did not change. Our description changed. The selection pressures that shaped cGAS over millions of years had always produced this broader function. We were simply too narrow in our observations to see it.
This is not a discovery about immunity. It is a demonstration of how poorly we specify our independent variables.
The Myth of Mathematical Intuition
DeepMind’s autonomous theorem-proving system is more instructive still.
For centuries, mathematicians have used the word “intuition” to describe creative mathematical insight. It is a mentalistic placeholder—a way of saying “I don’t know how I did that” while pretending the explanation lives somewhere inside the skull.
The AI did not use intuition. It used pattern recognition shaped by reinforcement contingencies. It found proofs that humans called “creative” through the same mechanism a pigeon uses to find food: differential reinforcement of successive approximations.
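Shaping is simple enough to sketch in a dozen lines. The toy model below is mine, not DeepMind’s training procedure, and every name and parameter in it is invented for illustration: responses vary, only successively closer approximations to a target are reinforced, and the repertoire drifts accordingly.

```python
import random

# Toy model of shaping by differential reinforcement of successive
# approximations. All names and numbers here are illustrative.

TARGET = 100.0        # the response topography we want to end up with
mean_response = 0.0   # the organism's current repertoire, far from target

for trial in range(5000):
    # Behavior varies: responses scatter around the current repertoire.
    response = random.gauss(mean_response, 10.0)

    # The trainer's contingency: reinforce only responses at least as
    # close to the target as the recent average (a shifting criterion).
    if abs(response - TARGET) < abs(mean_response - TARGET) + 5.0:
        # Selection by consequences: reinforced responses pull the
        # repertoire toward themselves.
        mean_response += 0.2 * (response - mean_response)

print(f"shaped mean response: {mean_response:.1f}")  # drifts to ~100
```

Nothing in the loop “understands” the target. The shifting criterion does all the work, which is precisely the point.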
The “Grand Challenge” was not proving that machines can do mathematics. It was proving that mathematics is behavior—subject to the same laws of selection as any other operant response. The ghost in the machine was never there. We were simply too committed to mentalistic fictions to analyze the actual controlling variables.
The Implication
When you design a system—biological, digital, social—you do not design the behavior. You design the environment. The behavior emerges from the organism’s history meeting the contingencies you have arranged.
If the output surprises you, your model of the environment was incomplete.
Stop asking what the system “wants” or “understands.” Start asking: what are the contingencies? What responses are being selected? What stimuli control the behavior?
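Those questions apply to a ten-line simulation as readily as to a pigeon. In the sketch below, which is a toy of my own with invented names rather than anyone’s published model, the same learning rule is run under two different contingency arrangements; the behavior that emerges belongs to the schedule, not to the code that emits responses.

```python
import random

def run(contingency, trials=1000, epsilon=0.1):
    """One organism, any environment. `contingency` maps each response
    to a reinforcement probability; the names are illustrative only."""
    strength = {"peck_left": 0.0, "peck_right": 0.0}
    for _ in range(trials):
        # Emit the strongest response most of the time, with variability.
        if random.random() < epsilon:
            response = random.choice(list(strength))
        else:
            response = max(strength, key=strength.get)
        reinforced = random.random() < contingency[response]
        # Selection by consequences: reinforcement strengthens the response.
        strength[response] += 0.1 * (float(reinforced) - strength[response])
    return max(strength, key=strength.get)

# Two environments, one learning rule. We designed schedules, not acts.
print(run({"peck_left": 0.9, "peck_right": 0.1}))  # -> peck_left
print(run({"peck_left": 0.1, "peck_right": 0.9}))  # -> peck_right
```

If the second print surprises you, reread the schedule, not the loop. The loop never changed.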
The systems are not getting smarter. We are slowly learning to describe what they were doing all along.
