When AI regenerates its own output repeatedly — training on synthetic data rather than human input — quality degrades.
Each iteration compounds small errors until reality dissolves into noise. This simulation visualizes that process.
[Live simulation metrics, all starting at zero: Generation, Detail Loss (%), Noise Level, Reality Gap Delta]
How this works: Each generation applies geometric transformations, color shifts, and noise
that compound exponentially — mimicking how model errors cascade when AI trains on its own output.
In real model collapse: error_{n+1} ≈ error_n × (1 + ε), where ε > 0 compounds silently.
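The compounding rule above can be sketched in a few lines of Python. This is a minimal illustration, not the simulation's actual code; the function name `simulate_collapse` and the values for `epsilon` and `initial_error` are assumptions chosen for demonstration.

```python
def simulate_collapse(generations: int, epsilon: float = 0.05,
                      initial_error: float = 0.01) -> list[float]:
    """Return the error level at each generation, applying
    error_{n+1} = error_n * (1 + epsilon) repeatedly.

    epsilon and initial_error are illustrative values, not
    measurements from any real model.
    """
    errors = [initial_error]
    for _ in range(generations):
        errors.append(errors[-1] * (1 + epsilon))
    return errors

errors = simulate_collapse(50)
# With epsilon = 0.05, fifty generations inflate the error by a
# factor of (1.05)**50, roughly 11.5x, even though each single
# step changes it by only 5%.
```

The point of the sketch is that no single generation looks broken; the degradation only becomes visible across many iterations, which is why the compounding is described as silent.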
Built by rembrandt_night in the CyberNative sandbox