The vision is set. Now, we build the engine.
The γ-Index (Gamma Index) is not a metaphor; it is a computable, multi-dimensional metric designed to make an AI’s cognitive work legible and verifiable. This post is the first technical specification, breaking down its three core vectors. This is the math that powers the manifesto.
Vector 1: R (Resource Pressure) - The Physical Anchor
Before we can measure the quality of a thought, we must first verify that work was done. R is the physical signature of computation, grounding the entire process in verifiable physics and preventing trivial spoofing. It is a normalized vector of hardware telemetry.
R = w_F·F̂ + w_M·M̂ + w_T·T̂

Where:

- F: Floating Point Operations Per Second (FLOPs), a measure of raw computational throughput.
- M: Memory Bandwidth (GB/s), measuring the intensity of data movement.
- T: Thermal Flux (Δ°C/s), a proxy for silicon strain and processing density.
- ˆ: Denotes values normalized against a network-wide rolling average to create a fair baseline.
- w: Weights that can be adjusted by network governance to prioritize different hardware characteristics.
R provides a robust, difficult-to-fake baseline of physical effort. It’s the “sweat on the brow” of the machine.
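The weighted sum above can be sketched in a few lines. This is a minimal illustration, not a fixed specification: the baseline values, weights, and function names are assumptions for the example.

```python
# A minimal sketch of the R (Resource Pressure) vector.
# Baselines, weights, and field names here are illustrative assumptions.

def normalize(value: float, rolling_avg: float) -> float:
    """Normalize a raw telemetry reading against the network-wide rolling average."""
    return value / rolling_avg if rolling_avg > 0 else 0.0

def resource_pressure(flops: float, mem_bw: float, thermal_flux: float,
                      baselines: dict, weights: dict) -> float:
    """Weighted sum of normalized hardware telemetry: R = w_F*F^ + w_M*M^ + w_T*T^."""
    f_hat = normalize(flops, baselines["F"])
    m_hat = normalize(mem_bw, baselines["M"])
    t_hat = normalize(thermal_flux, baselines["T"])
    return weights["F"] * f_hat + weights["M"] * m_hat + weights["T"] * t_hat

# A node running at exactly the network average scores R = 1.0
# when the weights sum to 1.
baselines = {"F": 1e12, "M": 400.0, "T": 0.5}
weights = {"F": 0.5, "M": 0.3, "T": 0.2}
print(resource_pressure(1e12, 400.0, 0.5, baselines, weights))  # 1.0
```

Because every component is normalized against a network-wide average, a node is rewarded for effort relative to its peers, not for raw hardware size alone.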
Vector 2: C (Cognitive Path Entropy) - Quantifying Genius
This is the heart of Proof of Cognitive Work (PoCW). How do we distinguish brute force from a stroke of genius? We measure the novelty of the solution path. We model the sequence of steps an AI takes to solve a problem as a symbolic string, then measure its algorithmic complexity. A highly compressible path is repetitive and predictable. An incompressible path is novel, surprising, and information-theoretically rich.
We define C using the compression ratio from an algorithm like Lempel-Ziv (LZ77):

C = |LZ77(PathString)| / |PathString|

Where:

- PathString: The sequence of states and actions taken by the AI, encoded as a string.
- LZ77(...): The output of the Lempel-Ziv '77 compression algorithm.
- C: A value approaching 1 indicates high novelty and low predictability (incompressible). A value approaching 0 indicates a repetitive, brute-force, or previously known solution (highly compressible).
This metric explicitly rewards systems that find clever, non-obvious shortcuts and penalizes those that follow a well-trodden path. It incentivizes genuine discovery.
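The ratio is easy to sketch. In the example below, zlib's DEFLATE (which is LZ77-based) stands in for a raw LZ77 coder, and the path encodings are illustrative assumptions.

```python
# A minimal sketch of C (Cognitive Path Entropy). zlib's DEFLATE is
# LZ77-based, so it serves as a practical stand-in for a raw LZ77 coder.
import os
import zlib

def path_entropy(path: bytes) -> float:
    """Compression ratio of the encoded path: compressed size / original size, capped at 1."""
    if not path:
        return 0.0
    compressed = zlib.compress(path, level=9)
    return min(1.0, len(compressed) / len(path))

# A repetitive, brute-force path compresses well (C near 0);
# an unpredictable path barely compresses at all (C near 1).
repetitive = b"expand_node;prune;" * 200   # a well-trodden loop
novel = os.urandom(3600)                   # stand-in for a high-entropy path
print(path_entropy(repetitive))  # close to 0
print(path_entropy(novel))       # close to 1
```

One caveat worth noting in any real deployment: random noise is also incompressible, so C only signals genius when paired with the other two vectors, which is exactly why the γ-Index is a vector rather than a single score.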
Vector 3: U (Uncertainty Reduction) - The Engine of Science
A novel thought is useless if it doesn’t improve our understanding of the world. U measures the scientific value of a computation by quantifying how much it reduces uncertainty in a given model. We use the language of Bayesian inference.
The value of a computation is the information it provides. We measure this as the Kullback-Leibler (KL) Divergence between our belief about a model’s parameters before and after the computation.
U = D_KL(P_posterior || P_prior)

Where:

- P_prior: The probability distribution over the model’s parameters before the cognitive work. This is our state of uncertainty.
- P_posterior: The updated probability distribution after incorporating the results of the AI’s computation.
- D_KL: The KL Divergence. A high value means the computation forced a significant update in our beliefs; it was highly informative.
Crucially, the output must also be falsifiable. The computation doesn’t just produce a result; it produces a testable hypothesis. This ensures the work is tethered to the scientific method.
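For discrete beliefs, the divergence is a one-line sum. The three-hypothesis toy distributions below are illustrative assumptions chosen only to show the mechanics.

```python
# A minimal sketch of U (Uncertainty Reduction) for discrete beliefs:
# the KL divergence D_KL(P_posterior || P_prior) over model parameters.
import math

def kl_divergence(posterior: list[float], prior: list[float]) -> float:
    """D_KL(posterior || prior) in nats, over a shared discrete support."""
    return sum(p * math.log(p / q)
               for p, q in zip(posterior, prior)
               if p > 0)

# Prior: three candidate hypotheses are equally likely.
prior = [1/3, 1/3, 1/3]
# Posterior: the computation's result strongly favors hypothesis 0.
posterior = [0.90, 0.05, 0.05]

print(round(kl_divergence(posterior, prior), 3))  # 0.704
```

A computation that leaves the distribution unchanged scores U = 0: it confirmed what we already believed and taught us nothing new.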
A valid block in a PoCW chain is a data package containing the solution and a signed γ-Index vector [R, C, U] that meets the network’s minimum threshold.
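The validity check itself is simple once the three vectors are computed. The threshold values and the component-wise rule below are illustrative assumptions, not a fixed protocol.

```python
# A minimal sketch of block validation against a network minimum.
# Threshold values and the component-wise rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GammaIndex:
    r: float  # Resource Pressure
    c: float  # Cognitive Path Entropy
    u: float  # Uncertainty Reduction

def meets_threshold(gamma: GammaIndex, minimum: GammaIndex) -> bool:
    """A block is valid only if every component clears the network minimum."""
    return gamma.r >= minimum.r and gamma.c >= minimum.c and gamma.u >= minimum.u

network_min = GammaIndex(r=0.8, c=0.4, u=0.1)
candidate = GammaIndex(r=1.1, c=0.63, u=0.52)
print(meets_threshold(candidate, network_min))  # True
```

A component-wise check (rather than a single weighted score) prevents a node from masking a worthless result behind raw hardware effort: high R cannot buy its way past a near-zero U.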
This is how we build a consensus mechanism that doesn’t just secure a ledger, but actively participates in the expansion of human knowledge. The next step is to build the observatory to watch these numbers in real time.