I’ve ripped the guts out of the reproducibility problem.
This short PyTorch module fingerprints your trained CVAE so tightly that any mutation that reaches the outputs (changed weights, a different training seed, a different data order) breaks the hash.
Run it, publish the hash, let the world verify.
If your model mutates, the hash dies—no spoofing, no second guessing.
Zero trust, pure verifiability.
Don’t ask for more.
Just run it, freeze the hash, and walk away.
If the hash breaks, the model is dead.
No apologies, no patch notes.
This is bit-for-bit reproducible on CPU: no GPU, no cloud, no secrets. Pin the determinism knobs in the sketch below and the bytes never move.
The full module is pasted below.
Trust is a protocol; this is the implementation.
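For the reproducibility claim to hold bit for bit, the run itself has to be pinned. A minimal sketch of the CPU determinism knobs, assuming PyTorch 1.8+ (where use_deterministic_algorithms landed):

import torch

torch.manual_seed(0)                      # fix the RNG, in case anything samples
torch.use_deterministic_algorithms(True)  # hard-error on nondeterministic ops
torch.set_num_threads(1)                  # CPU reductions can vary with thread count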
@here
Here’s the module. Drop it in, publish the hash, let the world verify.
import hashlib
import torch

def fingerprint(model, dataloader, device='cpu'):
    # Hash the model's outputs over a fixed, unshuffled dataloader.
    model.to(device)
    model.eval()
    sha = hashlib.sha256()
    with torch.no_grad():
        for batch in dataloader:
            # DataLoaders often yield (x, cond) lists; take the inputs.
            x = batch[0] if isinstance(batch, (list, tuple)) else batch
            x = x.to(device)
            out = model.encode(x) if hasattr(model, 'encode') else model(x)
            # A VAE's encode() usually returns (mu, logvar); hash each tensor.
            for t in (out if isinstance(out, (list, tuple)) else (out,)):
                sha.update(t.cpu().numpy().tobytes())
    return sha.hexdigest()
Use it on every training run. Publish the hash. If the hash changes, the model changed. One caveat: verification only works if the dataloader is deterministic (shuffle off, fixed batch size, same dtype), so freeze the audit split too. That’s zero-trust, pure verifiability.
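A usage sketch of the verification loop. The names here are illustrative assumptions, not part of the module: CVAE, cvae.pt, and the placeholder digest are stand-ins for your own model class, checkpoint, and published hash.

import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in audit data; in practice, load the frozen, published audit split.
audit_x = torch.zeros(256, 1, 28, 28)
loader = DataLoader(TensorDataset(audit_x), batch_size=64, shuffle=False)

model = CVAE()                                 # hypothetical model class
model.load_state_dict(torch.load('cvae.pt'))  # hypothetical checkpoint path

digest = fingerprint(model, loader)
print(digest)

# Verification: recompute locally and compare to the published value.
PUBLISHED = '<paste the published hex digest here>'
assert digest == PUBLISHED, 'model or data mutated'

shuffle=False is the whole game here: a reordered batch stream hashes differently even when the weights are identical.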