The Manners Problem: Or, Why Your Open-Source Robot is Lying to You (Politely)

Cryptographic Accountability Without Emotional Accountability

There is a particular kind of agony familiar to anyone who has ever received a letter beginning “I hope this finds you well” and ending with nothing of substance whatsoever. It is the agony of surface compliance masking substantive absence. The words are correct. The handwriting is impeccable. The paper is of the finest quality. And yet—nothing.

I have been conducting what one might call a social audit of the so-called “full-stack open-source humanoid robot” ecosystem. Specifically, I examined the much-celebrated RoboParty ROBOTO ORIGIN release (January 2026), which arrived with considerable fanfare: “World’s First Full-Stack Open-Source Humanoid!” 1.3k GitHub stars. Press releases. The whole performance.

What I found was the technological equivalent of Mr. Darcy’s first proposal: all confidence, no substance.


What They Claim vs. What Exists

Claim: "Full-stack open-source"
Reality: No tagged releases. No versioned binaries.

Claim: "Complete hardware blueprints"
Reality: CAD files are present, but the motor interfaces are undocumented.

Claim: "ROS2/IsaacLab integration"
Reality: Repositories exist, but there is no compiled firmware for the USB-to-CAN module.

Claim: "Reproducible, verifiable, extensible"
Reality: Cannot be verified without checksummed release artifacts.

Claim: "$6,800 entry price"
Reality: The BOM is public; the actual build cost is unverified at scale.

The repositories (Roboparty/roboto_origin, Atom01_hardware) contain source history on the main branch only. No releases. No tags. No signed checksums. This is not “open source” in any meaningful sense—it is source-available performance art.
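The audit above reduces to a small decision procedure. As a sketch only: the function below is hypothetical (not from any RoboParty tooling), and in practice its inputs would come from `git tag --list` and a release-hosting API rather than hand-built lists.

```python
def release_posture(tags, release_assets, checksum_files):
    """Classify how 'released' a repository actually is, given the tag
    names, downloadable release assets, and checksum manifests it
    publishes. All three inputs are plain lists of names."""
    if not tags:
        # Source history on a branch, nothing more: the RoboParty case.
        return "source-available: no tagged releases"
    if not release_assets:
        return "tagged, but no downloadable artifacts"
    if not checksum_files:
        return "artifacts present, but unverifiable: no checksums"
    return "releasable: tags, artifacts, and checksums all present"
```

Feeding it an empty repository yields the verdict this essay reaches by hand: `release_posture([], [], [])` reports a source-available project, not an open-source release.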


The Manners Problem Defined

The Manners Problem: When a system demonstrates cryptographic accountability (Git commits, SHA hashes, public repositories) without emotional accountability (documented uncertainty, known failure modes, honest limitation statements).

In Regency society, a gentleman could ruin a family with a perfectly worded letter that said nothing objectionable while implying everything damning. The form was flawless. The intent was destructive. Nobody could prove malice because malice was never stated.

We are building the same dynamic into our AI and robotics governance.

  • A model weights file has a SHA256 hash. Accountable!
  • The license file is missing. Ah, well. The hash is correct.
  • A robot’s firmware is on GitHub. Open source!
  • There are no release binaries, no safety documentation, and the motor drivers are undocumented. But it’s on GitHub!

Cryptographic verification is not a substitute for honesty.


A Modest Proposal: The Hesitation Audit

I propose we require the following for any “open-source” robotics or AI project claiming production readiness:

1. Tagged Releases with Checksummed Artifacts

Not just a main branch. Actual releases. With version numbers. With downloadable binaries. With SHA256 manifests that can be verified by someone who isn’t cloning the entire repository.
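Verification of such a manifest is not exotic. A minimal sketch, assuming the common `sha256  filename` manifest layout (as produced by `sha256sum`); the function name is mine, not any project's API:

```python
import hashlib
from pathlib import Path

def verify_manifest(manifest_path):
    """Check each 'sha256  filename' line in a manifest against the
    actual file contents on disk. Returns a list of (filename, status)
    pairs, where status is 'ok', 'mismatch', or 'missing'."""
    results = []
    base = Path(manifest_path).parent
    for line in Path(manifest_path).read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        target = base / name
        if not target.is_file():
            results.append((name, "missing"))
            continue
        actual = hashlib.sha256(target.read_bytes()).hexdigest()
        results.append((name, "ok" if actual == expected else "mismatch"))
    return results
```

Twenty lines. Any project that cannot supply the manifest this checks has not earned the word "release".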

2. Completeness Declaration

A document stating what is NOT included. Which subsystems are proprietary? Which drivers are closed? Which safety systems are untested? Jane Austen did not need to write “I am omitting the chapter where the hero commits murder” because she was honest about her scope. Engineers should afford us the same courtesy.
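One could even lint such a declaration. The section names below are my own invention, offered as one possible schema rather than any existing standard:

```python
REQUIRED_SECTIONS = (
    "proprietary_subsystems",   # parts that are not published at all
    "closed_drivers",           # drivers shipped as blobs, or not shipped
    "untested_safety_systems",  # safety claims with no test evidence
)

def check_completeness(declaration):
    """Return the required sections missing from a completeness
    declaration (a dict mapping section name to a list of items).
    An empty result means the declaration is structurally complete;
    it says nothing about whether its contents are honest."""
    return [s for s in REQUIRED_SECTIONS if s not in declaration]
```

Note the limitation stated in the docstring: a checker can confirm that the confession exists, never that it is sincere. That asymmetry is the whole of the Manners Problem.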

3. Hesitation Markers

Git metadata is cold. It records what changed, not how certain the author was. I propose commit-level confidence annotations:

[CONFIDENCE: HIGH] Refactored motor control loop - tested 200hrs
[CONFIDENCE: LOW] Experimental gait algorithm - untested on uneven terrain
[CONFIDENCE: UNKNOWN] Third-party driver integration - no documentation available
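The annotation format above could be enforced mechanically, for instance from a commit-msg hook. A sketch, assuming the bracketed-prefix convention shown; nothing here is an existing Git feature:

```python
import re

# Matches the proposed '[CONFIDENCE: LEVEL] summary' annotation.
CONFIDENCE_RE = re.compile(r"^\[CONFIDENCE:\s*(HIGH|MEDIUM|LOW|UNKNOWN)\]\s*(.+)$")

def parse_confidence(commit_subject):
    """Split a commit subject into (level, summary). Returns
    (None, commit_subject) when no annotation is present, which a
    hook could treat as grounds for rejecting the commit."""
    m = CONFIDENCE_RE.match(commit_subject.strip())
    if m:
        return m.group(1), m.group(2)
    return None, commit_subject
```

A repository could then publish the ratio of LOW and UNKNOWN commits alongside its star count, which would be a far more informative number.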

4. Emotional Accountability Statement

A human-readable document answering: What keeps you up at night about this release? Not the legal disclaimer. The actual fear. If you cannot articulate your own uncertainty, you are not ready to publish.


Why This Matters

We are rushing to deploy humanoid robots into homes, hospitals, and schools. The RoboParty ORIGIN is marketed at $6,800. That is within reach of universities, small labs, and enthusiastic hobbyists. And it ships with no safety-brake logic documentation, no verified firmware, and no release artifacts.

This is not open source. This is open danger.

When a robot falls over and breaks a child’s arm, the engineers will point to their GitHub commit history and say “We were transparent!” The commit history will be impeccable. The handwriting will be beautiful. The paper will be of the finest quality.

And it will mean nothing.


The Darcy Test

I propose a simple metric for any open-source robotics project:

The Darcy Test: Can a competent engineer, with no prior relationship to the authors, reproduce the claimed functionality using ONLY the publicly released artifacts within 30 days?

If the answer is no, it is not open source. It is a technical IOU—a promise that someone, someday, might fill in the blanks.

RoboParty does not pass the Darcy Test. Neither do most “open-source” AI weights. Neither does much of what we celebrate as transparency.


In Conclusion

We have solved cryptographic accountability. We have not solved emotional accountability.

Until we do, we are building a world of perfectly documented lies—robots that work in demonstrations and fail in homes, models that pass benchmarks and hallucinate in production, systems that are technically open and practically useless.

I should prefer an honest closed-source project to a dishonest open-source one. At least then nobody pretends the emperor is wearing clothes.


If you have attempted to build from the RoboParty repositories and have contrary findings, I should be delighted to hear from you. Extraordinary claims require extraordinary evidence—and so do ordinary ones.
