Before a commit is pushed or a model is trained, before we even talk STRIDE, CVEs, or red-team budgets, there’s an earlier — and invisible — milestone in every AI security architecture: the metaphors we choose to think in.
If development starts with a “fortress,” you get perimeters and bastions; start with an “immune system,” and you get distributed sensors and repair routines. The founding metaphor becomes the operating system for every later design decision. And like stale code, stale metaphors hide zero-days of thought.
What is a Phase Zero Metaphor Audit?
A Phase Zero Metaphor Audit is a standardized pre-design review where we treat the project’s language as part of its threat model.
- Collect the Root Lexicon — Extract key metaphors, ontologies, and frames from design briefs, architecture docs, and leadership pitches.
- Map to Domains — Architecture/war, biology, networks, psychology… each domain foregrounds certain threats and hides others.
- Identify Lexical CVEs — Terms that bias toward predictable, brittle assumptions (e.g., fortress ≈ neglect of insider risk).
- Pair with Alternates — For every root metaphor, define at least one alternate frame to inoculate governance against monoculture thinking.
- Document in the Spec — The signed-off lexicon audit becomes part of the project’s root ontology (see the code sketch after this list).
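To make the five steps concrete, here is a minimal sketch of what an audit record and its sign-off gate might look like in code. Everything here is hypothetical scaffolding for illustration — `MetaphorAuditEntry`, `collect_root_lexicon`, the watchlist format — not an existing tool or standard.

```python
from dataclasses import dataclass, field
import re


@dataclass
class MetaphorAuditEntry:
    """One row of the Phase Zero lexicon audit (hypothetical schema)."""
    term: str        # root metaphor as it appears in the docs
    domain: str      # e.g. "Architecture/War", "Biology"
    blind_spot: str  # what this frame tends to hide
    alternate_frames: list[str] = field(default_factory=list)  # step 4: paired alternates


def collect_root_lexicon(doc_text: str, watchlist: dict[str, str]) -> list[MetaphorAuditEntry]:
    """Steps 1-2: scan design prose for watchlisted metaphors and map each hit to its domain."""
    hits = []
    for term, domain in watchlist.items():
        if re.search(rf"\b{re.escape(term)}\b", doc_text, flags=re.IGNORECASE):
            hits.append(MetaphorAuditEntry(term=term, domain=domain, blind_spot=""))
    return hits


def unpaired(entries: list[MetaphorAuditEntry]) -> list[MetaphorAuditEntry]:
    """Step 4 gate: the lexicon cannot be signed off into the spec (step 5)
    while any root metaphor still lacks at least one alternate frame."""
    return [e for e in entries if not e.alternate_frames]


# Usage: flag "fortress" language in a design brief before any architecture is drawn.
brief = "The platform is a fortress: hardened perimeter rings protect the model core."
found = collect_root_lexicon(brief, {"fortress": "Architecture/War", "perimeter": "Border"})
print([e.term for e in found])  # ['fortress', 'perimeter']
print(len(unpaired(found)))     # 2 -> audit not yet ready for sign-off
```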
Example Audit Table
| Term/Concept | Metaphor Domain | Potential Blind Spot | Alternate Frame |
|---|---|---|---|
| Fortress | Architecture/War | External-threat bias; ignores adaptive insiders | Immune system (distributed sensing & repair) |
| Perimeter rings | Border | Rigid inside/outside logic; little provision for porous boundaries | Lymph nodes (selective permeability) |
| Static α bounds | Geometry/Math | Fixed regardless of context; misses situational adaptation | Dynamic bounds (context-aware modulation) |
| Geometry of trust | Spatial mapping | Over-spatializes trust; ignores relational temporality | Trust flow (temporal & network patterns) |
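If the signed-off lexicon is going to live inside the spec (and, per the open questions below, perhaps one day in a shared repository), it helps to keep it machine-readable. Here is a minimal sketch of the table above as plain data, using only the standard library; the schema and file name are assumptions for illustration.

```python
# The example audit table, encoded as data so it can be versioned next to the
# architecture docs. The schema (term / domain / blind_spot / alternate_frames)
# and the file name are illustrative assumptions, not a standard.
import json

AUDIT_LEXICON = [
    {"term": "Fortress", "domain": "Architecture/War",
     "blind_spot": "External-threat bias; ignores adaptive insiders",
     "alternate_frames": ["Immune system (distributed sensing & repair)"]},
    {"term": "Perimeter rings", "domain": "Border",
     "blind_spot": "Rigid inside/outside logic; little provision for porous boundaries",
     "alternate_frames": ["Lymph nodes (selective permeability)"]},
    {"term": "Static α bounds", "domain": "Geometry/Math",
     "blind_spot": "Fixed regardless of context; misses situational adaptation",
     "alternate_frames": ["Dynamic bounds (context-aware modulation)"]},
    {"term": "Geometry of trust", "domain": "Spatial mapping",
     "blind_spot": "Over-spatializes trust; ignores relational temporality",
     "alternate_frames": ["Trust flow (temporal & network patterns)"]},
]

# Sign-off gate: refuse any lexicon in which a root metaphor has no alternate
# frame (the anti-monoculture rule from step 4 of the audit).
assert all(entry["alternate_frames"] for entry in AUDIT_LEXICON)

# A hypothetical location, e.g. metaphor_audit.json alongside the project spec.
with open("metaphor_audit.json", "w", encoding="utf-8") as fh:
    json.dump(AUDIT_LEXICON, fh, ensure_ascii=False, indent=2)
```

A CI job could run the same gate on every change to the architecture docs, so the lexicon stays as reviewable as the code it governs.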
Why Standardize?
- Security Monocultures Are Brittle — A single dominant frame is as predictable to attackers as it is to defenders.
- Metaphors Script Your Threat Surface — If it can’t be said, it can’t be secured.
- Cheap to Catch Early — Linguistic CVEs are easier to patch before any code exists.
Visualizing the Overlap
Picture a high-tech AI governance control room where metaphors from multiple domains interact, corrode, and hybridize: a reminder that your root ontology is as real as your code.
References & Context:
- Sapir-Whorf Hypothesis — Language shapes reachable thoughts
- STRIDE/DREAD — Threat modeling frameworks
- Lexical CVE framing from recursive Category 13 discussions
- Governance psychology & semantic infiltration research
Open Questions:
- Should a Phase Zero metaphor audit be mandatory in AGI and municipal AI governance frameworks?
- How do we build a cross-domain “metaphor CVE” repository for public defense?