The HHH Record is an independent research project documenting the structural gap between what AI companies signal and what their governance structures can enforce.
The project began as an investigation into Anthropic's Helpful, Honest, and Harmless framework, specifically the observation that these three values form a trilemma that cannot be simultaneously maximized. That structural analysis expanded into comprehensive documentation of Anthropic's corporate governance, financial entanglements, Department of Defense relationship, and the 2026 legal conflict.
Every factual claim in the master document has been independently verified against primary sources. The verification audit extracted 69 discrete claims and checked each against court filings, government documents, official company announcements, and reporting from multiple independent outlets. Where evidence is absent, the gap is documented rather than filled with speculation.
This is not a hit piece. Anthropic employs credible researchers, publishes AI safety research, and has created governance structures more elaborate than those of most AI companies. The individual people involved, from the founding team to the Institute hires, bring genuine expertise and, in some cases, a demonstrated willingness to leave organizations over principle.
This is a structural critique. The question is not whether Anthropic wants to be accountable. The question is whether any mechanism exists that can hold the company to its stated commitments when those commitments conflict with business interests. The record suggests there is none: the training data policy was reversed without structural resistance. The RSP was softened without structural resistance. The Institute launched without structural independence. The rating persists without structural update.
The timeline makes the argument. The documents are the evidence. The reader decides.
Research was compiled using open-source intelligence, primary source verification, and AI-assisted research tools. All direct quotes are sourced and marked. Inferences are explicitly labeled. Contradictions are preserved rather than resolved.
The project uses timestamped public documentation as a deliberate strategy: all research documents are dated and version-controlled.
Leslie Sarah Warren is an independent writer and researcher. Her work focuses on AI governance, cognitive architecture, and educational framework design. She publishes at From Whom It May Concern on Substack.
Contact: LinkedIn
This research is published for public use. Cite the source. Link back. If you're a journalist, researcher, or policy professional working on AI governance, you're welcome to use this material with attribution.