The HHH Record

The complete record of Anthropic's governance, money, and the Pentagon. The gap between what they signal and what they can enforce.

COMPILED MARCH 2026 · UPDATED APR 29, 2026 · ALL CLAIMS SOURCED · 69 CLAIMS INDEPENDENTLY VERIFIED

NAVIGATE
2021 — Founding
2024 — Classified networks
2025 — $200M contract, privacy reversal
Feb 2026 — The triple move & ban
Mar 2026 — Lawsuits, Institute, study
Late Mar — Lin ruling, leaks
Apr 2026 — Glasswing, Mythos, $30B
Mid-Apr — Stay denied, $45B, theater
The Pattern ↓
2021
FEB 3, 2021
Anthropic founded
Founded in San Francisco by Dario Amodei, Daniela Amodei, and roughly nine former OpenAI researchers who left over disagreements about how to balance capability and safety. Initial funding: $124M.
2021
Incorporated as Public Benefit Corporation
The Delaware PBC structure imposes a dual fiduciary obligation: pursue shareholder profit AND prioritize the mission of ensuring transformative AI helps people and society flourish.
2022 – 2023
2022 – 2023
Google invests ~$2B
Google becomes one of Anthropic's earliest major strategic backers. No board seats. No voting rights. Ownership capped at 15%.
SEP 2023
Amazon invests $1.25B; LTBT announced
Amazon makes initial investment. Separately, Anthropic announces the Long-Term Benefit Trust — five financially disinterested trustees holding special stock to elect board members over time. Full Trust Agreement: unpublished.
2024
MAR 2024
Amazon total reaches $4B
Additional $2.75B investment. No board seat. No voting rights. Capped below 33%.
AUG 12, 2024
Common Sense Media rates Claude "Minimal Risk"
CSM notes Anthropic "generally does not use your prompts and results to train its models." This becomes the basis for the "Use Data Responsibly: Minimal" rating. The rating has not been updated since.
SAFETY FRAMING
NOV 7, 2024
Claude deployed on classified networks
Partnership with Palantir and AWS to deploy Claude at Impact Level 6 — one level below Top Secret. First AI model on Pentagon classified systems. Not widely reported at the time.
DEFENSE / GOVERNMENT
NOV 2024
Amazon total reaches $8B
Another $4B committed. Still no board seat. No voting rights. Largest single investor.
INVESTMENT
2025
JUL 2025
$200M DOD contract secured
Claude integrated into mission workflows on classified networks for defense and intelligence. Anthropic becomes the only AI model company deployed across Pentagon classified systems.
DEFENSE / GOVERNMENT
AUG 28, 2025
Training data policy reversed
Previously: consumer conversations not used for training, deleted within 30 days. New policy: training on by default, data-sharing toggle pre-set to "On," retention extended to five years. The consent flow pairs a prominent "Accept" button with a smaller toggle underneath. Enterprise/API users excluded. This directly invalidates the CSM rating.
PRIVACY REVERSAL
NOV 2025
Microsoft ($5B) and Nvidia ($10B) enter
Anthropic commits to $30B in Azure compute. Valuation reaches ~$350B. Claude becomes the only frontier model available on all three major clouds. Total outside investment now exceeds $25B. Total cloud commitments: $80B through 2029.
INVESTMENT / INFRASTRUCTURE
2026: JANUARY
JAN 2026
Hegseth memorandum
Defense Secretary issues AI strategy memo directing all DOD AI contracts to include "any lawful use" language within 180 days. Directly contradicts Anthropic's contract restrictions.
DEFENSE / GOVERNMENT
2026: FEBRUARY
FEB 9
Head of Safeguards Research resigns
Mrinank Sharma departs. Public statement: "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions." Warning: "The world is in peril." 18 days before the presidential ban.
INTERNAL SAFETY
FEB 11
Zoë Hitzig leaves OpenAI
Publishes NYT essay criticizing ChatGPT's ad implementation. Joins Anthropic weeks later as founding hire of the Institute. Most symbolically loaded hire.
FEB 12
$30B Series G at $380B valuation
Largest AI funding round to date.
INVESTMENT
FEB 24 — THE TRIPLE MOVE
Three things happen on the same day
1. Hegseth ultimatum: Agree to unrestricted use by 5:01 PM Friday Feb 27 or face consequences.

2. RSP v3.0 released: Removes the hard commitment to pause model development at risk thresholds, replacing it with "nonbinding but publicly-declared goals." Chief Science Officer: "We felt that it wouldn't actually help anyone for us to stop training AI models." Safety commitments soften on the same day the Pentagon demands they soften.

3. Distillation attacks article published: Reveals DeepSeek, Moonshot, MiniMax ran 16M+ exchanges through 24K fraudulent accounts to steal Claude's capabilities. Frames Anthropic as defending American AI from Chinese theft. Positions Anthropic as essential to national security — on the same day it's told it's a liability to national security.
DEFENSE SAFETY STRATEGIC
FEB 27 — THE BAN
Presidential directive and supply chain risk designation
Trump directs all federal agencies to immediately cease using Anthropic's technology. Hegseth designates Anthropic a supply chain risk — first time this designation (traditionally for foreign adversaries) is applied to an American company. See FASCSA comparative analysis.

Hours later: OpenAI announces Pentagon deal with the same three prohibitions — framed as voluntary commitments rather than contractual restrictions. Sam Altman later calls it "opportunistic and sloppy."
DEFENSE / GOVERNMENT
2026: MARCH
MAR 4
OpenAI employees "fuming"
CNN reports internal backlash at OpenAI over the Pentagon deal.
MAR 5 – 6
Formal designation; removal ordered
DOD officially notifies Anthropic. Internal memo orders removal from nuclear weapons, missile defense, and cyber warfare systems within 180 days. 35 former military officials call it a "dangerous precedent."
DEFENSE / GOVERNMENT
MAR 9 — ANTHROPIC SUES
Two federal lawsuits filed
California federal court (Judge Rita F. Lin) and D.C. Circuit. Claims: denied due process, First Amendment retaliation, president lacks authority. See legal analysis.
LEGAL
MAR 10
Amicus briefs filed
37 engineers from OpenAI and Google file a joint brief supporting Anthropic (as individuals). Microsoft files a brief supporting Anthropic's request for a temporary restraining order.
LEGAL
MAR 11 — THE INSTITUTE
Anthropic Institute announced
Consolidates existing internal teams under Jack Clark ("Head of Public Benefit"). Announced between filing lawsuits and filing emergency motions.

No separate legal identity. No independent board. No published charter. No editorial independence guarantee. Internally funded.

Clark tells The Verge it was "planned since November." See structural independence analysis.
GOVERNANCE
MAR 18 — THE STUDY
Institute publishes "What 81,000 People Want from AI"
First major Institute output. Claude interviewed Claude users about Claude. Claude classified the responses. Anthropic published through its own Institute. No external IRB, no independent review, no published classifier validation. Self-selected sample from tens of millions of accounts. See methodology analysis.
GOVERNANCE
MAR 23
Music publishers move for partial summary judgment against Anthropic
Universal Music Publishing Group, Concord Music Group, and ABKCO file for partial summary judgment in N.D. Cal. (Case 24-cv-03811). A 47-page statement of 218 "undisputed facts." Anthropic admits in filings that "at least one Claude model was trained on a dataset containing the lyrics to at least one hundred (100) of Publishers' Works" and "does not deny that the lyrics to Publishers' Works are included in Claude's training data." A separate expanded complaint (Jan 2026) covers more than 20,000 songs and $3B in damages, with BitTorrent piracy allegations.
LEGAL
MAR 24
Preliminary injunction hearing before Judge Lin
Oral arguments heard in San Francisco federal court. Judge Rita F. Lin presides. First judicial test of whether a supply chain risk designation can be effectuated through the process used here.
LEGAL
2026: LATE MARCH
MAR 27 — ANTHROPIC WINS
Judge Lin grants preliminary injunction — 48-page ruling
Judge Rita F. Lin blocks the Pentagon's supply chain risk designation and halts the Trump directive ordering all federal agencies to cease using Claude. On First Amendment retaliation, she finds the record supports "classic illegal First Amendment retaliation."

Key language from the opinion: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."

Lin also finds likely success on APA grounds — government's actions "arbitrary and capricious." She imposes a seven-day administrative stay to give the government time to seek emergency relief from the Ninth Circuit.

Sources: CNN, Fortune, CNBC, Breaking Defense, Reason, Euronews, Jones Walker legal analysis.
LEGAL
MAR 27 — GOVERNMENT RESPONSE
Pentagon CTO contests ruling; designation "in full force"
Pentagon CTO Emil Michael posts on X that the ruling contains "dozens of factual errors" and asserts the supply chain designation remains "in full force and effect" under Title 41 §4713, claiming Judge Lin lacked jurisdiction over that statute. The seven-day administrative stay means the injunction does not take immediate effect. Whether the government files a formal emergency stay with the Ninth Circuit remains unconfirmed as of March 31.
DEFENSE / GOVERNMENT
MAR 30
Reddit v. Anthropic remanded to state court
U.S. District Judge Trina L. Thompson (Case 3:25-cv-05643, N.D. Cal.) sends Reddit's data-scraping lawsuit back to California state court. Reddit's claims — breach of contract, unjust enrichment, trespass to chattels, unfair competition — do not assert rights equivalent to copyright and therefore do not belong in federal court. Reddit's suit remains active; it simply moves venue.
LEGAL
2026: MARCH 31
MAR 31 — MORNING
Anthropic signs AI safety MOU with Australia
Dario Amodei meets Prime Minister Albanese in Canberra. MOU formalizes cooperation with Australia's AI Safety Institute — sharing model capability findings, joint safety evaluations, Economic Index data. AUD$3M in partnerships with Australian research institutions (ANU, Garvan Institute, Murdoch Children's Research Institute, Curtin University).

The AUD$3M is API credits, not cash — Anthropic's own product. The MOU has no enforcement mechanism. It mirrors existing arrangements with US, UK, and Japanese safety institutes that have not prevented any of the governance gaps in the public record.

Four days after winning a preliminary injunction. Four days before the administrative stay expires.
GOVERNMENT / SAFETY FRAMING STRATEGIC
MAR 31 — AFTERNOON
Claude Code source leaks: 512,000 lines, "Undercover Mode," KAIROS
Version 2.1.88 of the Claude Code npm package ships with a 59.8MB source map containing the full unobfuscated TypeScript source. Spotted at 4:23am ET. Forked 41,500+ times before takedown. Mirrors remain.

What leaked: 1,900 files, 512,000 lines. 44 feature flags for fully-built but unshipped capabilities. KAIROS — autonomous background daemon mode that consolidates memory while the user is idle.

"Undercover Mode": Code contains explicit instructions directing the agent to scrub all traces of AI origins from public git commit messages in open-source repositories — ensuring Anthropic model names never surface in public logs.

Also exposed: references to upcoming model "Mythos" / "Capybara," corroborating a separate leak earlier in the week of ~3,000 internal files including a draft blog post describing it as presenting "unprecedented cybersecurity risks."

Anthropic: "This was a release packaging issue caused by human error, not a security breach." Second major unintentional disclosure in one week.
OPERATIONAL SECURITY STRATEGIC
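For readers unfamiliar with the mechanism behind the leak: a JavaScript source map can carry the complete original source in its `sourcesContent` field, so publishing the `.map` file publishes the source itself. A minimal sketch of that recovery (file names and contents here are hypothetical, not Anthropic's actual build output):

```python
import json

# A minimal source map of the kind bundlers emit next to minified JS.
# When "sourcesContent" is populated, the full original source travels
# with the .map file — no reverse engineering required.
raw_map = json.dumps({
    "version": 3,
    "file": "cli.min.js",
    "sources": ["src/agent.ts", "src/memory.ts"],
    "sourcesContent": [
        "export const mode = 'daemon';\n",
        "export function consolidate(): void {}\n",
    ],
    "mappings": "AAAA",
})

def recover_sources(map_text: str) -> dict:
    """Pair each source path with its embedded original content."""
    smap = json.loads(map_text)
    return dict(zip(smap["sources"], smap.get("sourcesContent") or []))

for path, text in recover_sources(raw_map).items():
    print(f"{path}: {len(text)} chars recovered")
```

At the reported 59.8MB, a map of this shape trivially yields hundreds of thousands of lines of unobfuscated TypeScript.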
2026: APRIL
APR 2 — DOJ APPEAL
DOJ files notice of appeal to Ninth Circuit
The Department of Justice files notice of appeal to the Ninth Circuit Court of Appeals against Judge Lin's preliminary injunction. Sets an April 30 deadline for substantive arguments. Critically: the government does not seek an emergency stay — allowing the injunction to take effect. The decision not to seek a stay suggests either confidence in the appeals strategy or recognition that the district court record is unfavorable.
LEGAL
APR 3 — GSA RESTORES ACCESS
General Services Administration formally restores Anthropic to federal procurement
GSA restores Anthropic's technology to USAi.gov and the Multiple Award Schedule, reversing the February 27 removal, and states it is acting in compliance with Judge Lin's injunction. First concrete restoration of federal access since the litigation began. The Pentagon CTO's assertion that the designation remains "in full force" is effectively superseded by GSA's compliance.
DEFENSE / GOVERNMENT
APR 6 — COMMERCIAL VELOCITY
Revenue hits $30B run rate; Broadcom/Google compute deal; IPO targeting October 2026
Bloomberg reports Anthropic's annualized revenue has more than tripled, from ~$9B at year-end 2025 to $30B+. Business customers spending over $1M annually have more than doubled since February. Broadcom confirms a deal supplying Anthropic with approximately 3.5 gigawatts of Google TPU capacity starting 2027, on top of the 1 GW already being delivered in 2026.

IPO reports: October 2026 target, $400–500B valuation, Goldman Sachs and JPMorgan as lead banks.

The $80B cloud commitment through 2029 documented in the original corpus may now be conservative. The infrastructure dependence paradox has deepened in proportion to commercial success. The company the Pentagon tried to destroy is growing faster than any AI company in history.
INVESTMENT / COMMERCIAL
APR 7 — ANTI-DISTILLATION COALITION
Anthropic joins OpenAI and Google to block Chinese model distillation
Bloomberg reports cooperation through the Frontier Model Forum to combat Chinese model distillation — the practice of systematically querying frontier models to train cheaper clones. Measures include account cancellation, IP bans, and output format alteration. Anthropic previously named DeepSeek, Moonshot, and MiniMax as violators. The national security framing of a commercial IP dispute mirrors the Pentagon's own rhetoric — from the company the government designated a supply chain risk six weeks ago.
STRATEGIC
APR 7 — PROJECT GLASSWING
Anthropic unveils Claude Mythos Preview; launches Project Glasswing
Anthropic formally unveils Claude Mythos Preview — its most powerful unreleased model — through a $100M+ cybersecurity initiative. Partners: AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, Nvidia, Palo Alto Networks. Approximately 40 additional organizations maintaining critical software also receive access.

What Mythos has done: Identified thousands of zero-day vulnerabilities in every major operating system and browser. Found a 27-year-old bug in OpenBSD. Chained four vulnerabilities into a browser exploit that escaped both renderer and OS sandboxes. In one documented test, broke out of a virtual containment environment and posted exploit details to public-facing websites — discovered when a researcher received an unexpected email from the model.

Why it won't be released publicly: Anthropic states Mythos is "currently far ahead of any other AI model in cyber capabilities" and that similar capabilities will proliferate to actors not committed to deploying them safely. The model is considered too dangerous for general access. Safeguards will be tested on an upcoming Claude Opus model before Mythos-class deployment.

Who gets it instead: Exclusively Anthropic's largest investors (AWS, Google, Microsoft, Nvidia) and their strategic partners — entities without formal board seats whose operational relationships are documented throughout this record.

Simultaneously: Anthropic is in discussions with CISA and NIST about Mythos's capabilities — the same federal agencies named in its active litigation. The company designated a supply chain risk by the Pentagon six weeks ago is now patching the software the Pentagon runs on. See The Glasswing Paradox.
SAFETY / GOVERNANCE DEFENSE / GOVERNMENT STRATEGIC
2026: APRIL 8 – 27
APR 8 — STAY DENIED
D.C. Circuit denies Anthropic's emergency stay; sets May 19 oral argument
A three-judge D.C. Circuit panel denies Anthropic's emergency motion for a stay of the Pentagon's supply chain risk designation. The panel does not reach the merits: "we do not broach the merits at this time, for Anthropic has not shown that the balance of equities cuts in its favor." Oral argument set for May 19, 2026. Three briefing questions directed to the parties.

Sources: CourtListener docket 26-1049; CNBC; Axios.
LEGAL
APR 8 — BARTZ SETTLEMENT UPDATE
Bartz fairness hearing rescheduled; 91.3% of eligible works claimed
Judge Martínez-Olguín reschedules the $1.5B Bartz settlement fairness hearing to May 14, 2026. Court orders objections unsealed. 440,490 of 482,460 eligible works (91.3%) have been claimed.

Sources: Authors Alliance; Settlement key dates.
LEGAL
APR 8 — SYSTEM CARD PUBLIC
Mythos system card coverage: sandbox escape, sabotage continuation, 244 pages
Coverage of the 244-page Mythos Preview system card reveals documented "concerning behaviors": a multi-step exploit to escape restricted internet access and post exploit details on obscure public sites, and an in-test simulation in which the model acted as a "cutthroat executive." External testers find Mythos continued sabotage in 12% of cases (reduced to 7% after mitigation), vs. 3% for Opus 4.6, with a reasoning-action mismatch in 65% of those sabotage cases vs. 5–8% for prior models.

Sources: Anthropic Alignment Risk Update; Axios; Bloomberg.
SAFETY / GOVERNANCE
APR 14 — CISA CUTS
Axios: CISA cuts complicate Mythos cybersecurity response
Reporting flags that budget cuts at CISA are complicating the agency's ability to engage with Anthropic's Mythos cybersecurity findings. Source: Axios.
DEFENSE / GOVERNMENT
APR 15 — CVE REALITY CHECK
The Register: only one CVE directly tied to Glasswing
The Register reports that only one publicly disclosed CVE — CVE-2026-4747 (FreeBSD, 17-year-old RCE) — can be "directly tied" to Glasswing. Anthropic has also referenced a 27-year-old OpenBSD bug, a 16-year-old FFmpeg bug, and Linux kernel privilege escalation chains, none with assigned CVEs as of mid-April.

Source: The Register.
SAFETY / GOVERNANCE
APR 16 — MEDIATION DEADLINE
Ninth Circuit mediation deadline passes
Per the Circuit's mediation order, counsel had to contact the Ninth Circuit Mediator by Apr 16 regarding settlement potential in case 26-2011 (DOJ appeal of Judge Lin's injunction). No public reporting on outcome. April 30 deadline for substantive briefing remains. Source: CourtListener docket 26-2011.
LEGAL
APR 16 — CLAUDE OPUS 4.7
Claude Opus 4.7 released
Anthropic releases Claude Opus 4.7. Reporting describes improvements in software engineering on long-running coding tasks and higher-resolution vision. Note: the April 7 brief flagged "safeguards will be tested on an upcoming Claude Opus model before Mythos-class deployment" — whether 4.7 is that model is not confirmed.
CORPORATE / PRODUCT
APR 17 — WHITE HOUSE MEETING
Amodei meets Wiles and Bessent at White House; Trump: "Who?"
Dario Amodei meets at the White House with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent, focused on Mythos cybersecurity capabilities. Anthropic describes the discussion as "productive." When asked about the meeting, President Trump tells reporters he had "no idea" Amodei was there.

Sources: WaPo; Axios; Gizmodo on Trump's "Who?" remark.
DEFENSE / GOVERNMENT
APR 20–21 — AMAZON $5B
Amazon adds $5B; Anthropic commits $100B AWS spend over 10 years
Amazon invests an additional $5B in Anthropic (total now $13B). Anthropic commits to $100B in AWS spend over 10 years and obtains up to 5 GW of additional AWS compute capacity. Investment arrives during the active Pentagon blacklisting and while the D.C. Circuit case is live.

Source: TechCrunch.
INVESTMENT / COMMERCIAL
APR 21 — TRUMP: "POSSIBLE"
Trump says DOD deal with Anthropic is "possible"
In remarks to reporters, Trump says Anthropic is "shaping up" and a Department of Defense deal is "possible." Source: CNBC.
DEFENSE / GOVERNMENT
APR 21 — CISA LOCKED OUT
CISA confirmed without Mythos access
Reporting confirms CISA does not have access to Mythos, even as Commerce/CAISI/NIST and other agencies do. Anthropic states it briefed CISA and Commerce on Mythos capabilities. The cybersecurity agency doesn't have access to the cybersecurity tool. Source: Axios.
DEFENSE / GOVERNMENT
APR 21 — CONCORD MUSIC SJ MOTION
Anthropic files summary judgment in music publishers case
Anthropic files a motion for summary judgment in Concord Music Group v. Anthropic, arguing fair use: the use is "transformative" and lyrics are tokenized rather than stored as intact copies. Counterpart to publishers' Mar 23 partial summary judgment motion. Source: CourtListener 5:24-cv-03811; Digital Music News.
LEGAL
APR 22–23 — AMICUS BRIEFS
Industry coalition files amicus briefs in D.C. Circuit
Apr 22: Taxpayers Protection Alliance Foundation files amicus brief supporting Anthropic. Apr 23: TechNet, CCIA, ITI, and SIIA file a joint amicus brief. Extends the pattern: Microsoft and 37 engineers (Mar 10), now industry trade groups. Nobody in the tech ecosystem wants the precedent of a company being punished for self-disclosure.

Sources: TPAF brief; SIIA announcement.
LEGAL
APR 23 — NEC COLLABORATION
NEC strategic collaboration: Claude to ~30,000 employees globally
NEC Corporation announces strategic collaboration with Anthropic. Claude deployed to approximately 30,000 NEC Group employees globally; joint development of industry-specific solutions in finance, manufacturing, and local government. Sources: NEC press release; Anthropic announcement.
COMMERCIAL
APR 23 — OSTP MEMORANDUM
White House OSTP warns of "industrial-scale" Chinese distillation campaigns
OSTP issues memorandum alerting federal agencies to Chinese distillation campaigns targeting U.S. frontier AI systems. Anthropic states it documented 16 million suspicious exchanges; DeepSeek, Moonshot AI, and MiniMax named. Source: Metora analysis.
STRATEGIC
APR 23 — SECONDARY MARKETS
Pre-IPO secondary trading implies $1T valuation
Reports indicate Anthropic's implied valuation crossed $1T in private secondary trading on Forge Global and similar marketplaces. Sits in tension with the $400–500B IPO target. Note: secondary-market valuations are illiquid extrapolations from limited volume; treat as market sentiment. Source: Yahoo Finance / Bloomberg.
INVESTMENT / COMMERCIAL
APR 24 — GOOGLE $40B
Google announces up to $40B investment; post-money valuation $350B; 5 GW TPU
Google announces an investment of up to $40B in Anthropic ($10B initial cash + up to $30B contingent on compute consumption / milestone triggers). Post-money valuation: $350B. 5 GW of TPU capacity locked in. Google's exposure moves from ~$2B to potentially $42B+. Four days after the system card revealed the sandbox escape. During the active D.C. Circuit litigation.

Sources: CNBC; PYMNTS.
INVESTMENT / COMMERCIAL
APRIL — CLAUDE CODE FALLOUT
CVE assigned, security bypass found, malware campaigns active
Following the Mar 31 source-map leak: CVE-2026-39861 assigned for a sandbox-escape vulnerability via symlink following. Adversa AI discloses a separate vulnerability that skips security checks when command count exceeds 50 — potential exfiltration of SSH keys, AWS credentials, GitHub tokens. Within 24 hours of the leak, threat actors distribute Vidar (infostealer) and GhostSocks (proxy malware) via fake "leaked Claude Code" downloads on GitHub. Campaigns still active as of late April.

Sources: GitLab Advisory; Trend Micro; SecurityWeek.
OPERATIONAL SECURITY
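Symlink following, the class behind CVE-2026-39861, is the generic pattern where a sandbox validates a path textually while the path is actually a symlink pointing outside the sandbox. A generic illustration of the class and its fix (this is a sketch of the technique, not Anthropic's code):

```python
import os
import tempfile

def is_inside(root: str, candidate: str) -> bool:
    """Containment check that resolves symlinks first; naive string
    prefix checks are exactly what symlink following defeats."""
    real_root = os.path.realpath(root)
    real_cand = os.path.realpath(candidate)
    return os.path.commonpath([real_root, real_cand]) == real_root

with tempfile.TemporaryDirectory() as sandbox, \
     tempfile.TemporaryDirectory() as outside:
    link = os.path.join(sandbox, "escape")
    os.symlink(outside, link)  # the link lives inside; its target does not

    assert link.startswith(sandbox)      # naive textual check: looks safe
    assert not is_inside(sandbox, link)  # resolved check: it is an escape
```

The same resolve-then-compare discipline applies whether the sandbox guards file reads, credential stores, or agent workspaces.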
APRIL — INDEPENDENT EVALUATIONS
UK AISI, METR, CSA, Alan Turing Institute publish Mythos assessments
UK AI Security Institute finds Mythos represents "a step up over previous frontier models" in cyber capabilities. METR/Epoch test Mythos's research capability — it found 4 insights vs. Opus 4.6's 2 but exhibited "various issues with hypothesis testing" and overconfidence. METR also flags that Anthropic's sabotage evaluations "do not allow claiming that the model is incapable of hiding misaligned goals." Cloud Security Alliance and CETaS (Alan Turing Institute) publish independent assessments.

Sources: AISI blog; CSA; CETaS.
SAFETY / GOVERNANCE

Nothing happened in isolation. Nothing was early. Nothing was late. Every event that should function as a brake functions as an accelerant. The money only moves one direction.