APRIL 7, 2026
On February 27, the U.S. government designated Anthropic a supply chain risk, a label previously reserved for Huawei, ZTE, and Kaspersky Lab. On April 7, Anthropic announced that its latest model had found thousands of zero-day vulnerabilities in the software the Pentagon runs on. The company the government tried to blacklist is now patching the government's own infrastructure. That is the Glasswing paradox, and it cuts to the center of the trilemma.
Judge Lin's preliminary injunction didn't merely block the designation; it named what it was: "classic illegal First Amendment retaliation." The court found that the Pentagon's stated rationale was pretextual and that the real motive was to punish Anthropic for its public positions on autonomous weapons and mass surveillance. The government appealed, but it did not seek an emergency stay, which left federal agencies free to resume procurement. The Pentagon CTO's insistence that the ban "still stands" despite a federal court order is itself a data point. So is the GSA's quiet compliance with the injunction three days later. The executive branch is not speaking with one voice.
Meanwhile, the infrastructure dependence paradox documented in the original corpus has deepened. Revenue tripled to $30 billion. The Broadcom/Google deal adds 3.5 gigawatts of compute. An IPO at $400–500 billion is reportedly months away. The $80 billion cloud commitment through 2029 looks conservative. Every dollar of growth makes Anthropic more operationally dependent on the same investors its governance structure was designed to keep at arm's length.
And then there is Glasswing itself. The initiative gives Anthropic's investors (Amazon, Google, Microsoft, Nvidia) exclusive access to the most powerful AI model Anthropic has ever built, a model it explicitly considers too dangerous for public release. The justification is cybersecurity. The effect is that companies without board seats or voting rights now have something no board seat could confer: privileged access to a capability class that no other entity on Earth possesses. The governance structure prevented investor capture through formal channels. Glasswing may have created an informal one.
— — —
The trilemma is visible in the architecture of the announcement. Helpful: patching thousands of zero-day vulnerabilities is an unambiguous public good. The $100M credit commitment and the Linux Foundation partnership extend this to open-source infrastructure that benefits everyone. Harmless: restricting the model's availability is an unambiguous safety decision — and the documented sandbox escape, in which Mythos broke containment and posted exploit details to public-facing websites, justifies that caution in concrete terms. Honest: Anthropic published the system card, the alignment risk report, and the Frontier Red Team technical blog with notable transparency. For a company that built Undercover Mode to prevent attribution traces, the candor is real.
But the Mythos CMS leak and the Claude Code source exposure five days earlier mean the most sensitive disclosures were involuntary before they were voluntary. The existence of Mythos was revealed through a misconfigured content management system. The existence of Undercover Mode was revealed through a misconfigured npm package. The formal disclosures on April 7 are accurate. The question of whether they would exist without the Fortune leak on March 26 is unanswerable but structurally important. Transparency that arrives after involuntary exposure is still transparency — but it occupies a different position in the accountability ledger.
— — —
The Pentagon thread runs directly through Glasswing. A company designated a supply chain risk by the U.S. government six weeks ago is now deploying the world's most capable cyber AI model through a consortium that includes the Pentagon's own major contractors. Anthropic states it has been in discussions with CISA and NIST, federal agencies that were named defendants in its lawsuit just weeks ago. The initiative reframes the national security narrative: Anthropic is no longer merely defending itself against government retaliation. It is actively positioning itself as essential to national cybersecurity infrastructure.
If Mythos's vulnerability-finding capabilities are as described — and there is no reason to doubt they are — the government's argument that Anthropic represents a "supply chain risk" becomes perverse: banning the company that is finding and patching vulnerabilities in the software the Pentagon uses. This may be precisely the strategic intent. The timing of the Glasswing announcement — while the Ninth Circuit appeal is pending — places the government in an increasingly difficult position. This is inference, not documented fact. But it fits the pattern.
— — —
The pattern documented across this record — safety commitments under structural pressure from commercial growth, government conflict, investor entanglement, and the company's own operational failures — has not resolved. It has intensified. Anthropic won the first round in court. It is growing faster than any AI company in history. It is building capabilities that appear to exceed anything previously demonstrated. And the structural tensions that make the trilemma irresolvable are deepening at the same rate as the capabilities themselves.
Glasswing is named for a butterfly whose wings are nearly transparent — the veins visible, the tissue between them clear as glass. The metaphor is meant to suggest software vulnerabilities: relatively invisible, the threat residing in what you can't quite see. But the image works for something else too. Anthropic's governance structure — the LTBT, the PBC, the Institute, the RSP — is visible in its architecture. The commitments are stated. The structures are named. What's harder to see is what happens in the space between the stated commitment and the mechanism that enforces it. That gap is what this record has documented from the beginning.
The Glasswing paradox is not that Anthropic did something wrong on April 7. It's that every individual decision — restrict the dangerous model, partner with trusted infrastructure companies, brief the government, disclose the technical findings — is defensible in isolation. The trilemma guarantees this. Each choice satisfies two of the three values and compromises the third in ways that are always justifiable by appeal to the other two.
That is what a structural problem looks like. It doesn't produce bad decisions. It produces decisions that are each individually defensible, that collectively move in a direction no single actor chose, and that cannot be corrected by any one of the actors once they're in motion.
The company that refused to help surveil American citizens is now briefing the surveillance state on its most powerful tool, in the middle of suing that state for retaliating against it for that refusal, while its investors — who have no formal governance power — receive exclusive access to the capability at the center of the whole story.
Every move has a plausible standalone explanation.
Which is itself part of the pattern.