From $200M Contract to Supply Chain Risk — Who Controls AI in Warfare?
On February 24, 2026, Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to deliver an ultimatum: give the military unfettered access to Claude for "all lawful purposes" by 5:01 PM Friday, February 28 — or face designation as a "supply chain risk" and lose all government contracts.[1] The demand was specific: Anthropic must remove two contractual restrictions that limited military use of Claude — prohibitions on mass domestic surveillance and fully autonomous weapons that operate without human oversight.[2]
Amodei refused. In a public statement posted February 26, he wrote: "We cannot in good conscience agree to allow the Department of War to use our models in all lawful use cases" — citing the two red lines as non-negotiable.[3] He added: "I believe deeply in the existential importance of using AI to defend the United States and other democracies... However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."[4] The Friday deadline passed. Within hours, everything changed.
"Anthropic is in trouble because I fired [them] like dogs, because they shouldn't have done that."
The dispute was not about whether Anthropic would serve the military; it was about two specific prohibitions the company refused to remove from its terms of service. Grasping the precision of these red lines is essential to understanding the magnitude of what followed.
Red Line 1: Mass Domestic Surveillance. Anthropic prohibited the use of Claude to enable surveillance of American citizens at scale. This restriction did not prevent the military from using Claude to monitor adversary communications, analyze foreign intelligence, or process battlefield data. It specifically targeted inward-facing surveillance — the use of AI to monitor the U.S. population.[3]
Red Line 2: Fully Autonomous Weapons. Anthropic prohibited the use of Claude in weapons systems that could select and engage targets without human oversight. This did not prevent AI-assisted targeting — the kind Maven Smart System already provides. It specifically prohibited the removal of the human from the decision to kill.[3]
Anthropic had already developed Claude Gov — a specialized version with relaxed restrictions for national security use. Claude Gov was "less prone to refuse requests that would be prohibited in the civilian context, such as using Claude for handling classified documents, military operations, or threat analysis."[9] The company had loosened nearly every other guardrail. It held firm on exactly two. Amodei later wrote that "to our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date."[4]
When Anthropic did not capitulate by the 5:01 PM deadline on February 28, Hegseth formally designated the company a "supply chain risk" — a classification typically reserved for foreign adversarial firms like Chinese telecom Huawei or Russian cybersecurity firm Kaspersky.[6] The designation went beyond canceling the $200 million contract. It prohibited all Pentagon contractors, suppliers, and partners from using Anthropic products — creating a cascading ban that threatened to cut the company off from the entire defense industrial base.[10]
Trump signed an executive order extending the ban across all federal agencies, calling Anthropic a "radical woke company" and declaring the government would not use AI models that impose "ideological restrictions."[11] The GSA removed Anthropic from the government's OneGov procurement agreement and the USAI.gov marketplace. Civilian agencies including HHS, NASA's Jet Propulsion Laboratory, and national laboratories were ordered to unwind all Anthropic-based solutions.[12]
Within hours of the Anthropic blacklist — on the same Friday evening — OpenAI announced it had reached a deal with the Pentagon to deploy ChatGPT in classified military systems.[7] The timing was surgical. OpenAI CEO Sam Altman had agreed to terms Amodei had rejected: full access for all lawful uses, no carve-outs for surveillance or autonomy.[13] Amodei later accused OpenAI of spreading "straight up lies" about the deal, writing in a leaked internal message: "The main reason [OpenAI] accepted and we did not is that they cared about placating employees, and we actually cared about preventing abuses."[14]
The full arc of Project Maven tells the story of Silicon Valley's capitulation to the Pentagon — and the single company that tried to hold the line.
2018: Google walks away. Over 3,000 Google employees signed an open letter demanding the company withdraw from Project Maven, the Pentagon's AI-driven drone surveillance program. Google complied, published AI principles barring weapons development, and became the symbol of tech resistance to military AI.[15]
2024: Google walks back. Google quietly reversed its position and began pursuing military contracts. By March 2026, Google announced Gemini AI agents for Pentagon use — starting on unclassified networks, with discussions underway for classified and top-secret access.[16] Eight pre-built agents would automate tasks for the military's 3 million staff. The company that once refused to help analyze drone footage now wants to put autonomous AI agents throughout the Department of War.
2024-2025: Anthropic fills the gap. Through its Palantir partnership and $200M contract, Anthropic became the only AI company with models in classified Pentagon networks.[17] Claude was integrated into Maven Smart System, the direct descendant of the Project Maven program that Google had abandoned. Anthropic accomplished what Google had refused to do, then went further: its technology was used in the classified operation to capture Maduro[18] and would become central to the Iran targeting campaign.[19]
2026: Anthropic holds the line — and pays the price. When the Pentagon demanded the last two guardrails be removed, Anthropic said no. And in the space of a single Friday afternoon, it went from the Pentagon's preferred AI partner to a supply chain risk equivalent to a Chinese telecommunications company.
The defense tech establishment lined up against Anthropic with striking unanimity.
Palmer Luckey, Anduril's founder, articulated the Pentagon-aligned position most bluntly: "You cannot decide who you want to sell [to] and not when it comes to [defense]."[20] Luckey has built a $6 billion defense empire on the premise that Silicon Valley's moral qualms about military technology are both naive and dangerous. His argument: AI will be used in warfare regardless; the only question is whether American AI or Chinese AI sets the terms. "There is no moral high ground in using inferior technology," he told Fox News.[21]
On March 9, 2026, Anthropic filed suit in federal court against the Department of War and related federal agencies, alleging the supply chain risk designation was unlawful retaliation.
Anthropic argues the supply chain risk designation punishes the company for exercising its right to set terms of service — a form of compelled speech. The government is effectively demanding a private company remove contractual language it deems ideologically inconvenient. If this precedent holds, any company negotiating with the federal government can be blacklisted for refusing to agree to the government's terms.[6]
The "supply chain risk" designation under 10 USC §4819 was designed for foreign adversary-linked entities that pose security threats through their products — Huawei backdoors, Kaspersky data exfiltration. Anthropic is an American company that refused a contractual term. The designation has never been used against a domestic firm for a commercial dispute.[6]
The designation doesn't just kill the $200M contract — it forces every Pentagon contractor to stop using Claude. Defense tech companies are already dropping Anthropic.[22] The total commercial impact could reach "hundreds of millions of dollars" beyond the original contract, per Anthropic's filing. The company is seeking an emergency stay while the case proceeds.[23]
On March 12, three days after Anthropic filed suit, Palantir CEO Alex Karp confirmed at AIPCon that Palantir is still using Claude in Maven Smart System, including for Iran operations.[24] The Pentagon's most important AI targeting platform is still running the blacklisted company's model. Karp said Palantir plans to add other LLMs, but the immediate operational reality is that the military designated Anthropic a supply chain risk while simultaneously depending on its technology to fight a war.
The Anthropic-Pentagon schism is not a commercial dispute. It is a constitutional confrontation over who controls the most powerful technology ever created — and whether a private company can set moral limits on how the government uses it.
The precedent being set is extraordinary. A company that developed Claude Gov with loosened restrictions, deployed to classified networks, supported the Maduro capture, and powered the Iran targeting campaign is being treated identically to Chinese telecommunications firms — because it would not agree to two specific terms: unlimited domestic surveillance capability and fully autonomous lethal decision-making. Every other AI company watching this dispute has received a clear message: comply fully or be destroyed. OpenAI got it immediately. Google got it within days.
The deepest irony is that Claude is still running the war. As of March 13, 2026, Palantir's Maven Smart System — the AI engine behind 5,500+ strikes in Iran — continues to use Anthropic's model.[24] The Pentagon designated the company that powers its kill chain as a national security threat. It banned the very technology it depends on to fight its war. And it did so not because Claude failed to perform — but because Anthropic's CEO said there were two things he wouldn't let it do.
Project Maven's full arc, from Google's 2018 walkout to Claude's 2026 kill chains, is an eight-year story of Silicon Valley learning that you can either set the terms of military AI or the military will set them for you. Anthropic is the latest company to learn this lesson. It is unlikely to be the last to pay for it.