Anthropic has sued the U.S. government after the Pentagon designated the company a supply-chain risk and federal agencies began cutting off use of its AI tools, escalating a dispute over whether the military can demand access to commercial AI models without the safety limits their developers impose. Anthropic says the action is unlawful and retaliatory, while the administration has framed the move as a national security measure.
The conflict centers on Anthropic’s refusal to remove two core restrictions from its Claude models: use in fully autonomous weapons and use for mass domestic surveillance. In a statement published March 5, CEO Dario Amodei said Anthropic supports national security work and has already provided Claude for intelligence analysis, modeling and simulation, operational planning, cyber operations, and other defense uses. But he said the company has never accepted those two categories of use and does not believe private firms should be involved in operational decision-making.
The Pentagon escalated sharply after that disagreement. According to Anthropic’s court filings, Defense Secretary Pete Hegseth met with Amodei on February 24 and demanded that the company drop the restrictions within four days. After Anthropic refused, the administration moved to block use of Claude across the federal government, and Hegseth later informed Anthropic that it had been designated a supply-chain risk to national security. Anthropic says the label has already led agencies including the General Services Administration, Treasury, and State to cut ties.
Anthropic argues the designation is both legally unsupported and unusually severe. The company says the relevant authority is meant to protect government systems from actual supply-chain threats, not punish an American supplier in the middle of a policy dispute. The Associated Press reported that this is the first known time the federal government has used that designation against a U.S. company. Anthropic’s lawsuit raises claims under the Administrative Procedure Act, the First Amendment, and the Fifth Amendment, among others.
The White House has answered in sharply political terms. Liz Huston, speaking for the administration, said the military would operate under the Constitution rather than “any woke AI company’s terms of service,” and that the government would not let a private company dictate how the armed forces function. That response has turned the case into more than a single contract fight: it now sits at the intersection of AI safety, executive power, defense procurement, and the limits of corporate control over military use of commercial technology.
The dispute also has a larger Silicon Valley and defense-tech context. Weeks before the lawsuit, Pentagon Chief Technology Officer Emil Michael publicly said the military was pushing top AI firms, including Anthropic and OpenAI, to make their tools available on classified networks with fewer standard restrictions. Reuters reported in February that Michael said the Pentagon wanted frontier AI deployed across all classification levels, while officials argued the military should be free to use commercial AI tools so long as those uses comply with U.S. law.
That background helps explain why Anthropic has become a flashpoint. The company says Claude has already been used in classified settings through third parties and that it remains committed to supporting national security missions. But Anthropic also says it does not have confidence that its products could be used reliably or safely for lethal autonomous warfare, and its court filings say the administration’s actions are jeopardizing contracts, harming its reputation, and threatening the economic value of one of the fastest-growing AI companies in the world.
The Pentagon’s hard line could also reshape the competitive landscape among major AI vendors. AP reported that the dispute has pushed the Defense Department to look toward alternatives including Google’s Gemini, OpenAI’s ChatGPT, and Elon Musk’s Grok for some of the work previously done with Claude. That makes the case not just a policy fight but a consequential commercial battle involving government access, model safeguards, and which AI firms will be trusted inside the national security apparatus.
With Dario Amodei challenging the government in court, Pete Hegseth at the center of the Pentagon’s crackdown, Liz Huston defending the administration’s position, and Emil Michael representing the Defense Department’s broader push for fewer AI restrictions, the case has quickly become one of the clearest tests yet of who gets to set the rules for military use of commercial AI. However the lawsuit unfolds, it is likely to shape how AI companies, defense agencies, and Washington policymakers negotiate those boundaries from here.