The Pentagon may allow limited continued use of Anthropic’s AI tools in rare national security cases even as the Defense Department moves ahead with a broader phaseout tied to the company’s supply-chain risk designation. An internal Pentagon memo dated March 6 and signed by Chief Information Officer Kirsten Davies says exemptions could still be granted beyond the six-month ramp-down period if officials determine the use is critical and supported by a risk-mitigation plan.
The memo suggests the Defense Department is trying to preserve flexibility for sensitive operations while still enforcing the larger directive to reduce reliance on Anthropic’s technology. According to the memo, units seeking continued access would need to justify the request and document how risks would be managed. The same guidance prioritizes the removal of Anthropic’s software from higher-stakes defense systems, including areas tied to nuclear and missile defense.
That leaves the Pentagon in a more complicated position than a simple all-or-nothing ban. On one hand, Defense Secretary Pete Hegseth’s order set in motion a broad effort to phase Anthropic out of federal defense use after the company was labeled a supply-chain risk. On the other, the Davies memo indicates officials recognize Anthropic’s tools may still be embedded deeply enough in some workflows, supply chains, or mission support environments that a complete and immediate cutoff is not always practical.
Anthropic has already challenged the government’s actions in court. The company filed lawsuits in San Francisco and Washington after the Pentagon’s designation, arguing that the move was unlawful and retaliatory. The legal fight grew out of a dispute over Anthropic’s refusal to remove restrictions barring use of its AI systems for fully autonomous weapons and mass domestic surveillance.
That dispute has put Anthropic CEO Dario Amodei at the center of one of the biggest clashes yet between an AI company and the U.S. national security establishment. Anthropic has said it supports certain defense and intelligence uses of AI, but has drawn a hard line around autonomous lethal decision-making and domestic surveillance. The Pentagon’s response, including the supply-chain risk label and phaseout order, has turned that policy disagreement into a test case over how much control AI developers can retain once their systems are used inside government operations.
The Davies memo adds a new layer to that conflict because it suggests the Defense Department may not be able, or may not want, to sever every Anthropic connection on the same timetable. Defense contractors are to be notified within 30 days and must certify full compliance within 180 days, but the exemption language indicates that some uses could continue beyond that window under tightly defined circumstances.
That makes the current policy more nuanced than the original crackdown suggested. Hegseth remains at the center of the broader push to remove Anthropic from defense systems, while Davies’ memo introduces a path for narrow exceptions. Amodei, meanwhile, remains the executive most closely identified with the company’s effort to defend its safety limits in court and in public.
For the AI industry, the episode underscores how quickly national security work can shift from commercial opportunity to legal and policy confrontation. And for Anthropic, the prospect of limited exemptions does not resolve the larger fight. It only shows that even as the Pentagon moves to phase the company out, some parts of the defense system may still consider its tools difficult to replace.