Anthropic Challenges Pentagon's "Supply Chain Risk" Designation in Court
By admin | Mar 06, 2026 | 3 min read
On Thursday, Dario Amodei announced that Anthropic intends to legally contest the Defense Department’s decision to classify the AI company as a supply chain risk—a designation he has described as “legally unsound.”
The statement follows the Department’s official labeling of Anthropic as a supply chain risk, concluding a weeks-long disagreement over the extent of military control permissible over AI systems. Such a designation can prevent a company from collaborating with the Pentagon and its contractors. Amodei has firmly stated that Anthropic’s AI should not be employed for mass surveillance of Americans or for fully autonomous weapons, whereas the Pentagon has asserted it requires unrestricted access for “all lawful purposes.”
Amodei clarified that the vast majority of Anthropic’s customers remain unaffected by this designation. “With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts,” he explained. Offering a preview of Anthropic’s likely legal argument, Amodei emphasized that the Department’s letter is narrow in scope. “It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain,” he stated. “Even for Department of War contractors, the supply chain risk designation doesn’t (and can’t) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.”
Amodei said that Anthropic had been engaged in productive discussions with the Department in recent days, conversations that some believe were disrupted when an internal memo he sent to staff was leaked. In that memo, Amodei characterized rival OpenAI's dealings with the Department of Defense as "safety theater."
OpenAI has since secured an agreement to work with the Defense Department in Anthropic's place, a move that has drawn criticism from some of OpenAI's own employees. In his Thursday statement, Amodei addressed the leak, asserting that the company did not intentionally share the memo or instruct anyone else to do so. "It is not in our interest to escalate the situation," he said. He noted that the memo was written within "a few hours" of several major announcements: a presidential Truth Social post stating Anthropic would be removed from federal systems, followed by Defense Secretary Hegseth's supply chain risk designation, and finally the Pentagon's deal announcement with OpenAI. Amodei apologized for the memo's tone, calling that day "a difficult day for the company," and said it did not reflect his "careful or considered views." Written six days earlier, he added, it now stands as an "out-of-date assessment."
He concluded by affirming that Anthropic’s highest priority is ensuring American soldiers and national security experts retain access to essential tools amid ongoing major combat operations. Anthropic is currently supporting certain U.S. operations in Iran, and Amodei stated the company will continue providing its models to the Defense Department at “nominal cost” for “as long as necessary to make that transition.”
Anthropic could challenge the designation in federal court, most likely in Washington, but the underlying law complicates such a challenge. It restricts the typical avenues companies can use to contest government procurement decisions and grants the Pentagon broad discretion on national security matters. As Dean Ball—a former Trump-era White House advisor on AI who has criticized Hegseth’s handling of Anthropic—explained: “Courts are pretty reluctant to second-guess the government on what is and is not a national security issue… There’s a very high bar that one needs to clear in order to do that. But it’s not impossible.”