Pentagon Labels Anthropic AI a "Supply Chain Risk" Over Military Use Restrictions



By admin | Mar 05, 2026 | 2 min read


The Department of Defense has formally informed Anthropic's leadership that the company and its offerings have been classified as a supply chain risk, according to a report citing a senior official. This designation follows several weeks of escalating tensions between the artificial intelligence laboratory and the Pentagon.

Anthropic's chief executive, Dario Amodei, has declined to permit military use of the company's AI systems for mass surveillance of American citizens or for fully autonomous weapon systems in which no human is involved in targeting and firing decisions. The Department of Defense has countered that its use of artificial intelligence should not be constrained by the policies of a private contractor.

Notably, supply chain risk designations are usually reserved for foreign adversaries. The label requires any company or agency doing business with the Pentagon to certify that it is not using Anthropic's models, a decision that risks significant disruption for both the company and the Department's own operations.

Anthropic has been the only frontier AI lab with systems ready for classified work. The U.S. military currently uses Claude, one of Anthropic's models, in its Iran campaign, where American forces rely on AI tools to rapidly process operational data. Claude is a primary component of Palantir's Maven Smart System, a platform relied upon by military operators in the Middle East.

Several critics have described labeling Anthropic a supply chain risk over this policy disagreement as an unprecedented step by the Department. Dean Ball, a former AI advisor in the Trump White House, has characterized the designation as a "death rattle" of the American republic, contending that the government has abandoned strategic clarity and respect in favor of a "thuggish" tribalism that treats domestic innovators more harshly than foreign adversaries.

Hundreds of employees from OpenAI and Google have petitioned the DOD to retract its designation and called on Congress to challenge what they see as an improper exercise of authority against an American technology firm. They have also encouraged their own leaders to maintain a united front in continuing to refuse the Pentagon’s requests to employ their AI models for domestic mass surveillance and for “autonomously killing people without human oversight.”

Amid this conflict, OpenAI independently finalized an agreement with the Department, allowing military use of its AI systems for “all lawful purposes.” Some OpenAI employees have raised concerns about the vague wording of this deal, suggesting it could enable precisely the kinds of applications Anthropic sought to prevent.

Amodei has labeled the DOD's actions “retaliatory and punitive.” Reports indicate he has suggested that his refusal to endorse or contribute to President Trump’s campaign played a role in the dispute with the Pentagon. In contrast, OpenAI President Greg Brockman has been a strong supporter of Trump, recently contributing $25 million to the MAGA Inc. Super PAC.
