AI Experts Back Anthropic in Landmark Lawsuit Against Defense Department Over Supply Chain Risk Designation
By admin | Mar 09, 2026 | 2 min read
A group of more than 30 employees from OpenAI and Google DeepMind submitted a statement on Monday in support of Anthropic's lawsuit against the U.S. Defense Department. The filing, which appears in recent court documents, comes after the agency designated the AI company a supply chain risk.
The brief states, “The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry.” Among the signatories is Google DeepMind’s chief scientist, Jeff Dean.
Late last week, the Pentagon assigned Anthropic the supply chain risk label, a designation typically applied to foreign adversaries. The label was issued after the AI firm declined to permit the Department of Defense to use its technology for mass surveillance of American citizens or for autonomous weapons operation. The DOD had contended that it should be free to employ AI for any “lawful” purpose without restrictions imposed by a private contractor.
The amicus brief backing Anthropic appeared on the court docket just hours after the creator of Claude filed two separate lawsuits against the DOD and other federal agencies. Wired was the first to report the news.
In their filing, the employees from Google and OpenAI argue that if the Pentagon was “no longer satisfied with the agreed-upon terms of its contract with Anthropic,” the agency could have “simply canceled the contract and purchased the services of another leading AI company.”
In a related move, the DOD finalized an agreement with OpenAI almost immediately after designating Anthropic as a supply chain risk—a decision that prompted protests from many employees at the ChatGPT maker.
The brief further warns, “If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.” It adds, “And it will chill open deliberation in our field about the risks and benefits of today’s AI systems.”
The filing also emphasizes that Anthropic’s stated ethical boundaries represent legitimate concerns that justify strong protective measures. It argues that, in the absence of comprehensive public laws regulating AI use, the contractual and technical limitations developers place on their systems serve as a vital safeguard against catastrophic misuse.
Many of the employees who endorsed this statement have also signed open letters over the past several weeks, urging the DOD to retract the label and calling on their company leaders to support Anthropic while rejecting unilateral military use of their AI technologies.