Anthropic Counters Pentagon's "National Security Risk" Claims in Court Filing



By admin | Mar 21, 2026 | 3 min read


Late Friday afternoon, Anthropic presented two sworn declarations to a federal court in California, challenging the Pentagon's characterization of the AI firm as an "unacceptable risk to national security." The company contends the government's case is built on technical misconceptions and on allegations that were never raised during the extensive negotiations that preceded the dispute.


These declarations were submitted with Anthropic’s reply brief in its lawsuit against the Department of Defense, filed ahead of a hearing scheduled for Tuesday, March 24, before Judge Rita Lin in San Francisco. The disagreement originated in late February when President Trump and Defense Secretary Pete Hegseth announced publicly they were severing ties with Anthropic after the company declined to permit unrestricted military application of its AI systems.

The declarations were provided by Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the company’s Head of Public Sector. Heck previously served as a National Security Council official in the Obama White House, later working at Stripe before joining Anthropic to oversee government relations and policy. She attended the February 24 meeting where CEO Dario Amodei met with Defense Secretary Hegseth and the Pentagon’s Under Secretary, Emil Michael.

In her statement, Heck identifies what she calls a central inaccuracy in the government’s filings: the assertion that Anthropic sought any form of approval authority over military operations. She states this claim is false, writing, “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role.” She further notes that the Pentagon’s worry about Anthropic potentially disabling or modifying its technology during an operation was never brought up in discussions, appearing only in later court documents without giving Anthropic a chance to address it.

A notable detail in Heck’s declaration reveals that on March 4—the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic—Under Secretary Michael emailed Amodei to state the two sides were “very close” on the very issues the government now cites as evidence of a national security threat: Anthropic’s positions on autonomous weapons and mass surveillance of Americans. Heck includes this email as an exhibit.

This communication contrasts with subsequent public statements. On March 5, Amodei published a note about “productive conversations” with the Pentagon. The following day, Michael posted on X that “there is no active Department of War negotiation with Anthropic.” A week later, he told CNBC there was “no chance” of resumed talks. Heck’s implication is clear: if Anthropic’s stance on those two issues constitutes a security threat, why was a senior Pentagon official suggesting alignment on them immediately after the designation was made?


Ramasamy contributes a different expertise to the case. Prior to joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government clients, including in classified settings. At Anthropic, he built the team that introduced its Claude models into national security and defense applications, including a $200 million Pentagon contract announced last summer.

His declaration addresses the government’s theoretical concern that Anthropic could disrupt military operations by disabling or altering its technology. Ramasamy asserts this is technically impossible. Once Claude is deployed within a government-secured, “air-gapped” system managed by a third-party contractor, Anthropic has no access to it; there is no remote kill switch, backdoor, or method to push unauthorized updates. He explains that any modification would require the Pentagon’s explicit approval and action to install, making an “operational veto” a fiction. Anthropic cannot view what government users input into the system, nor extract that data.

Ramasamy also challenges the claim that Anthropic’s employment of foreign nationals poses a security risk. He notes that relevant employees have undergone U.S. government security clearance vetting—the same process required for access to classified information. He adds that, to his knowledge, Anthropic is the only AI company where cleared personnel directly built the AI models intended for classified environments.

Anthropic’s lawsuit argues that the supply-chain risk designation—the first ever applied to an American company—constitutes government retaliation for the firm’s public views on AI safety, violating the First Amendment. In a 40-page filing earlier this week, the government wholly rejected this framing, stating that Anthropic’s refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security determination, not punishment for the company’s views.




