Anthropic's Claude Surges as OpenAI Faces Backlash Over Pentagon Deal
By admin | Mar 08, 2026 | 3 min read
In just over a week, talks over the Pentagon's use of Anthropic's Claude collapsed. The Trump administration then designated Anthropic a supply-chain risk, a label the AI firm says it will challenge in court. Meanwhile, OpenAI rushed to publicize its own agreement, triggering significant backlash: users deleted ChatGPT in protest, propelling Anthropic's Claude app to the top of the App Store rankings. At least one OpenAI executive has also resigned, citing concerns that the announcement was made hastily and without adequate safeguards.
Sean observed that this scenario is unusual for several reasons, partly because OpenAI and Anthropic make products that dominate public conversation. More critically, the conflict centers on "how their technologies are being used or not being used to kill people," a subject that inevitably draws intense scrutiny. Kirsten added that the situation should "give any startup pause."
A preview of their edited conversation follows.
Kirsten questioned whether other startups, upon witnessing the recent federal government debate involving the Pentagon and Anthropic, might reconsider pursuing federal contracts. She pondered if this marks a shift in sentiment.
Sean shared her curiosity but doubted an immediate change. He noted that many companies, from startups to established Fortune 500 firms like General Motors, routinely do defense work with the Department of Defense, often without public attention. The unique challenge for OpenAI and Anthropic, he argued, is their high-profile status: they make widely used products that are constantly discussed, placing them under a spotlight most government contractors avoid.
He added a crucial distinction: the intense scrutiny stems specifically from debates about the potential use of their AI in lethal operations, a layer of gravity absent from discussions of more traditional defense contractors. Consequently, Sean doesn't anticipate that companies like Applied Intuition, which position themselves as dual-use, will retreat, largely because they face no similar public focus or shared understanding of the implications.
Anthony reflected on the unique nature of this story, tied closely to these specific companies and personalities. While broader discussions about technology's role in government are valuable, this incident provides a curious case study. He pointed out that Anthropic and OpenAI aren't fundamentally opposed in principle; both publicly advocate for restrictions on AI use. The dispute seems more about Anthropic's firm stance against altering contractual terms, compounded by reported significant friction between the entities.
Sean acknowledged a notable interpersonal rivalry at play, which shouldn't be ignored.
Kirsten agreed but emphasized the serious implications beyond that dynamic. Summarizing the core issue, she described the dispute between the Pentagon and Anthropic, in which Anthropic appears to have lost ground even though its technology remains in use. OpenAI's subsequent involvement escalated the situation, producing notable backlash, including a reported 295% surge in ChatGPT uninstalls after its DoD deal was confirmed.
For Kirsten, the most critical and alarming aspect is the Pentagon's attempt to modify terms on an existing contract. This deviation from standard, lengthy government contracting processes is abnormal and represents a significant concern. She stressed that this political maneuvering within the DoD should give any startup serious reason to hesitate.