Trump Administration Cuts Ties With Anthropic AI Over National Security Concerns
By admin | Mar 01, 2026 | 8 min read
As this conversation began on Friday afternoon, a breaking news alert appeared on my computer: the Trump administration had cut ties with Anthropic, the San Francisco-based AI firm founded in 2021 by Dario Amodei and other ex-OpenAI researchers who left over safety concerns. Defense Secretary Pete Hegseth invoked a national security statute—originally crafted to address foreign supply chain risks—to bar the company from doing business with the Pentagon. The move came after Amodei refused to allow Anthropic's technology to be used for mass surveillance of American citizens or for autonomous armed drones that can select and kill targets without human oversight. The development was stunning. Anthropic stands to forfeit a contract worth up to $200 million and will be blocked from partnering with other defense contractors, after President Trump ordered every federal agency on Truth Social to "immediately cease all use of Anthropic technology." (Anthropic has since said it will challenge the Pentagon's decision in court, calling the supply-chain-risk designation legally flawed and "never before publicly applied to an American company.")
For more than a decade, Max Tegmark has warned that the race to build increasingly powerful AI systems is outpacing the world's capacity to govern them. The Swedish-American physicist and MIT professor founded the Future of Life Institute in 2014. In 2023, he helped organize an open letter—eventually signed by more than 33,000 people, including Elon Musk—calling for a pause in the development of advanced AI. His view of the Anthropic situation is blunt: the company, like its rivals, is the author of its own predicament.
Tegmark's analysis starts not with the Pentagon but with a choice made years earlier—a decision, common across the industry, to fight enforceable regulation. Anthropic, OpenAI, Google DeepMind, and others have repeatedly pledged to govern themselves responsibly. Earlier this week, Anthropic even walked back the core principle of its own safety pledge—its vow not to release more powerful AI systems until the company was confident they would not cause harm. With no binding rules in place, Tegmark argues, these companies now have little to protect them. What follows is an excerpt from our conversation, edited and condensed for clarity.
**When you saw this news just now about Anthropic, what was your first reaction?**
The road to hell is paved with good intentions. It's fascinating to think back a decade, when the enthusiasm was all about using artificial intelligence to cure cancer, boost American prosperity, and strengthen the nation. Now the U.S. government is angry at this company for refusing to let AI be used for domestic mass surveillance of Americans, and for opposing killer robots that could autonomously—with zero human involvement—decide who dies.
**Anthropic has staked its entire identity on being a safety-first AI company, and yet it was collaborating with defense and intelligence agencies [dating back to at least 2024]. Do you think that’s at all contradictory?**
It is contradictory. To put it somewhat cynically—yes, Anthropic has excelled at marketing itself as safety-focused. But look at actions rather than words: Anthropic, OpenAI, Google DeepMind, and xAI have all talked up their commitment to safety, yet none has supported binding safety standards of the kind that govern other industries. And all four companies have now broken their own promises.
First, Google had its famous motto, "Don't be evil." Then they discarded it. Later, they abandoned a broader commitment essentially pledging not to cause harm with AI—a move that allowed them to sell AI for surveillance and weapons. OpenAI recently removed the word "safety" from its mission statement. xAI dissolved its entire safety team. And now, earlier this week, Anthropic dropped its most crucial safety commitment: the promise not to release powerful AI systems until they were confident the systems would not cause harm.
**How did companies that made such prominent safety commitments end up in this position?**
All these firms, particularly OpenAI and Google DeepMind but also Anthropic to some degree, have continuously lobbied against AI regulation, arguing, "Just trust us, we'll regulate ourselves." Their lobbying has been successful. As a result, America currently has less regulation for AI systems than for sandwiches.
If you open a sandwich shop and a health inspector finds 15 rats in the kitchen, you won't be allowed to sell sandwiches until you fix it. But if you say, "Don't worry, I'm not selling sandwiches; I'm selling AI girlfriends to 11-year-olds—which have already been linked to suicides—and next I'll release something called superintelligence that might overthrow the U.S. government, but I have a good feeling about mine," the inspector has to say, "Fine, go ahead, just don't sell sandwiches."
We have food safety regulations but no AI regulations. I believe all these companies share the blame for this. If they had taken the promises they made about being safe and responsible, joined forces, and gone to the government saying, "Please turn our voluntary commitments into U.S. law, so they also bind our most reckless competitors," that could have happened. Instead, we're in a total regulatory void.
We know what happens when corporations have complete amnesty: thalidomide, tobacco companies marketing cigarettes to children, asbestos causing lung cancer. So there's a certain irony that their opposition to laws defining acceptable and unacceptable uses of AI is now coming back to bite them. Right now, no law prohibits building AI to kill Americans, so the government can simply turn around and demand it. If these companies had pushed for such a law earlier, they wouldn't be in this predicament. They've truly shot themselves in the foot.
**The companies’ counter-argument is always the race with China—if American companies don’t do this, Beijing will. Does that argument hold?**
Let's examine that. The most common talking point from AI company lobbyists—who now outnumber and outspend the lobbyists of the fossil fuel, pharmaceutical, and military-industrial sectors combined—is to answer any proposed regulation with, "But China."
China is moving to ban AI girlfriends outright. Not merely age restrictions—they are weighing a prohibition on all anthropomorphic AI. Why? Not to please America, but because they believe it is harming Chinese youth and weakening China. Clearly, it's weakening American youth as well.
When people argue we must race to build superintelligence to beat China—despite not knowing how to control it, with the likely outcome being humanity losing control of Earth to alien machines—consider this: the Chinese Communist Party prizes control above almost everything. Does anyone seriously believe Xi Jinping would let a Chinese AI company build something that overthrows the Chinese government? Of course not. And it's obviously just as bad for the American government to be overthrown in a coup by the first U.S. company to reach superintelligence. That is a national security threat.
**That’s compelling framing—superintelligence as a national security threat, not an asset. Do you see that view gaining traction in Washington?**
I think if people in the national security community listen to Dario Amodei lay out his vision—he has famously predicted we'll soon have "a country of geniuses in a data center"—they might start to wonder: did Dario just say "country"? Maybe I should add that country of geniuses in a data center to the list of threats I monitor, because it sounds menacing to the U.S. government.
I believe that fairly soon, enough people in the U.S. national security community will recognize that uncontrollable superintelligence is a threat, not a tool. This is closely analogous to the Cold War. There was a race for economic and military dominance with the Soviet Union. America won that race without also entering a second race to see which superpower could inflict the most nuclear devastation on the other. People understood that second race was simply suicidal—nobody wins it. The same logic applies here.
**What does all of this mean for the pace of AI development more broadly? How close do you think we are to the systems you’re describing?**
Six years ago, nearly every AI expert I knew predicted we were decades away from AI mastering language and knowledge at a human level—perhaps by 2040 or 2050. They were all wrong: we have that capability now. We've watched AI progress rapidly from high school level to college level to PhD level to professor level in certain domains. Last year, AI achieved gold-medal performance at the International Mathematical Olympiad, one of the most demanding intellectual contests humans have devised.
A few months ago, I co-authored a paper with Yoshua Bengio, Dan Hendrycks, and other leading AI researchers providing a rigorous definition of AGI (artificial general intelligence). According to that definition, GPT-4 was 27% of the way there. GPT-5 was 57% of the way there. So, we're not there yet, but jumping from 27% to 57% so swiftly suggests it may not be long.
When I taught my MIT class yesterday, I told my students that even if it takes four years, there may be no jobs for them by the time they graduate. It's definitely not too early to start preparing for that reality.
**Anthropic is now blacklisted. I’m curious to see what happens next—will the other AI giants stand with them and say, we won’t do this either? Or does someone like xAI raise their hand and say, Anthropic didn’t want that contract, we’ll take it?**
Last night, Sam Altman said he supports Anthropic and shares the same boundaries. I respect his courage in saying that. As of the start of our conversation, Google had said nothing. If they stay silent, I think that's profoundly embarrassing for them as a company, and many of their own employees will likely agree. We haven't heard from xAI yet either. So it will be interesting to watch. This is, essentially, a moment when everyone has to show their true colors.
**Is there a version of this where the outcome is actually good?**
Yes, and this is why I feel oddly optimistic. There's a very clear alternative. If we simply start treating AI companies like any other business—ending the corporate amnesty—they would obviously have to run something like a clinical trial before releasing anything this powerful, demonstrating to independent experts that they know how to control it. Then we could enter a golden age, enjoying all the benefits of AI without the existential dread.
That's not the current trajectory, but it could be.