OpenAI Strikes Deal to Deploy AI Models on Pentagon's Classified Network
By admin | Feb 28, 2026 | 3 min read
In a Friday evening announcement, OpenAI CEO Sam Altman revealed that his company has finalized a deal allowing the Department of Defense to use its AI models on the department's classified network. The development comes after a notable public dispute between the Pentagon—referred to as the Department of War during the Trump administration—and OpenAI's competitor, Anthropic.
The Pentagon had been urging AI firms, including Anthropic, to permit their models to be used for "all lawful purposes." Anthropic, however, aimed to establish clear boundaries against mass domestic surveillance and fully autonomous weapon systems. In a detailed statement on Thursday, Anthropic CEO Dario Amodei clarified that the company "never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." He contended that "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."
This stance garnered support from more than 60 OpenAI employees and 300 Google employees, who signed an open letter this week urging their employers to back Anthropic's position. Following the collapse of talks between Anthropic and the Pentagon, President Donald Trump criticized the "Leftwing nut jobs at Anthropic" in a social media post. He also instructed federal agencies to cease using the company's products after a six-month phase-out period.
In a separate post, Secretary of Defense Pete Hegseth accused Anthropic of attempting to "seize veto power over the operational decisions of the United States military." Hegseth further announced he is classifying Anthropic as a supply-chain risk, stating, "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
On Friday, Anthropic responded that it had "not yet received direct communication from the Department of War or the White House on the status of our negotiations," but affirmed it would "challenge any supply chain risk designation in court."
Interestingly, Altman asserted in a post on X that OpenAI's new defense contract incorporates safeguards addressing the very concerns that sparked the conflict with Anthropic. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Altman added that OpenAI "will build technical safeguards to ensure our models behave as they should, which the DoW also wanted," and will assign engineers to work with the Pentagon "to help with our models and to ensure their safety."
"We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," Altman continued. "We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements."
According to reporting by Fortune's Sharon Goldman, Altman informed OpenAI employees at an all-hands meeting that the government will permit the company to construct its own "safety stack" to prevent misuse. He noted that "if the model refuses to do a task, then the government would not force OpenAI to make it do that task."
Altman's statement was released shortly before news emerged that the U.S. and Israeli governments have commenced bombing Iran, with Trump calling for the overthrow of the Iranian government.