OpenAI CEO Sam Altman shares Anthropic’s concerns when it comes to working with the Pentagon
By Hadas Gold, CNN
(CNN) — ChatGPT maker OpenAI has the same red lines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to CNN.
That means even if the Pentagon decides to cancel its Anthropic contract in favor of OpenAI, it’ll have to contend with the same concerns over the use of AI in autonomous weapons and mass surveillance of US citizens.
OpenAI CEO Sam Altman said in an interview with CNBC on Friday morning that it’s important for companies to work with the Pentagon, “as long as it is going to comply with legal protections” and “the few red lines” that OpenAI and many in the AI industry have when it comes to AI use in the military.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” Altman continued. “I’m not sure where this is going to go.”
Asked to respond to Altman’s statement, a Pentagon official told CNN the department will “continue to expand our arsenal of AI capabilities” and that it “will deploy AI capabilities for all lawful use cases.”
“The Department will continue to ensure the safety and security of all models utilized regardless of classification level,” the official added.
Anthropic’s Claude system was the first AI model to be used for work on the military’s classified systems. But the Pentagon has given the company until 5:01 p.m. on Friday to agree to drop its internal guardrails and allow its system to be used for “all lawful use.” If Anthropic doesn’t agree, it’ll lose a $200 million contract with the Pentagon and could be designated a “supply chain risk,” the same label given to companies connected to foreign adversaries.
Anthropic has said it wants to work with the Pentagon, but that its worries about the use of AI in autonomous weapons and mass surveillance stem from concerns that the technology remains unreliable for those applications. Current laws and regulations do not properly account for advancements in AI, the company also said.
In a memo to OpenAI staff on Thursday obtained by CNN, Altman said “this is no longer just an issue between Anthropic and the DoW; this is an issue for the whole industry and it is important to clarify our stance.”
“We believe this dispute isn’t about how AI will be used, but about control. We believe that a private US company cannot be more powerful than the democratically-elected US government, although companies can have lots of input and influence,” Altman continued in the memo, first reported by the Wall Street Journal.
“The way the current situation has gone risks our national security, and also risks the government resorting to actions which could risk American leadership in AI. We would like to try to help de-escalate things,” Altman wrote.
OpenAI is one of several AI companies to have signed deals with the Pentagon last summer to “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Pentagon said at the time. But Anthropic’s Claude was the only model used on the military’s classified systems until recently. A Pentagon official told CNN this week that Elon Musk’s Grok is now “on board with being used in classified setting,” while other companies, including OpenAI, were “close.”
CNN’s Haley Britzky contributed to this report.
The-CNN-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.