'Shouldn't have rushed...': Altman admits Pentagon deal looked opportunistic
What's the story
OpenAI CEO Sam Altman has admitted that the company's recent deal with the US Department of Defense was rushed. He said they shouldn't have hurried into it and are now working on revising its terms. The revisions include new language on surveillance principles, clarifying that "the AI system shall not be intentionally used for domestic surveillance of US persons and nationals."
Revised terms
DoD acknowledges limits on surveillance of US persons
The revised terms also clarify that the Department of Defense understands the limits on intentional tracking, surveillance, or monitoring of US persons or nationals, including through the purchase or use of commercially available personal or identifiable information. The changes come after OpenAI announced a new deal with the Defense Department on Friday, hours after US President Donald Trump ordered federal agencies to stop using rival AI company Anthropic's tools.
Clarification
Tools won't be used by intelligence agencies like the NSA
Altman also clarified that the Defense Department had confirmed OpenAI's tools wouldn't be used by intelligence agencies like the NSA. He said there are many things the technology isn't ready for, and many areas where the tradeoffs required for safety aren't yet understood. The CEO admitted he made a mistake by rushing to get the deal out on Friday, saying OpenAI was genuinely trying to de-escalate things but that it ended up looking opportunistic and sloppy.
Deal timing
OpenAI's deal followed failed negotiations between Anthropic and Pentagon
OpenAI's deal with the Pentagon came after failed negotiations between Anthropic and the Defense Department, which led Defense Secretary Pete Hegseth to designate Anthropic a supply-chain threat. After an initial deal last year, Anthropic became the first AI lab to deploy its models across the Defense Department's classified network. However, it later sought guarantees that its tools wouldn't be used for domestic surveillance or to operate autonomous weapons without human control.
Controversy
Dispute erupted after revelation during Maduro raid
The dispute erupted after it was revealed that Anthropic's Claude had been used by the US military in its January raid to capture Venezuelan President Nicolas Maduro, although the company didn't publicly object to that use case. The timing of OpenAI's deal with the Defense Department has drawn online backlash, with many users reportedly ditching ChatGPT for Claude on app stores.
Equal treatment
Altman hopes DoD offers same terms to Anthropic
In his post, Altman further addressed the controversy, saying: "In my conversations over the weekend, I reiterated that Anthropic should not be designated as a SCR [supply chain risk], and that we hope the DoD [Department of Defense] offers them the same terms we've agreed to." Anthropic was founded in 2021 by a group of former OpenAI staff and researchers who left after disagreements over its direction.