Pentagon may end AI partnership with Anthropic over usage restrictions
What's the story
The US Department of Defense is considering ending its partnership with artificial intelligence firm Anthropic after the company insisted on maintaining certain restrictions on how the US military can use its models, according to Axios. The Pentagon has been negotiating with Anthropic and three other AI companies, OpenAI, Google, and xAI, for unrestricted access to their tools for "all lawful purposes."
Company response
Anthropic's response to Pentagon's concerns
Anthropic has not accepted the Pentagon's terms, and months of negotiations have produced growing frustration. An Anthropic spokesperson said the company has not discussed the use of its AI model Claude for specific operations with the Pentagon, and that discussions with the US government have so far focused on a limited set of usage-policy questions, including strict limits around fully autonomous weapons and mass domestic surveillance.
Network access
Push for unrestricted access to AI tools
The Pentagon is also pushing major AI companies, including OpenAI and Anthropic, to make their tools available on classified networks without many of the standard restrictions these companies usually impose on users. The push is part of a broader strategy by the US military to leverage advanced technology in its operations.
Operational use
Claude's role in capturing Maduro
Anthropic's AI model, Claude, was used in a US military operation to capture former Venezuelan President Nicolas Maduro; the deployment was made possible through Anthropic's partnership with data analytics firm Palantir. The episode highlights both the potential applications of advanced AI in real-world military operations and the ongoing debate over its ethical implications and usage restrictions.