Pentagon bans AI chatbot Claude from its systems
The Pentagon has largely banned Anthropic's Claude AI from its systems, allowing only narrow exceptions for critical national security missions where no alternative exists.
The ban follows a February standoff in which Anthropic refused to drop its safety rules against mass surveillance and autonomous weapons, prompting the government to order a 180-day phase-out of Claude.
Clash highlights bigger questions about AI rules
This marks the first time the Pentagon has labeled a US tech company a supply chain risk. Contracting officers have 30 days to notify contractors, who must then certify full compliance within the 180-day phase-out period.
The clash raises a larger question about who sets the rules for powerful AI: the government, which wants more flexibility, or companies like Anthropic, which are pushing for stricter safety and ethics standards.