Pentagon labels Claude AI as supply chain risk
The US Department of Defense has labeled Anthropic and its Claude AI as a "supply chain risk" after the company refused to allow its technology for things like mass surveillance and autonomous weapons.
The designation will require defense vendors and contractors to certify that they do not use Anthropic's models in Pentagon work, and could force them to drop Anthropic from Department of Defense projects entirely — a measure usually reserved for foreign adversaries.
Claude AI was already powering key operations
This is a major shake-up for US military technology. Claude AI was already powering key operations, so contractors will now have to scramble for alternatives, which could slow projects down and drive up costs.
Hundreds of employees from OpenAI and Google are urging the government to rethink the ban, while Anthropic plans to challenge it in court, describing the action as legally unsound, retaliatory, and punitive.