Pentagon gives AI firm Anthropic ultimatum over military use
The Pentagon has given AI company Anthropic until Friday to remove certain safety limits from its Claude AI system.
These limits currently block Claude from assisting with mass domestic surveillance and with fully autonomous weapons targeting.
If Anthropic refuses, it could be designated a "supply chain risk," a label that would bar it from government contracts; lawyers say such an adverse action would almost certainly trigger downstream litigation.
Implications for tech and military landscape
Anthropic had been the only top AI lab with special Pentagon access, so the standoff could shape how much control tech companies retain over military uses of their AI.
The outcome could also set a precedent for what counts as "lawful" use in military tech, affecting future deals with other major players such as OpenAI and Google.
For anyone interested in tech ethics or national security, this is a real-world test of who sets the rules for powerful new tools: companies or the government.