Pentagon threatens to blacklist Anthropic over AI's red lines
The US Department of Defense is pressuring Anthropic to remove safety restrictions from its AI, Claude, by Friday—or risk losing major contracts and facing government action.
The Pentagon is pressing Anthropic to permit broader, "all lawful" or unrestricted use of the AI, while Anthropic maintains red lines against mass domestic surveillance and fully automated targeting decisions; Anthropic's CEO refused to budge after a tense meeting this week.
The issue heated up after reports — not confirmed by Reuters — that Claude was used in a high-profile capture in January 2026.
Standoff could reshape future rules for AI use in warfare
If Anthropic doesn't comply, it could be blacklisted from defense projects—and major contractors like Boeing and Lockheed Martin might have to cut ties with the company.
That could disrupt urgent military operations and shake up how AI is used in national security.
The outcome of this standoff could set precedents for how powerful AI is used in warfare.