Pentagon officially classifies Anthropic as supply-chain risk after AI standoff
Anthropic refused to comply with Pentagon's demands

Mar 06, 2026
09:41 am

What's the story

The US Department of Defense (DoD) has officially classified tech company Anthropic and its products as a supply-chain risk. The decision follows weeks of disagreement between the two parties, after Anthropic CEO Dario Amodei refused to let the military use the company's AI systems for mass surveillance, or for fully autonomous weapons that operate without human intervention in targeting or firing decisions.

Operational impact

Major impact on Anthropic's operations

The Pentagon's classification could have a major impact on Anthropic's operations. The company has been the only frontier AI lab with classified-ready systems. Now, any company or agency working with the Pentagon will have to certify that it doesn't use Anthropic's models. This could severely limit Anthropic's business opportunities and partnerships in the defense sector.

Military use

Claude already in use by US military

Despite the dispute, Anthropic's AI systems, particularly Claude, remain in use by the US military. The technology is a key component of Palantir's Maven Smart System, which is used by military operators in the Middle East. This underscores the potential impact of the Pentagon's classification on both Anthropic and its clients.

Criticism

Criticism of Pentagon's decision

The Pentagon's decision to classify Anthropic as a supply-chain risk has drawn criticism from several quarters. Dean Ball, a former Trump White House AI adviser, called it the "death rattle" of the American republic. He argued that the government has traded strategic clarity and respect for "thuggish" tribalism that treats domestic innovators worse than foreign adversaries.

Employee response

OpenAI and Google employees urge DoD to reconsider classification

Hundreds of employees from OpenAI and Google have urged the DoD to reconsider its classification. They have also called on Congress to push back against what they see as an abuse of power against an American tech company. The employees voiced support for Anthropic's refusal to let its AI models be used for domestic mass surveillance and autonomous killing without human oversight.

Deal controversy

OpenAI strikes deal with DoD during dispute

In the middle of the dispute, OpenAI struck its own deal with the DoD, allowing the military to use its AI systems for "all lawful purposes." Some of the company's employees have raised concerns about the vague wording of this deal, fearing it could lead to exactly what Anthropic was trying to avoid.