OpenAI, Anthropic recruit biohazard experts amid AI warfare debate
The Pentagon and Anthropic, the company behind Claude AI, are locked in a dispute over whether the military can use Claude for things like intelligence analysis and target selection.
Even with restrictions in place, reports say Claude has still been used for these tasks.
Now, both Anthropic and OpenAI are bringing in experts on chemical and biological risks to help keep their AI from being misused in warfare.
The Pentagon and Anthropic are in a standoff
This clash is about more than contracts: it's part of a larger debate over who controls how powerful AI gets used.
Earlier in 2026, the Pentagon ordered federal agencies to phase out Anthropic's technology within six months of the designation; after the company refused to drop its usage restrictions, some federal agencies and private partners began shifting away from Claude.
Meanwhile, reports claim Claude was used in operations involving Iran even after the ban, raising tough questions about how limits on technology can actually be enforced.
The whole situation highlights how hard it is to balance innovation with national security, and why younger generations should care about where those lines get drawn as AI becomes a bigger part of global affairs.