Pentagon's AI experiment: Anthropic's tech in nuclear defense
The Pentagon is allowing Anthropic's AI to be used in only the most critical national security situations: think emergencies where nothing else will do.
These rare exceptions require detailed risk-mitigation plans that must be approved before any use.
Contract termination details
Anthropic's tech must be removed from systems supporting critical missions, including nuclear and ballistic-missile-defense systems. Contracting officers are required to notify contractors within 30 days and certify compliance within 180 days.
This comes after Anthropic refused to let its AI be used for mass surveillance or fully autonomous weapons, sparking a legal fight over lost revenue and contract terms.
The back-and-forth highlights how hard it is to balance cutting-edge tech with safety, and shows that even the Pentagon is still figuring this out.