
Claude AI now explicitly bans nuclear weapons development


Anthropic just tightened its Claude usage policy, explicitly banning the use of its tech to develop high-yield explosives or CBRN (chemical, biological, radiological, and nuclear) weapons.
This is a step up from its old, more general "don't cause harm" rule: now it's crystal clear what's off-limits.

New rules to keep Claude Opus 4 safe

Back in May 2025, Anthropic rolled out "AI Safety Level 3" with Claude Opus 4 to block jailbreaks and weapon abuse.
Now they've added a section called "Do Not Compromise Computer or Network Systems," which bans things like exploiting security holes or making malware.
These changes are all about keeping Claude's advanced capabilities from being misused as the model gets smarter and more powerful.

Political content restrictions get a makeover

Anthropic also updated its political rules: the focus is now only on stopping uses that undermine democratic processes, like voter targeting or spreading deception.
Plus, its strictest safety checks will apply mainly to consumer-facing tools, not business ones.
It's Anthropic's attempt to balance safety with real-world use as AI evolves.