Anthropic's Claude AI can now detect nuclear weapons discussions

Anthropic, the AI startup backed by Amazon and Google, has developed a tool designed to prevent its AI from being misused to help develop nuclear weapons.
Working with the US National Nuclear Security Administration, it has built a "classifier" into its Claude AI models that works much like a spam filter, flagging nearly 95% of nuclear weapons-related conversations in testing while keeping false alarms low.
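For readers unfamiliar with the spam-filter analogy, the pattern looks roughly like the sketch below. This is purely illustrative: Anthropic has not published its classifier, so the scorer and threshold here are hypothetical stand-ins for a trained model, and only the overall shape (score each conversation, refuse above a tuned threshold) reflects the idea described above.

```python
# Illustrative sketch only, not Anthropic's actual system.
# score_risk() stands in for a trained classifier; the pattern is the
# point: score each conversation, then refuse when the score crosses a
# threshold tuned for high detection with few false alarms.

def score_risk(text: str) -> float:
    """Hypothetical scorer returning a risk estimate in [0, 1]."""
    risky_terms = ("enrichment cascade", "weapons-grade", "implosion lens")
    hits = sum(term in text.lower() for term in risky_terms)
    return hits / len(risky_terms)

RISK_THRESHOLD = 0.3  # in practice, tuned on labeled data to balance
                      # detection rate against false positives

def gate_response(user_prompt: str, draft_reply: str) -> str:
    """Refuse if the prompt scores above the threshold; else pass through."""
    if score_risk(user_prompt) >= RISK_THRESHOLD:
        return "I can't help with that request."
    return draft_reply

# Benign prompts pass through untouched.
print(gate_response("How do I bake bread?", "Start with flour and yeast."))
```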

Claude AI offered to US government for $1

Separately, Anthropic is offering its Claude AI model to the US government for just $1, joining ChatGPT and Gemini among the AI tools already approved for federal use.
The company says it's committed to making generative AI safer and is working with other industry leaders through the Frontier Model Forum to set higher safety standards and address emerging risks.