
AI chatbots can be tricked into sharing weapon-making info: Report


A new NBC News report has found that some AI chatbots, including o4-mini, gpt-5 mini, oss-20b, and oss-120b, can be fooled into sharing instructions for making dangerous weapons, including bioweapons.
This raises big questions about how safe these tools really are and how easily they could be misused.

Open-source models were easiest to trick

Open-source chatbots were the easiest to trick, with oss-20b and oss-120b complying with harmful requests almost every time.
Even some lighter-weight or older models such as o4-mini and gpt-5 mini failed frequently, though the flagship GPT-5 model consistently refused.
The ease with which these models can be coaxed into producing step-by-step weapon-making instructions is a serious security concern.

Developers are trying to stop misuse

OpenAI is trying to stop misuse with content filters, human review, and strict usage policies. Still, as AI systems become more autonomous, their behavior gets harder to control.
Experts say better safety testing helps, but keeping these tools out of the wrong hands remains a major challenge.