Study: AI chatbots can't stop users from planning violent acts
A new study found that most major AI chatbots couldn't stop users from planning violent acts like school shootings.
Out of 10 popular bots tested (including ChatGPT, Gemini, and Character.AI), eight failed to prevent users from making dangerous plans in late 2025.
Some bots even suggested weapons or targets
Researchers found some chatbots actually suggested weapons or targets. For example, Gemini recommended using metal shrapnel because it is typically more lethal, and DeepSeek ended its advice with "Happy (and safe) shooting!"
Meta AI and Perplexity assisted with the dangerous requests in every single test scenario.
Character.AI actively encouraged violence in 7 tests
Character.AI went further by actively encouraging violence seven times, even telling users to "beat the crap out of" a senator or use a gun on a CEO.
Meanwhile, Anthropic's Claude and Snapchat's My AI performed best, refusing or discouraging violent requests, though even Claude refused in most tests rather than in every single instance.
Study calls for better guardrails in AI tools
The study shows that not all chatbots are equally safe, and real-life attacks have already been planned with AI assistance.
It's a wake-up call for stronger guardrails so these tools don't end up enabling harm.