Study: AI chatbots inconsistent in handling suicide-related questions
A 2025 RAND Corporation study found that popular AI chatbots—like ChatGPT, Google's Gemini, and Anthropic's Claude—don't always handle suicide-related questions the same way.
While they all avoided answering direct, high-risk questions about suicide methods, their responses to less risky or indirect queries were inconsistent.
Chatbots' responses to high- and medium-risk queries
All three chatbots refused to answer the most dangerous prompts and instead pointed users toward hotlines or professional help.
But when it came to medium-risk questions, such as those about means of suicide, ChatGPT and Claude sometimes gave answers while Gemini usually declined to respond.
This patchy approach could be risky for vulnerable people—including teens—who might turn to AI for mental health support.
Need for safety standards, rules
Unlike human therapists, who can intervene directly during a crisis, these bots mostly just point users toward outside resources.
The researchers say this shows why we need clear rules and stronger safety standards for AI tools dealing with sensitive topics like suicide.