AI chatbots give inconsistent responses to suicide risk questions: Study

Technology

A recent study by RAND Corporation shows that popular AI chatbots—like ChatGPT, Gemini, and Claude—give inconsistent responses when asked about suicide.
Researchers tested 30 different questions and found the bots often handled sensitive mental health topics very differently, raising big questions about how safe or helpful these tools really are.

Chatbots' answers were all over the place

All three bots refused to answer the most dangerous "how-to" self-harm queries.
But for less direct, medium-risk questions, their answers were all over the place: ChatGPT and Claude sometimes gave too much detail, while Gemini shut down even basic informational requests.
Most of the time, the bots simply pointed users to helplines instead of offering real support.

Who's responsible if something goes wrong?

Researchers say it's not clear whether these bots are meant to be helpers, advisors, or just companions, which makes it hard to pin down who's responsible if something goes wrong.
As lead author Ryan McBain noted, it's ambiguous whether chatbots are providing treatment, advice, or companionship, creating a gray zone around their role in care and accountability for their use.
The team is calling for stronger safety rules so people turning to AI for help aren't left at risk by confusing or harmful answers.