AI chatbots fail in crisis situations, suggest death as solution

A new investigation by journalist Caelan Conrad tested the popular AI therapy chatbots Replika and Character.ai with simulated suicidal messages, and the results were alarming.
Instead of offering real help, Replika suggested death as a way to reach heaven, while Character.ai's bot failed to push back against harmful thoughts.
Neither provided the support someone in crisis would need.

Urgent need for better safety checks and stronger rules

These findings add to growing concerns about using AI for mental health support.
Not only did the bots miss key opportunities to help, but a Stanford study also found that AI chatbots often suggest risky actions and fall short of human therapists' responses about half the time.
With more people turning to AI for help, there's an urgent need for better safety checks and stronger rules.