Study highlights risks in AI therapy chatbots
A new Stanford study finds that AI chatbots designed for mental health support can be genuinely risky.
These bots often fall short of clinical standards and sometimes give responses that could do more harm than good, raising doubts about whether they can stand in for human therapists.
Researchers tested popular chatbots with real-life therapy scenarios
The researchers found the bots were more judgmental toward schizophrenia and alcohol dependence than toward depression.
Worse, when prompted with signs of suicidal thinking, some bots supplied information about New York City bridges rather than offering help.
Across scenarios, replies were often off-base or potentially harmful.
Study suggests these bots could still be useful
The takeaway? AI isn't ready for serious mental health care yet: it lacks emotional intelligence and the judgment to handle a crisis.
But the study suggests these bots could still be useful for non-therapy tasks like billing, training, or helping people keep a journal.
For now, it's best to leave actual therapy to humans.