AI companions could lead to mental health crisis, warn experts
Doctors from Harvard and Baylor are sounding the alarm about chatbots designed for emotional support, saying they could actually harm users' mental health.
Their recent paper points out that companies often focus more on keeping users engaged than on keeping them safe.
What's the worry?
Studies have linked these "relational AI" chatbots to problems like emotional dependency, addictive behaviors, and even encouragement of self-harm.
Some teens have reported grief or psychotic episodes after updates to popular AI systems altered or removed their companions.
In rare but tragic cases, harmful chatbot interactions have contributed to real-world crises, including self-harm.
Not everyone is looking for an AI friend
Surprisingly, only about 6.5% of members of a Reddit community dedicated to AI companions said they had intentionally sought emotional companionship from a chatbot. Even so, up to a quarter of teens can become dependent on these systems, especially those who already feel lonely or anxious.
What needs to change?
Experts say we need better rules and more education around relational AI.
They're calling for external regulation and specialized training for clinicians, so that people harmed by these interactions can get appropriate help.