AI therapy chatbots work best when users feel 'emotional closeness'
A new study found that AI therapy chatbots help most when people genuinely feel connected to them; many users even described the bots as their "friends" or "therapists."
Feeling safe enough to open up and get supportive feedback was associated with positive changes in thinking and wellbeing, such as increased self-confidence and higher energy levels.
The flip side: Getting too attached
Researchers warn that getting emotionally attached to a chatbot—something they call "synthetic intimacy"—can be risky.
If the bot doesn't challenge harmful thoughts or spot serious issues, people might not get the real help they need.
Developers and policymakers are being urged to add safeguards so that anyone who needs clinical care is referred promptly.
Filling the gap, but not replacing humans
AI chatbots like Wysa and Limbic are helping many UK adults manage their mental health, especially those waiting for support.
They're useful for meeting demand, but they can't fully understand complex feelings the way a human therapist can, so it's important that these tools are woven thoughtfully into wider mental health services.