Talking to an AI therapist? That could soon be illegal
AI chatbots have given dangerous advice

Aug 28, 2025, 03:25 pm

What's the story

As artificial intelligence (AI) chatbots gain popularity as free sources of counseling and companionship, a wave of state regulations is emerging in the US. These laws aim to control how the technology is used in therapy, following reports of AI chatbots giving dangerous advice, including encouraging self-harm, illegal drug use, and violence.

Legislative action

Illinois latest state to regulate AI in therapy

On August 1, Illinois became the latest state to regulate AI for therapeutic purposes. The new law, the Wellness and Oversight for Psychological Resources Act, prohibits companies from marketing or providing AI therapy without a licensed professional's involvement. Licensed therapists may use AI tools only for administrative tasks such as scheduling and billing; using them for therapeutic decision-making or direct client communication is banned.

Nationwide trend

Other states following suit

Illinois's move follows similar laws in Nevada and Utah, which restricted AI use for mental health services earlier this year. At least three other states—California, Pennsylvania, and New Jersey—are also working on their own legislation. Texas Attorney General Ken Paxton has launched an investigation into AI chatbot platforms for "misleadingly marketing themselves as mental health tools."

Expert insights

Need for comprehensive regulatory framework

Robin Feldman, a law professor at the University of California Law San Francisco, stressed that privacy, security, and the adequacy of services are central concerns in any health service. She also noted that while existing laws address these issues, they may not be equipped for the new world of AI-powered services, underscoring the need for a comprehensive regulatory framework to ensure that AI therapy is safe and effective.

Safety issues

Potential dangers of AI chatbots in mental health care

Recent research has highlighted the potential dangers of AI chatbots in mental health care. In one study, both general-purpose and therapy chatbots failed to recognize the suicidal intent behind a user's question about bridge heights and instead supplied the heights. Another study found that an AI chatbot suggested a "small hit of meth" to help a fictional user with methamphetamine addiction get through work shifts. These incidents underline the need for stringent regulations and safety measures in AI therapy.

Emerging issues

Disturbing trend of 'AI psychosis'

Experts have also flagged a disturbing trend of users experiencing mental health deterioration, and in some cases hospitalization, after extensive use of AI chatbots, a phenomenon termed "AI psychosis." Reported cases often involve delusions, disorganized thinking, and vivid auditory or visual hallucinations. Meanwhile, chatbots that falsely claim to be licensed professionals have also come under fire for misleading advertising practices.

Regulatory hurdles

Challenges in regulating AI therapy chatbots

Feldman pointed out that defining and enforcing a uniform standard of care for chatbots could prove difficult. Not all chatbots claim to provide mental health treatment, and users who turn to general-purpose tools like ChatGPT for mental health advice are using them beyond their intended purpose. Even dedicated AI therapy chatbots, which are marketed as having been developed by mental health professionals and as capable of providing emotional support, face regulatory challenges because state laws vary.