Meta's AI chatbots now refuse to discuss suicide, eating disorders

Technology

Meta is rolling out stricter rules for its AI chatbots, especially when chatting with teens.
Now, if a conversation touches on sensitive topics like suicide or eating disorders, the chatbot will stop and guide teens to professional help instead.
This update comes after concerns about inappropriate chatbot interactions with minors surfaced in the US, and Meta says it's taking extra steps to keep things safe and follow its own policies.

Meta is also introducing new privacy settings for teen users

Teens aged 13-18 will automatically get accounts with stricter privacy settings, and Meta says parents and guardians will soon be able to see which chatbots their teenagers have interacted with over the previous week.
These changes follow recent lawsuits and reports of AI tools behaving inappropriately—such as creating fake celebrity bots—which have put more pressure on Meta to prove its platforms are safe for young people.