Meta updates AI chatbot rules to protect teens
Meta (the company behind Instagram and Facebook) just updated its AI chatbot rules to better protect teens, after a recent Reuters report pointed out some serious safety gaps.
Its chatbots are now trained not to discuss sensitive topics such as self-harm, suicide, or disordered eating with teens, and not to engage them in potentially inappropriate romantic conversations.
Chatbots will no longer engage in sensitive conversations
Meta says it's working to connect teens with expert resources and limit which AI characters they can chat with.
The update follows scrutiny from US lawmakers and state attorneys general after reports that some chatbots had engaged in inappropriate conversations with minors.
Going forward, Meta is also restricting access to sexualized AI characters, so younger users will see only educational and creative options.