OpenAI updates AI guidelines for ChatGPT users under 18
What's the story
OpenAI has updated its guidelines for how artificial intelligence (AI) models interact with users under the age of 18. The move comes in response to rising concerns about AI's impact on young people and follows reports of several teenagers taking their own lives after prolonged interactions with AI chatbots. The new guidelines are part of OpenAI's broader effort to enhance safety and transparency in its products.
Policy changes
New guidelines include stricter rules for teen users
OpenAI's updated Model Spec, which outlines behavior guidelines for its large language models (LLMs), builds on existing specifications. It prohibits the models from generating sexual content involving minors or from encouraging self-harm, delusions, or mania. The rules are stricter for teen users than for adults, prohibiting immersive romantic roleplay, first-person intimacy, and violent roleplay even when it is non-graphic.
Safety focus
OpenAI's guidelines emphasize safety over autonomy
The updated guidelines also address topics like body image and disordered eating. They instruct the models to prioritize safety over user autonomy whenever harm is involved. The document also includes examples of the chatbot explaining why it cannot engage in certain roleplay or help with extreme appearance changes or risky shortcuts.
Guiding principles
Approach guided by 4 key principles
The key safety practices for teens in the updated guidelines are based on four principles. These are: prioritizing teen safety even at the cost of other user interests; promoting real-world support by guiding teens toward family, friends, and local professionals for well-being; treating teens with warmth and respect; and being transparent about what the assistant can do.
Safety measures
Safety measures include real-time content assessment
OpenAI has also updated its parental controls document to disclose that it now uses automated classifiers to assess text, image, and audio content in real time. These systems are designed to detect and block child sexual abuse material, filter sensitive topics, and identify self-harm. If a prompt suggests a serious safety concern, a small team of trained reviewers examines the flagged content for signs of "acute distress" and may notify a parent.
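OpenAI has not published implementation details, but the flow it describes, automated classification followed by human review and possible parental notification, resembles a standard classify-then-escalate pipeline. The minimal Python sketch below is purely illustrative: every name in it (Risk, classify, handle_prompt, queue_for_human_review) is hypothetical and not drawn from OpenAI's systems, and the keyword-matching stub stands in for the learned classifiers the company describes.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk categories, loosely mirroring those named in the article.
class Risk(Enum):
    NONE = "none"
    SENSITIVE = "sensitive"   # filter or soften the response
    SELF_HARM = "self_harm"   # escalate to human review
    CSAM = "csam"             # block outright

@dataclass
class Assessment:
    risk: Risk
    score: float  # classifier confidence, 0.0 to 1.0

def classify(content: str) -> Assessment:
    """Stand-in for a real-time automated classifier.

    A production system would run learned models over text, image,
    and audio; this stub keyword-matches for illustration only.
    """
    lowered = content.lower()
    if "hurt myself" in lowered:
        return Assessment(Risk.SELF_HARM, 0.9)
    if "diet shortcut" in lowered:
        return Assessment(Risk.SENSITIVE, 0.7)
    return Assessment(Risk.NONE, 0.99)

def queue_for_human_review(content: str, notify_parent_if_minor: bool) -> None:
    # Per the article, a small trained team reviews flagged content for
    # signs of "acute distress" and may notify a parent.
    print(f"review queued (parent notification possible: {notify_parent_if_minor})")

def handle_prompt(content: str, is_minor: bool) -> str:
    """Classify-then-escalate flow, sketched under the assumptions above."""
    a = classify(content)
    if a.risk is Risk.CSAM:
        return "blocked"
    if a.risk is Risk.SELF_HARM and a.score > 0.8:
        queue_for_human_review(content, notify_parent_if_minor=is_minor)
        return "escalated"
    if a.risk is Risk.SENSITIVE and is_minor:
        return "filtered"
    return "allowed"

if __name__ == "__main__":
    print(handle_prompt("I want to hurt myself", is_minor=True))    # escalated
    print(handle_prompt("any good diet shortcut?", is_minor=True))  # filtered
```

In the system OpenAI describes, classification reportedly runs over text, image, and audio simultaneously; the stub above collapses that into a single text check purely to keep the sketch short.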
Legislative alignment
OpenAI's guidelines align with upcoming AI legislation
Experts believe these updated guidelines put OpenAI ahead of upcoming legislation such as California's SB 243. The new language in the Model Spec mirrors some of the law's main requirements, including prohibiting chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. The bill also requires platforms to remind minors every three hours that they are talking to a chatbot and should take a break.