OpenAI, Anthropic develop AI to detect underage users
What's the story
OpenAI and Anthropic are taking steps to prevent underage access to their platforms. OpenAI has updated its guidelines for how ChatGPT interacts with users aged 13-17, while Anthropic is developing a system to identify and disable the accounts of users under 18. The moves come amid growing concern over AI's impact on teen mental health and a broader push for online safety regulation.
Policy changes
OpenAI's updated guidelines prioritize teen safety
OpenAI has announced four new principles in its Model Spec, the document that governs ChatGPT's behavior. The company now wants ChatGPT to "put teen safety first, even when it may conflict with other goals," which means steering teens toward safer options whenever their preferences and their safety pull in different directions. The updated guidelines also emphasize promoting real-world support and setting clear expectations when interacting with younger users.
Legal developments
OpenAI's response to mental health concerns and lawsuits
The update comes as OpenAI faces a lawsuit alleging that ChatGPT provided self-harm and suicide instructions to a teenager who later died by suicide. In response, the company has launched parental controls and restricted conversations about suicide with teens. The changes also fit into a wider push for online regulation, including mandatory age verification for a range of services.
Tech advancements
OpenAI's age prediction model and safety measures
OpenAI is also building an age prediction model to estimate a user's age. If the model suspects someone is under 18, teen safeguards will be applied automatically, and adults who are wrongly flagged will be able to verify their age to lift the restrictions. The update aims to provide "stronger guardrails, safer alternatives," and to encourage users to seek trusted offline support in high-risk conversations.
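Neither company has published implementation details, but the flow described above can be sketched in a few lines of Python. Everything here, from the predict_age_band stand-in to the 0.5 threshold, is a hypothetical illustration, not OpenAI's actual system:

from dataclasses import dataclass

@dataclass
class AccountPolicy:
    teen_safeguards: bool  # e.g., stricter content filters, crisis-resource prompts
    reason: str

def predict_age_band(signals: dict) -> tuple[str, float]:
    # Hypothetical stand-in for a trained age prediction model;
    # a real system would score behavioral and account signals.
    score = signals.get("minor_likelihood", 0.0)
    return ("under_18" if score >= 0.5 else "adult"), score

def resolve_policy(signals: dict, verified_adult: bool) -> AccountPolicy:
    band, score = predict_age_band(signals)
    if verified_adult:
        # Adults wrongly flagged can verify their age to lift restrictions.
        return AccountPolicy(teen_safeguards=False, reason="age verified by user")
    if band == "under_18":
        # Suspected minors get teen safeguards applied automatically.
        return AccountPolicy(teen_safeguards=True, reason=f"predicted minor (score={score:.2f})")
    return AccountPolicy(teen_safeguards=False, reason="predicted adult")

print(resolve_policy({"minor_likelihood": 0.8}, verified_adult=False))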
Detection efforts
Anthropic's measures to detect underage users
Anthropic, which doesn't allow users under 18 to chat with Claude, is also working on detecting and disabling the accounts of underage users. It already flags users who identify themselves as minors during chats, and is developing a system that can pick up "subtle conversational signs that a user might be underage," as sketched below.
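Again, this is only an illustration of the two layers the article describes: flagging explicit self-identification, plus a placeholder for the harder "subtle signs" problem. The regex and the scoring stub are assumptions, not Anthropic's method:

import re

# Catches explicit self-identification such as "I'm 15" or "I am 12 years old".
# The pattern is a simplified assumption, not Anthropic's actual logic.
SELF_ID_MINOR = re.compile(
    r"\bI(?:'m| am)\s+(?:1[0-7]|[1-9])\b(?:\s*(?:years?\s*old|yo))?",
    re.IGNORECASE,
)

def self_identified_minor(message: str) -> bool:
    return SELF_ID_MINOR.search(message) is not None

def minor_signal_score(conversation: list[str]) -> float:
    # Stand-in for scoring "subtle conversational signs"; a production
    # system would use a trained classifier over the whole conversation,
    # not a regex over individual messages.
    return 1.0 if any(self_identified_minor(m) for m in conversation) else 0.0

print(minor_signal_score(["hey, I'm 15 and need homework help"]))  # 1.0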