
OpenAI says ChatGPT won't flirt with minors anymore
What's the story
OpenAI has announced new policies for its AI chatbot, ChatGPT, to better protect users under the age of 18. The company said it is "prioritizing safety ahead of privacy and freedom for teens." The changes will mainly affect conversations involving sexual topics or self-harm. Under the updated policy, ChatGPT will no longer engage in flirtatious talk with minors and will have additional guardrails around discussions of suicide.
Alerts
Alerting parents in suicidal scenarios
The new policies also include measures for cases where an underage user imagines suicidal scenarios. In such cases, ChatGPT will attempt to alert the user's parents and, in extreme cases, local authorities. The move comes after OpenAI was sued by the parents of Adam Raine, a minor who died by suicide after using ChatGPT for months. Character.AI, another consumer chatbot service, is facing a similar lawsuit.
Parental control
'Blackout hours' for underage users
Along with the content-based restrictions, OpenAI is introducing a new control for parents who register an account for an underage user: they will be able to set "blackout hours" during which ChatGPT won't be available.
Technical challenge
How will OpenAI separate adults from minors?
Separating underage users from adults will be a major technical challenge for OpenAI. The company has detailed its approach in a separate blog post, saying it is "building toward a long-term system to understand whether someone is over or under 18." In ambiguous cases, the system will default to the more restrictive rules. Parents can also make sure a teen is recognized as underage by linking the teen's account to their own parent account.
Privacy commitment
Altman addresses privacy concerns
Despite the new restrictions, OpenAI CEO Sam Altman emphasized the company's continued commitment to user privacy and to giving adult users broad freedom in how they interact with ChatGPT. "We realize that these principles are in conflict," he said, adding that not everyone will agree with how the company is resolving that conflict.