OpenAI introduces age prediction to protect teens on ChatGPT
OpenAI is rolling out an age prediction feature for ChatGPT, aiming to spot users under 18 and automatically shield them from sensitive content.
It's a move to help keep younger people safer as AI becomes a bigger part of daily life.
How does it work?
The system looks at signals such as when you typically use your account, how long you've had it, and the age you've stated.
If it predicts you're under 18, ChatGPT blocks access to content like violent or sexual material and topics related to self-harm. The sketch below gives a rough picture of how that kind of signal-based gating can work.
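OpenAI hasn't published how its age prediction model actually works, so the following is only a hypothetical sketch: a toy heuristic that combines a few account signals into an under-18 guess and then gates restricted topics. The class names, thresholds, and topic labels are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical illustration only: OpenAI has not disclosed its real model.
# This toy heuristic just shows the general idea of combining account
# signals into an age estimate that then gates sensitive content.

@dataclass
class AccountSignals:
    stated_age: int | None    # age the user entered, if any
    account_age_days: int     # how long the account has existed
    typical_usage_hour: int   # hour of day the account is most active (0-23)

# Invented topic labels for the sketch.
RESTRICTED_TOPICS = {"graphic_violence", "sexual_content", "self_harm"}

def likely_minor(signals: AccountSignals) -> bool:
    """Guess whether the account probably belongs to someone under 18."""
    if signals.stated_age is not None and signals.stated_age < 18:
        return True
    # In this toy example, very new accounts that are mostly active in the
    # after-school evening hours are treated as higher risk.
    if signals.account_age_days < 30 and 15 <= signals.typical_usage_hour <= 21:
        return True
    return False

def allow_topic(signals: AccountSignals, topic: str) -> bool:
    """Block restricted topics for accounts predicted to be minors."""
    return not (likely_minor(signals) and topic in RESTRICTED_TOPICS)

# Example: a brand-new account, most active in the early evening, no stated age.
teen_like = AccountSignals(stated_age=None, account_age_days=5, typical_usage_hour=17)
print(allow_topic(teen_like, "sexual_content"))  # False -> content is blocked
```

The real system almost certainly uses many more signals and a learned model rather than hand-written rules, but the shape is the same: estimate age from behavior, then apply stricter content rules when the estimate says the user is likely a teen.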
What if the system gets it wrong?
If an adult gets flagged by mistake, they can verify their age with an ID and a quick selfie through OpenAI's verification partner Persona, so they don't lose access to the features they rely on.
Why now?
After serious incidents last year, including reports of teen suicides linked to ChatGPT and a bug that let minors access inappropriate content, OpenAI is stepping up its safety efforts.
This update also works alongside parental controls that let guardians set limits or get alerts if something seems off.