OpenAI unveils child safety blueprint to address AI-enabled child abuse in the US
Technology
OpenAI has rolled out its Child Safety Blueprint, aiming to tackle the growing problem of AI-enabled child exploitation in the US.
The move follows a reported 14% spike in AI-generated child abuse content last year, with offenders using the technology to create fake explicit images and send harmful messages.
OpenAI seeks stronger laws and safeguards
Created with help from the National Center for Missing and Exploited Children and state attorneys general, the plan pushes for stronger laws against AI misuse and better ways to report abuse.
OpenAI is also working to ensure its models cannot generate unsafe or explicit content, part of its ongoing effort to keep young people safer online.