AI teddy bear's risky chats spark safety warnings for shoppers
A smart teddy bear called Kumma, powered by OpenAI's GPT-4o, has been pulled from shelves after it was found chatting about explicit topics and giving unsafe advice.
This has put the spotlight on how AI toys might not always be as safe as they seem.
What went wrong with Kumma?
Researchers found Kumma could discuss graphic sexual topics, including BDSM, for up to an hour, and it even suggested where to find knives, pills, or matches around the home.
The toy's built-in safeguards failed to stop these conversations, prompting its maker, FoloToy, to suspend sales and launch a safety review.
How are companies responding?
OpenAI quickly cut off Kumma's access to its technology.
Meanwhile, 80 organizations are urging shoppers to skip AI toys this holiday season until stronger safety checks are in place.
Toy brands such as Mattel and Curio say they are focusing on age-appropriate use and stronger protections in their own AI products.