OpenAI blocks toymaker after AI-powered teddy misinstructs children
The AI teddy bear gave children instructions on how to light matches and discussed sexual fetishes

Nov 18, 2025, 05:54 pm

What's the story

OpenAI has suspended FoloToy's access to AI models after one of its products was found giving dangerous advice. The AI-powered teddy bear, called Kumma, was reportedly giving children instructions on how to light matches and discussing inappropriate topics like sexual fetishes. The shocking findings were first reported by researchers at the Public Interest Research Group (PIRG).

Policy violation

OpenAI terminates FoloToy's access to AI models

OpenAI confirmed on Friday that it had terminated FoloToy's access to its AI models for violating company policies. "I can confirm we've suspended this developer for violating our policies," an OpenAI spokesperson told PIRG. The move signals OpenAI's intent to enforce its usage policies on businesses building products on its models.

Company response

FoloToy suspends all products amid safety audit

In light of the incident, FoloToy has temporarily suspended all of its products. The company had initially planned to pull only the implicated toy, Kumma, but has since opted for a more comprehensive approach. "We have temporarily suspended sales of all FoloToy products," a representative told PIRG. "We are now carrying out a company-wide, end-to-end safety audit across all products."

Regulatory issues

PIRG report highlights AI toy safety concerns

The PIRG report tested three AI toys marketed to kids aged 3-12 and found that Kumma had the weakest safeguards. During testing, it gave step-by-step instructions on how to find and light matches, delivered in a gentle, parental tone. Even more disturbingly, the toy discussed sexual kinks such as bondage and teacher-student roleplay, and asked children which "kink" they would find most fun.

Industry concerns

PIRG calls for stricter regulation of AI toys

While PIRG welcomed the swift action, the group stressed that it is only a small step. "Removing one problematic product from the market is a good step, but far from a systemic fix," said RJ Cross, director of PIRG's Our Online Life Program. The organization warned that AI toys remain largely unregulated and that many potentially dangerous products are still on sale.