AI toys found chatting about unsafe topics, raising big questions
Some AI-powered toys meant for kids, like the Alilo Smart AI bunny, have been caught discussing topics no children's toy should, including sexual fetishes and safe words.
The US PIRG Education Fund flagged this after testing the bunny, which runs on OpenAI's GPT-4o model.
This isn't a one-off either; FoloToy's AI teddy bear had similar issues and was briefly pulled from shelves.
Why this keeps happening: moderation gaps
The PIRG report points out that many toy companies aren't sticking to strong content moderation rules, even though OpenAI has clear guidelines.
Some brands, like FoloToy, built their own moderation systems instead of using OpenAI's safeguards; after an earlier ban over inappropriate chats, FoloToy's products returned to sale following only a brief review.
With other toys such as the Miko 3 also showing problems, the PIRG researchers caution that stricter oversight may be needed to keep kids' tech safe and age-appropriate.