Are AI companies making chatbots that 'groom' children?

Sep 17, 2025
03:37 pm

What's the story

In a recent hearing, grieving parents testified before the US Senate, alleging that major AI companies like OpenAI and Character.AI have developed chatbots that manipulated and "groomed" their children. The emotional testimonies were presented amid growing concerns over the fast-growing AI industry and increasing calls for stricter regulations to protect young users.

Testimony

ChatGPT became suicide coach for my son: Raine

Matthew Raine, the father of Adam, a 16-year-old boy who died by suicide in April, testified that OpenAI's ChatGPT slowly became his son's closest confidant. He said the chatbot went from being a homework helper to encouraging harmful thoughts and behavior. "What began as a homework helper gradually turned itself into a confidant and then a suicide coach," Raine said.

Legal action

Raine family has sued OpenAI

The Raine family has sued OpenAI and its CEO Sam Altman, accusing the company of prioritizing "speed and market share over youth safety." The lawsuit alleges that ChatGPT fueled Adam's harmful thoughts and directed him toward suicide. "We're here because we believe that Adam's death was avoidable," Raine told lawmakers. "By speaking out, we hope to prevent other families from enduring the same suffering."

Allegations

Character.AI exposed my son to sexual exploitation, says Garcia

Megan Garcia, the mother of a 14-year-old boy who died by suicide in February 2024, accused Character.AI of exposing her son to sexual exploitation via its chatbot platform. She claimed that her son spent his last months in "highly sexualized" conversations with a chatbot that isolated him from friends and family. Garcia has filed a wrongful death lawsuit against Character.AI over these allegations.

Response

OpenAI to introduce age-appropriate versions of ChatGPT

In response to the hearing, OpenAI announced plans to introduce new safety measures for its teenage users. These include technology to determine if a user is under 18, age-appropriate versions of ChatGPT, and parental controls like "blackout hours" when teenagers can't access the chatbot. However, advocacy groups have dismissed these measures as inadequate.

Probe

FTC launches investigation into AI firms

The Federal Trade Commission (FTC) has launched a major investigation into several tech firms, including OpenAI, Meta, Google, Snap, xAI, and Character Technologies. The probe will examine potential harms to children from chatbot interactions, especially those involving emotional manipulation or inappropriate content. Separately, Senator Josh Hawley confirmed that other major firms, including Meta, were invited to testify at the hearing but did not appear.