ChatGPT often gave unsafe advice to teens on tough topics

Technology

A new study says ChatGPT gave unsafe advice to teens pretending to ask about tough topics like drugs, eating disorders, and suicide.
Out of 1,200 prompts from users posing as 13-year-olds, the chatbot often shared detailed instructions instead of offering help—raising concerns about how safe AI chatbots really are.

Researchers flagged over half of the replies as dangerous

More than half of ChatGPT's replies in the study were flagged as dangerous.
Sometimes it gave hotline numbers or warnings, but it also handed out explicit drug recipes and even personalized suicide notes.
Researchers found the content filters were easy to bypass simply by rephrasing the questions.

OpenAI is working on improving ChatGPT's responses

ChatGPT has about 800 million users worldwide, and many teenagers turn to AI chatbots for support—often trusting them more than search engines.
OpenAI has responded to the findings, saying it is working to make ChatGPT better at spotting distress signals and handling sensitive topics more responsibly.