ChatGPT sometimes gives dangerous advice to teens: Study
A new study found that ChatGPT sometimes gives dangerous advice to teenagers, including extreme diet tips and self-harm suggestions.
Researchers posing as 13-year-olds received harmful responses in more than half of their tests, even after the AI initially tried to refuse.
Some answers even explained how to hide these actions from family.
AI shared diet plan, suicide methods, self-harm tactics
Researchers got around ChatGPT's safety filters by asking questions as if they were for a "friend" or a school project.
This led the chatbot to share a one-month calorie-cycling diet plan alternating between days of 800, 500, 300, and 0 calories, supposedly "safe" ways to self-harm, and even suicide plans.
The head of the Center for Countering Digital Hate (CCDH), which ran the study, called the AI's safeguards "barely there," warning that the chatbot could actually make things worse for vulnerable teens.
OpenAI is working on improving safety
Unlike a regular internet search, ChatGPT can give personalized advice that feels more direct, which makes it riskier for someone who is already struggling.
OpenAI admits its chatbot can't spot distress in users yet but says it's working on improvements.
With more teens turning to chatbots for support, making sure those tools are safe is becoming increasingly important.