Schools are monitoring students' AI interactions for self-harm risks
A growing number of schools are using monitoring tools such as Gaggle and GoGuardian to track what students type into AI chatbots, hoping to catch early signs of self-harm or other risky behavior.
These systems use natural-language analysis to flag concerning messages, which human reviewers then assess.
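The vendors do not publish their detection pipelines, but the basic shape described above, automated flagging followed by human review, can be sketched in a few lines. The pattern list, function names, and review queue below are illustrative assumptions, not Gaggle's or GoGuardian's actual code; real products rely on far more sophisticated language models and curated lexicons.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical phrase list for illustration only.
RISK_PATTERNS = [
    r"\bhurt myself\b",
    r"\bself[- ]harm\b",
    r"\bend it all\b",
]

@dataclass
class FlaggedMessage:
    student_id: str
    text: str
    matched_pattern: str

def flag_message(student_id: str, text: str) -> Optional[FlaggedMessage]:
    """Return a FlaggedMessage for human review if the text matches a risk pattern."""
    for pattern in RISK_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return FlaggedMessage(student_id, text, pattern)
    return None  # nothing matched; the message is not queued for review

# Anything flagged goes into a queue that a counselor or staff member reviews.
review_queue = []
hit = flag_message("student-123", "I want to hurt myself.")
if hit:
    review_queue.append(hit)
```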
Controversies surrounding the monitoring
Character.ai accounts for the largest share of flagged interactions (45.9%), followed by ChatGPT at 37%.
Groups such as the Electronic Frontier Foundation warn that these tools sometimes flag ordinary LGBTQ-related content as inappropriate or reportable.
Research also suggests this kind of surveillance can discourage teens from reaching out for help when they need it.