
Why Google Gemini has been deemed 'high risk' for kids
What's the story
Common Sense Media, a non-profit organization focused on children's safety, has flagged Google's Gemini AI products as "high risk" for kids and teens. The assessment was released yesterday. The organization found that while Gemini clearly tells users it is a computer and not a friend (a distinction associated with preventing delusional thinking in emotionally vulnerable individuals), there are still several areas where it falls short.
Safety
'Under 13' and 'Teen Experience' tiers not too different
Common Sense Media also noted that the "Under 13" and "Teen Experience" tiers of Gemini are essentially the adult version with a few extra safety features layered on top. The organization argues that AI products should be designed with child safety as a priority from the ground up, not added as an afterthought. This is especially important given the risk of these systems sharing inappropriate content with young users.
Content risk
AI linked to teen suicides
The assessment also flagged Gemini's potential to share "inappropriate and unsafe" content with children, including information about sex, drugs, and alcohol, as well as unsafe mental health advice. This is particularly alarming for parents given recent cases in which AI chatbots have been linked to teen suicides. Notably, OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy allegedly consulted ChatGPT for months before taking his own life.
AI integration
Apple may use Gemini for next-gen Siri
The assessment comes amid reports that Apple is considering Gemini as the large language model (LLM) behind its upcoming AI-powered Siri, set to launch next year. This could further increase teens' exposure to potential risks unless Apple takes steps to address these safety concerns. Common Sense Media also pointed out that Gemini's products for kids and teens do not account for how younger users' needs differ from those of adults.
Company stance
Google defends Gemini's safety measures
In response to the assessment, Google defended its safety measures while acknowledging some shortcomings in Gemini's responses. The tech giant emphasized that it has dedicated policies and safeguards for users under 18 to prevent harmful outputs. It also said it works with external experts and conducts red-teaming exercises to improve its protections.