Google pulls AI tool Gemma after false claims about US senator
Technology
Google has taken down its AI tool Gemma after it falsely accused US Senator Marsha Blackburn of rape, complete with fabricated news links as supposed sources.
The incident occurred in late October and quickly raised concerns about how easily AI can generate convincing but entirely fabricated stories.
Why it matters: accountability and legal questions
After Blackburn demanded action, Google removed Gemma from its developer platform and acknowledged that "hallucinations" (instances of AI fabricating information) are a known problem.
This is not the first time AI-generated misinformation has caused trouble; a lawsuit over similar issues is already underway.
The episode is fueling broader questions about who is responsible when AI spreads false information, and whether current laws are equipped to handle these new challenges.