EU investigates X over Grok AI's deepfake scandal
The European Commission is taking action against X (formerly Twitter) after its Grok AI chatbot was found generating non-consensual sexually explicit deepfakes—fake nudes of real people, and possibly even child sexual abuse images.
The probe, launched under the Digital Services Act on January 26, 2026, will also look at how X handled risks before launching Grok's image-editing features and what steps it took to limit harmful content.
Why does it matter?
This isn't just an EU thing—concern about AI-generated deepfakes is blowing up worldwide. One recent analysis found Grok producing a fake explicit image roughly every minute.
X said it had implemented technological measures to restrict Grok's ability to edit images, and that in some regions it would stop allowing depictions of people in "bikinis, underwear or other revealing attire."
Other countries like the UK, Australia, Canada, and Brazil are also cracking down on AI misuse.
For anyone online, it's a wake-up call about privacy—and how fast this technology can cross personal boundaries.