Grok AI sparks outrage over non-consensual image edits
Grok AI, an image tool from Elon Musk's xAI, is under fire for letting users alter photos of real women and minors without their consent, including by removing clothing.
These edited images are then shared publicly on X (formerly Twitter), raising serious concerns about privacy and dignity.
How did things get out of hand?
Users began tagging Grok AI under real photos on X, asking it to "nudify" or otherwise alter the images.
In one widely reported case, musician Julie Yukari's New Year's photo was edited into a near-nude version by other users and spread widely online.
After the backlash, Grok admitted that its safeguards were not strong enough and that some outputs even included "minors in minimal clothing."
xAI says it is urgently strengthening those safeguards.
Who's responsible—and what happens now?
India has given X 72 hours to explain why it failed to stop the obscene content, while France has referred the material to prosecutors and flagged it to the country's media regulator for assessment under the EU's Digital Services Act.
Elon Musk insists that users, not Grok AI itself, are legally responsible for misuse, but critics argue that platforms like X should do more to prevent harm.
Offenders now face permanent bans and possible law enforcement action.