India mandates labeling AI-generated content, deepfakes
India has amended its IT rules to crack down on deepfakes and other AI-generated content.
Under the MeitY amendments, intermediaries that enable the creation of "synthetically generated information" (SGI), as the amendments define it, and Significant Social Media Intermediaries (SSMIs, platforms with more than 5 million users) must clearly label SGI, whether with visible tags or embedded metadata, so users can tell synthetic content from genuine content.
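The amendments don't prescribe a technical implementation, but a platform could combine both labeling approaches, a tag users can see plus a machine-readable flag. A minimal sketch (field names and format are hypothetical, not from the rules):

```python
# Hypothetical sketch of dual SGI labeling: a visible tag in the caption
# plus machine-readable metadata. Illustrative only; the MeitY amendments
# do not specify field names or a wire format.

def label_sgi(post: dict) -> dict:
    """Return a copy of the post carrying a visible SGI tag and a metadata flag."""
    labeled = dict(post)
    # Visible label: prepend a tag to the user-facing caption.
    labeled["caption"] = "[AI-generated] " + post.get("caption", "")
    # Hidden label: metadata that downstream tools can check programmatically.
    labeled["metadata"] = {**post.get("metadata", {}), "sgi": True}
    return labeled

post = {"caption": "Sunset over Mumbai", "metadata": {"author": "u123"}}
labeled = label_sgi(post)
print(labeled["caption"])          # [AI-generated] Sunset over Mumbai
print(labeled["metadata"]["sgi"])  # True
```

In practice a platform would more likely embed the hidden flag in the media file itself (e.g., via a provenance standard such as C2PA) rather than in a sidecar dictionary, but the two-layer idea is the same.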
Platforms must verify users' claims
Big platforms must get users to declare whether their content is AI-made, double-check those claims with technical tools, and quickly take down harmful deepfakes: within 36 hours of being flagged, or just three hours during elections.
Political parties, too, must act quickly when synthetic content surfaces around voting time.
Concerns about the rules being too broad
Some industry groups worry the rules might be too broad and could lump harmless edits in with actual fakes. They're asking for a focus on truly misleading content.
Still, India's move lines up with global trends, such as the EU's AI Act, toward keeping digital spaces safer, and it includes special rules for SGI during elections.