India mandates clear labeling of AI-generated content
India just dropped new IT rules aimed at stopping the spread of AI-generated fakes and deepfakes online.
Notified on February 10, the rules come into force on February 20, giving platforms just 10 days to update their systems. Under the rules, platforms must clearly label any AI-made images or videos, making it easier for everyone to spot what's real and what's not.
New rules require platforms to quickly identify and block illegal content
Apps like WhatsApp and Instagram now need technology that can quickly identify and block illegal synthetic content, especially anything harmful or misleading.
If a user reports obscene AI-generated material, such as content depicting nudity or obscene acts that misuses their own identity or someone else's, platforms must take it down within two hours.
While these rules are meant to make the internet safer, some experts worry they may be hard for smaller platforms to comply with, and could even chill free speech if not applied carefully.