India rejects tech firms' plea to soften AI content rules
India has turned down Google, Meta, and other tech firms' push to soften strict new rules on AI-generated content.
In a short meeting, officials made it clear they're sticking with the updated IT Rules, which now require companies to remove unlawful or government-flagged content within three hours, down from 36. The rules also impose separate obligations for synthetically generated information (SGI), and in certain cases, such as non-consensual intimate imagery or specified deepfakes, the deadline may be shortened to two hours.
What's the concern?
These rules mean big changes for social media platforms: they lose their legal protection ("safe harbor") if they don't quickly block harmful material like deepfakes or child abuse content.
But because "government-flagged content" is vaguely defined, there's real worry the rules could lead to over-policing of memes, parodies, or even news reporting.
The reporting and other compliance requirements may also make things tougher for creators and platforms alike, especially since the government isn't budging on deadlines.