Soon, AI-generated content will have to be labeled: IT Secretary
What's the story
The Indian government is set to introduce rules mandating the labeling of AI-generated content. The move follows industry consultations, which have been largely positive, with minimal resistance. The main concern raised by the industry is how to distinguish significant AI-driven modifications from routine enhancements. The rules aim to ensure citizens can tell synthetic content from authentic content, thereby tackling deepfakes and misinformation.
Feedback
Industry's response to AI content labeling
S Krishnan, the IT Secretary, said in an interview with PTI that the industry has been "fairly responsible" and understands the rationale behind AI content labeling, adding that there hasn't been any serious pushback against the initiative. The main feedback from the industry is a request for clarity on how much modification separates substantive AI-driven changes from routine technical enhancements.
Implementation
Government's next steps for AI content labeling rules
Krishnan said the government is now consulting other ministries on the suggested changes. He clarified that no one is being asked to register or go through a third-party entity; platforms simply have to label the content. The aim is to give citizens the right to know whether a piece of content has been generated synthetically or is authentic.
Impact
AI modifications can significantly alter meanings
Krishnan explained that even minor AI edits can significantly change a piece of content's meaning, whereas routine technical enhancements improve quality without altering facts. He said most of the industry's reactions concern the degree and type of change: advanced technology often involves some modification, but even small changes can have a big impact on outcomes.
Changes
Proposed amendments to IT rules
The proposed amendments to the IT rules would make platforms responsible for labeling AI-generated content with prominent markers and identifiers. These would cover at least 10% of the visual display or the first 10% of an audio clip's duration. The changes also seek to hold large platforms like Facebook and YouTube accountable for verifying and flagging synthetic information, thus protecting users from deepfakes and misinformation.
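Read literally, the 10% thresholds translate into simple arithmetic. The sketch below is a hypothetical illustration in Python of what those minimums would look like for a 1080p frame and a 60-second audio clip; the function names and the flat-percentage interpretation are assumptions for the sketch, not anything prescribed by the proposed rules.

```python
# Hypothetical illustration of the reported "10% of the visual display /
# first 10% of audio duration" thresholds. Not from the rules' text.

def min_label_area(width_px: int, height_px: int, coverage: float = 0.10) -> int:
    """Minimum label area in pixels, assuming a flat 10%-of-display rule."""
    return int(width_px * height_px * coverage)

def min_label_duration(clip_seconds: float, coverage: float = 0.10) -> float:
    """Minimum leading label duration in seconds for an audio clip."""
    return clip_seconds * coverage

if __name__ == "__main__":
    # A 1920x1080 frame would need a label of at least 207,360 px
    # (e.g. a full-width banner about 108 pixels tall), and a 60-second
    # clip would need roughly its first 6 seconds labeled.
    print(min_label_area(1920, 1080))   # 207360
    print(min_label_duration(60.0))     # 6.0
```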