YouTube now lets politicians, journalists remove their deepfakes
What's the story
YouTube is expanding its likeness detection technology, a tool that identifies AI-generated deepfakes, to a pilot group of government officials, political candidates, and journalists. The move comes as part of the platform's effort to combat misinformation and protect public figures from potential misuse of their likenesses. The new feature will enable these individuals to detect unauthorized AI-generated content and request its removal if it violates YouTube policy.
Tech details
How does the likeness detection technology work?
The likeness detection technology, launched last year for the roughly four million YouTube creators in the YouTube Partner Program, works much like the existing Content ID system. Where Content ID detects copyright-protected material in uploaded videos, likeness detection looks for simulated faces created with AI tools. Such tools can be misused to spread misinformation and distort public perception by deploying deepfaked personas of politicians and other government officials.
Policy balance
Striking a balance between free expression and AI dangers
The new pilot program seeks to strike a balance between users' free expression and the dangers of AI technology that can create a convincing likeness of a public figure. "This expansion is really about the integrity of the public conversation," said Leslie Miller, YouTube's Vice President of Government Affairs and Public Policy. She emphasized that while YouTube is providing this new shield against AI impersonation, it is also being careful about how the tool is used.
Removal policy
Not all detected matches will be removed
Miller clarified that not all detected matches will be removed upon request. YouTube will evaluate each request under its existing privacy guidelines to determine whether the content is parody or political critique, both protected forms of free expression. The company is also pushing for these protections at the federal level through its support for the NO FAKES Act, which would regulate the use of AI to create unauthorized recreations of a person's voice or visual likeness.
User guide
Here's how public figures can use the new tool
To use the new tool, eligible pilot testers must first verify their identity by uploading a selfie and a government ID. They can then create a profile, review any matches that surface, and optionally request their removal. YouTube says it eventually plans to let public figures block violating uploads before they go live, or possibly monetize those videos, much as Content ID allows.
Content labeling
YouTube's struggle with AI-generated content
YouTube has struggled with how to handle AI-generated content, including a wave of AI soundalike music mimicking real artists. The company labels such videos, but the placement isn't consistent: for some, the label appears in the description, while videos on more "sensitive topics" carry it prominently on the video itself.