YouTube will now protect celebrities from misuse of their identities
What's the story
YouTube has announced the expansion of its "likeness detection" technology to the entertainment industry. The system identifies AI-generated content, including deepfakes, and works much like YouTube's existing Content ID system. The main goal is to protect creators and public figures from having their identities misused without consent, a common problem for celebrities, who often find their likenesses used in fraudulent ads.
Tech evolution
Major talent agencies have supported the tool
The likeness detection technology was first tested with a select group of YouTube creators in a pilot program last year. It was then expanded to include politicians, government officials, and journalists this spring. Now, the tech is being made available to the entertainment industry, including talent agencies and management companies. Major agencies such as CAA, UTA, WME, and Untitled Management have supported the new tool by providing feedback.
Functionality
How does the likeness detection tool work?
The likeness detection tool scans AI-generated content for visual matches of an enrolled participant's face. Participants can then choose to have a matching video removed for privacy policy violations, file a copyright removal request, or take no action at all. However, YouTube clarifies that it won't remove all flagged content, as parody and satire are allowed under its rules.
Expansion plans
YouTube is also pushing for federal legislation
YouTube also plans to expand the likeness detection technology to audio content in the future. The company has been pushing for similar protections at the federal level, supporting the NO FAKES Act in Washington, D.C. This legislation would regulate the use of AI to create unauthorized re-creations of an individual's voice and visual likeness.