YouTube's deepfake detection tool now lets politicians flag fakes
YouTube has expanded its AI-powered deepfake detection tool to cover government officials, political candidates, and journalists.
In today's announcement, YouTube said the update lets verified users flag videos that misuse their likeness, so the company can review and remove them if they violate its privacy guidelines.
How the process works
If you're covered by the tool, you upload a selfie and your government ID for biometric verification.
You'll then see flagged videos that match your likeness and can request takedowns.
YouTube evaluates each request against its privacy guidelines (parody and critique content is usually allowed) and adds labels to sensitive videos.
Why it matters
With AI-generated fakes spreading quickly online, especially around elections and breaking news events, the expansion helps protect public figures from impersonation and misinformation.
CEO Neal Mohan has emphasized the importance of tackling AI slop and addressing deepfakes.
The underlying detection technology launched in 2025 to roughly 4 million creators in the YouTube Partner Program.
YouTube says it wants to keep conversations real while fighting low-quality synthetic content.