AI voice cloning is supercharging extremist propaganda: Study

Extremist groups are now using AI voice cloning to make their propaganda more convincing and far-reaching.
Recent research by the Institute for Strategic Dialogue (ISD) and GNET found that they're creating lifelike audio versions of notorious speeches and texts—such as English translations of Hitler's speeches and an audiobook of the neo-Nazi manual "Siege"—and sharing them online to grab attention.

Why does this matter right now?

These AI-generated audio clips have racked up tens of millions of views on platforms like TikTok, YouTube, and Instagram.
Because audio feels more personal than plain text, it can draw people in emotionally, making it easier for extremist groups to spread misinformation and recruit followers.

How are they pulling this off?

Using commercial tools like ElevenLabs, extremists can clone voices from just a few audio clips.
They then add subtitles or translations with AI, package everything into slick videos or audiobooks, and share them across mainstream apps and encrypted channels—making their content accessible worldwide.

What's being done about it?

YouTube is rolling out new rules requiring creators to disclose when content is AI-generated.
Tech firms are also working on tools like watermarking and voice ID checks.
Meanwhile, experts warn that unchecked generative AI could make radicalization even easier.