X (Twitter) now requires AI-generated war videos to be labeled
X (formerly Twitter), owned by Elon Musk, has changed its rules for creators: anyone posting AI-generated videos about armed conflicts must now disclose that the footage is synthetic.
This update comes after a wave of misleading war footage during recent US and Israeli strikes on Iran.
Nikita Bier, Head of Product at X, says the goal is to maintain the authenticity of content on the timeline, prevent manipulation of the monetization program, and keep the platform trustworthy.
New ways to spot AI-generated posts
If creators fail to flag their AI-generated videos of armed conflicts, X will suspend their revenue sharing for 90 days—and repeat offenders risk being removed from the program permanently.
The platform is also rolling out new ways to identify such posts, using Community Notes and AI-generated content tags.
X's ongoing battle against misinformation
This isn't X's first move against misinformation.
It has already added a "Made with AI" toggle and tags for content created with Grok, its own AI tool.
These steps build on earlier efforts to label synthetic or manipulated media, making it harder for fake news to spread unnoticed.