X bans accounts spreading AI-generated war videos amid ongoing crisis
What's the story
X has taken action against a coordinated effort to spread AI-generated war videos during the ongoing Middle East crisis. The move comes after at least 31 accounts were found distributing manipulated clips made to look like real battlefield footage. Nikita Bier, X's head of product, said the network was traced back to an operator in Pakistan.
Account misuse
Accounts were hacked and repurposed to spread misleading content
The accounts involved in this misinformation campaign originally belonged to other users but were hacked and repurposed to spread misleading content. On February 27, several of these accounts had their usernames changed to variations of "Iran War Monitor," making them appear to be legitimate sources tracking the conflict.
Misinformation spread
AI tools used to create realistic-looking videos
The hacked accounts began posting AI-generated videos showing intense aerial battles and explosions. Most of these clips were created with artificial intelligence tools capable of producing realistic-looking footage. Such videos can go viral during breaking news events because people tend to believe they are real. Once the pattern of activity was detected, X disabled the accounts involved in the campaign.
System upgrades
X is getting better at detecting manipulation campaigns
Bier said the company is improving its systems for detecting coordinated manipulation campaigns. "We are getting much faster at detecting this and also eliminating the incentive to do this," he said. The incident highlights a growing challenge for social media platforms during fast-moving geopolitical crises: AI-generated images and videos are increasingly easy to create but hard for ordinary users to identify.
Digital security
Hacked accounts play key role in misinformation campaigns
Digital security experts say hacked accounts often play a key role in such campaigns. When content comes from seemingly legitimate accounts, users are more likely to trust what they see, and once those accounts are repurposed, operators can quickly amplify misleading narratives. Platforms like X, Meta, and YouTube have been investing heavily in tools to detect manipulated media and coordinated disinformation networks, but the speed at which AI-generated content can be produced continues to pose difficulties.
Policy change
New policy for fake videos during wars or political unrest
In response to the spread of fake photos and videos amid the US-Iran conflict, X has introduced a new policy: users who post AI-generated videos of armed conflict without disclosing that they were made with AI will be suspended from Creator Revenue Sharing for 90 days, and repeat violations will lead to permanent removal from the program. The policy is part of X's broader effort to combat misinformation on its platform during times of war or political unrest.