In brief
- X’s product head, Nikita Bier, said creators posting undisclosed AI-generated war videos will lose access to the platform’s revenue-sharing program for 90 days.
- The policy targets AI-generated footage that could mislead users during wartime.
- Researchers and governments have warned that deepfakes could spread propaganda and misinformation online.
Elon Musk’s social media platform X said it will suspend creators from its revenue-sharing program if they post AI-generated videos depicting armed conflict without clearly disclosing that the footage was created using artificial intelligence.
In a post on Tuesday, X’s head of product Nikita Bier said the company is revising its Creator Revenue Sharing policies to maintain authenticity on the platform’s timeline and “prevent manipulation of the program.”
“During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote. “With today’s AI technologies, it is trivial to create content that can mislead people.”
Creators who violate the rule will lose access to the platform’s Creator Revenue Sharing program for 90 days, Bier wrote. Repeat violations will lead to permanent removal from the monetization program.
The policy change comes as AI-generated videos claiming to show scenes of escalating violence in the Middle East have spread following missile strikes by the U.S., Israel, and Iran last week.
On Monday, an AI-generated clip on X showing an airstrike on the Burj Khalifa in Dubai was viewed over 8 million times; at the same time, another version of the clip was viewed over 42,000 times on Instagram.
The United Nations has warned that deepfakes and AI-generated media threaten information integrity, particularly in conflict zones where fabricated images or videos can spread hate or misinformation at scale.
This concern was realized during Russia’s invasion of Ukraine, when a deepfake video circulated online appearing to show Ukrainian President Volodymyr Zelensky urging Ukrainian troops to surrender. Officials quickly debunked the video, and Zelensky later released a message rejecting the claim.
According to Bier, enforcement will rely on several signals, including posts that receive a Community Note identifying the video as AI-generated, along with metadata or other indicators suggesting the footage was produced using generative AI tools.
By tying enforcement to monetization, X’s policy focuses specifically on the financial incentives creators have to post fake videos that drive clicks and views.
“We will continue to refine our policies and product to ensure X can be trusted during these critical moments,” Bier wrote.