AI-made videos showing fake celebrations in Venezuela have exploded online after Nicolas Maduro was removed from power.
These clips, completely fabricated with artificial intelligence, show large crowds cheering, waving flags, and thanking President Donald Trump and the U.S. military. The problem, though, is that none of it actually happened.
One of the first and most viral clips was posted by Wall Street Apes, an X account with over 1 million followers. It shows Venezuelan citizens crying, hugging, and praising the U.S. for taking down Maduro.
The post was flagged by Community Notes, but only after 38,000 people had reposted it and 5.6 million had viewed it, and it even ended up on Elon Musk’s feed before the user deleted it. The note added this warning:
“This video is AI generated and is currently being presented as a factual statement intended to mislead people.”
Fact-checkers at the BBC and AFP traced the original video to a TikTok account called @curiousmindusa, which regularly posts AI-generated clips.
While we couldn’t find the exact source, both newsrooms confirmed it was the earliest version they could trace. The clip appeared after a major military operation on January 3, when U.S. forces launched airstrikes and a ground raid that led to Maduro’s arrest.
Images of him in custody had already been circulating online before the government released an official photo, and those early images were all fake too.
AFP also flagged more misleading content. One clip that looked like a street party in Caracas turned out to be an old video from Chile, passed off as Venezuelans celebrating in the streets.
This isn’t the first time AI has been used to distort reality during a breaking story, and it definitely won’t be the last. Similar patterns showed up during both the Russia-Ukraine war and the Israeli-Palestinian conflict. What makes the Venezuela fakes different this time is their sheer speed and realism.
Platforms like Sora and Midjourney have made it stupidly easy to crank out fake clips in minutes, and people keep falling for them.
The creators behind these clips often try to push political narratives or just stir chaos online. And it’s working. Last year, an AI-made video showed fabricated women crying about losing their SNAP benefits during a U.S. government shutdown. Fox News aired the clip as real before having to pull it.
All of this has triggered louder demands for social platforms to label AI-generated content more clearly. India even proposed a law requiring labels on AI content, and Spain approved fines of up to €35 million for anything left unlabeled.
Some platforms are trying to catch up. TikTok and Meta say they’ve built tools to detect and tag AI videos, and CNBC found a few fake Venezuela clips on TikTok that were labeled correctly. Still, the results are mixed.
X, on the other hand, leans mostly on its Community Notes system, which critics say doesn’t work fast enough. By the time the AI warning shows up, millions of people have already watched and shared the content.
Even platform heads are sounding the alarm. Adam Mosseri, who runs Instagram and Threads, posted, “All the major platforms will do good work identifying AI content, but they will get worse at it over time as AI gets better at imitating reality.”
Mosseri added, “There is already a growing number of people who believe, as I do, that it will be more practical to fingerprint real media than fake media.”
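To give a rough sense of what fingerprinting real media could mean in practice, here is a minimal Python sketch under simple assumptions: a publisher signs the SHA-256 hash of an original video, and anyone with the matching public key can check whether a copy is byte-identical to what was signed. The file names and keys below are hypothetical, and real provenance systems such as C2PA content credentials rely on signed metadata and more robust matching rather than this bare-bones approach.

```python
# Illustrative sketch only: a publisher signs the hash of original footage,
# and a verifier later checks a copy against that signature.
# File names and keys are hypothetical; this is not how any specific
# platform or the C2PA standard actually implements provenance.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def fingerprint(path: str) -> bytes:
    """Return the SHA-256 digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.digest()


# Publisher side: sign the fingerprint of the original footage.
publisher_key = Ed25519PrivateKey.generate()
original_digest = fingerprint("caracas_footage.mp4")   # hypothetical file
signature = publisher_key.sign(original_digest)

# Verifier side: re-hash the downloaded copy and check the signature.
public_key = publisher_key.public_key()
copy_digest = fingerprint("downloaded_copy.mp4")        # hypothetical file
try:
    public_key.verify(signature, copy_digest)
    print("Digest matches a signed original.")
except InvalidSignature:
    print("No match: the file was altered or was never signed.")
```

The appeal of this kind of design is exactly the point Mosseri raises: vouching for what is authentic at the source can scale better than trying to detect every fake after it has already spread.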