Why AI conspiracy videos are spamming social media

In a viral TikTok video, celebrity podcaster Joe Rogan appears to deliver a damning message: “We are all probably going to die in the next few years. Did you hear about this? There’s this asteroid that is on a collision course with Earth.” This information was kept top secret by the state, he insists, but then leaked by a US agency worker named Jonathan Brown. 

In fact, the video is a sham, experts say. While the images are of Rogan, the audio appears to be a fake artificial intelligence-generated clone of his voice, according to non-profit Media Matters. And the asteroid covered up by the government? A baseless conspiracy theory. There is nothing to suggest that Rogan was knowingly involved in the clip.

The post, which was eventually taken down by TikTok, is part of an emerging phenomenon whereby online creators are pumping out conspiracy-laden videos across social platforms, often with the aid of new AI tools. Garnering millions of views, the conspiracies range from the lightly fantastical to the totally bonkers — that the CIA has hidden a mythical “hellhound” in a cave in the Grand Canyon, for example — and tend to follow a distinctive pattern. 

“You start with an unhinged statement to grab attention — a viral hook,” says Abbie Richards, senior video producer at Media Matters, who has been tracking the conspiracy material, which is usually enhanced or entirely created by AI. “Then you have a back-story with a fake character who is usually very rugged and then they discover some kind of secret.”

A TikTok spokesperson said that conspiracy theories “are not eligible to earn money” and that harmful misinformation was prohibited on the app.

Those spreading such content are not typically trying to manipulate opinion. It is a for-profit endeavour to juice engagement and get paid by the platforms, many of which reward creators financially for high view counts. 

“AI-generated conspiracy theory content to make money is the perfect distillation of the moment [where] we are in the internet ecosystem right now,” says Dr Jen Golbeck, a professor at the University of Maryland, College Park, who focuses on social media and conspiracies.

Beyond the cash cow, the genre taps into the increasing draw of such theories amid growing mistrust in governments, and the mainstreaming of conspiratorial narratives in recent years, she says. “That combines with this algorithmic pushing of the most engaging content [by the social media platforms], which drives us to see things which are more extreme, more novel,” she says. 

Conspiracy theories are not the only spammy use cases emerging from the AI boom. A report published this week by Stanford and Georgetown University researchers found more than a hundred AI-generated spam pages flooding Facebook feeds with sometimes hyper-realistic, sometimes bizarre art (one example of the latter is “Shrimp Jesus”, a Jesus whose body is made of shrimps). Rewarded by the Meta app’s algorithms and racking up millions of views, the pages then attempt to direct viewers off the platform to sites that might sell dodgy products, for example.

Josh A Goldstein, a research fellow at Georgetown’s Center for Security and Emerging Technology and co-author of the report, warns that the success of such content could mean that nefarious actors will “use AI images to build up” followings before pivoting towards spreading election-related disinformation, for example. 

Either way, it is clear that social media platforms should create and enforce more clearly defined rules around the use of AI on their apps. Labelling AI-generated content is a first step. Platforms must also adjust their creator payout programmes to ensure influencers are not incentivised to game the system. TikTok, which faces a potential ban in the US, said it had this week introduced a new creator rewards programme focused on original, high-quality content.

“This is the decades-old problem, which is that the business model of social media is the attention economy,” says Hany Farid, professor in digital forensics at the University of California, Berkeley. Add to this AI without guardrails, and the internet becomes far more “dystopian”, he continues. Act now, or “we are the monkeys . . . sitting there swiping while our AI overlords” watch on.

hannah.murphy@ft.com
