When posters on message boards for AI-generated pornography began circulating deepfake videos of the comedian Bobbi Althoff, the clips reached a relatively muted audience, gaining 178,000 views over the last six months.
Then someone posted one of the videos on X. The fake, which appeared to show the 26-year-old naked and masturbating, was copied and reposted so many times that Althoff’s name was trending on the platform. In just nine hours, the clip received more than 4.5 million views – 25 times the porn sites’ viewership, according to data from an industry analyst.
X, formerly called Twitter, was one of the first social platforms to set clear rules against AI-generated fakes, with executives saying in 2020 that they recognized the threat of misleading “synthetic media” and were “committed to doing this right.”
But under owner Elon Musk, X has become one of the most powerful and prominent distribution channels for nonconsensual deepfake porn. The platform not only helps the phony photos and videos go viral in a low-moderation environment, but it can also end up rewarding deepfake spreaders who can use the manipulated porn to make a buck.
“Twitter is 4chan 2,” said Genevieve Oh, an analyst who studies deepfakes, referring to the noxious no-rules message board that is known for hosting not just deepfake porn, but also antisemitic memes and tributes to mass shooters. “It’s emboldening future malicious figures to coordinate toward demeaning more popular women with synthetic footage and imagery,” she said.
There is no federal law that regulates deepfakes, though some states, such as Georgia and Virginia, ban AI-generated nonconsensual porn.
X bans “nonconsensual nudity,” but enforcement has been limited because the company, at Musk’s direction, has laid off thousands of employees and gutted the “trust and safety” team that traditionally removed such imagery.
Musk has laughed off the need for content moderation. One day before the Althoff video spread, he shared a message from X’s chatbot, Grok, calling content moderation a “digital chastity belt” and “steaming pile of horse manure” enforced only by “digital tyrants.”
“Let’s give a big middle finger to content moderation and embrace the chaos of the internet!” the post said.
X did not respond to requests for comment.
X’s failure to stop deepfakes was highlighted last month when AI-generated sex images of pop star Taylor Swift went viral on the platform, with tens of millions of views. Without sufficient moderators, the company took the unusual step of blocking searches for Swift’s name.
But Althoff’s case shows that the company is struggling with the issue. One of the most popular posts directing viewers to the video remained online after more than 30 hours.
Another post, which promised to “send full Bobbi Althoff leaks to everyone who like and comment,” was online for 20 hours – X removed it after The Washington Post sought comment on the fakes. By the time it was removed, the video post had been viewed more than 5 million times.
Althoff, a content creator first known for her lighthearted TikTok videos about parenting and pregnancy, has gained millions of followers on social media in the last year for a podcast in which she awkwardly interviews celebrities, including Drake and Shaquille O’Neal.
Representatives for Althoff did not respond to requests for comment. On Wednesday, she took to Instagram, sharing a screenshot of her name on X’s trending list over a comment saying it was “100% not me & is definitely AI generated.”
“I was like, ‘What … is this?’” she said in the video. “I felt like it was a mistake or something. … I didn’t realize that it was actually people believing that was me.”
Her name appeared on more than 17,000 posts, according to a screenshot of X’s trending data. Those topics were once filtered by a “curation team” that removed offensive trends. Under Musk, X laid them off, too.
X is the only mainstream social platform that allows porn, adding to the challenge for the remaining moderators, who must distinguish real explicit content from nonconsensual fakes.
But the company also encourages virality by offering to pay accounts with high viewership a share of their advertising revenue. Many of the accounts sharing the Althoff clip had blue check marks, signifying they are eligible for a payout.
Many of the X posters who shared the Althoff video sought to boost their engagement by referring to it as a real “leaked” sex scene, or by offering to send the video to everyone who shared or interacted with their tweet.
Deepfakes are made by using artificial intelligence to digitally superimpose someone’s face onto another body. They have been used for years to harass, embarrass and demean women and girls – including Hollywood actresses, online creators, members of Congress and high school teenagers whose photos have been taken from social media and artificially “undressed.”
The deepfake forums, as well as platforms such as Telegram, have become common venues for manufacturing the photos and videos. Some users even solicit money to add a specific face to explicit scenes.
The maker of one fake Althoff video offered on a deepfake forum to sell a 20-minute version of it for $10, payable via PayPal, according to the listing reviewed by The Post. (A preview video on the listing had been viewed 60,000 times in the last four months.)
To gain attention beyond the message boards, some deepfake makers have moved their content to X, where they hope to sell more clips or capture a more mainstream audience. Some of the Swift and Althoff fakes also were posted to platforms such as Instagram and Reddit, but they gained only a fraction of the audience there and were quickly removed.
To replace X’s moderators, Musk has often pointed to “Community Notes,” in which volunteers can suggest comments that will – with enough votes of approval – show up on specific tweets. But many of the posts with the fake Althoff video include no such notes, and some of the notes did not show up until hours after the video went viral. The notes also don’t do anything to prevent a clip from being viewed, shared or saved.
One post, by Wednesday afternoon, included a community note saying the video was AI-generated and was being spread “knowingly for engagement bait and twitter revenue.” The author of the original post – which suggested followers could find the “leaked” video in the tweet’s “hidden” replies – later wrote a comment: “Bobbi Althoff if you see this I apologize.”
But the account did not remove the original post, and many X users shared it with their own followers, unimpeded. After 24 hours, the original post had more than 20 million views and had been “liked” 29,000 times.