The circulation of explicit and pornographic pictures of megastar Taylor Swift this week shined a light on artificial intelligence’s ability to create convincingly real, damaging – and fake – images.
But the concept is far from new: People have weaponized this type of technology against women and girls for years. And with the rise of and increased access to AI tools, experts say it’s about to get a whole lot worse, for everyone from school-age children to adults.
Already, some high school students across the world, from New Jersey to Spain, have reported their faces were manipulated by AI and shared online by classmates. Meanwhile, a young, well-known female Twitch streamer discovered her likeness was being used in a fake, explicit pornographic video that spread quickly throughout the gaming community.
“It’s not just celebrities [targeted],” said Danielle Citron, a professor at the University of Virginia School of Law. “It’s everyday people. It’s nurses, art and law students, teachers and journalists. We’ve seen stories about how this impacts high school students and people in the military. It affects everybody.”
But while the practice isn’t new, Swift being targeted could bring more attention to the growing issues around AI-generated imagery. Her enormous contingent of loyal “Swifties” expressed their outrage on social media this week, bringing the issue to the forefront. In 2022, a Ticketmaster meltdown ahead of her Eras Tour concert sparked rage online, leading to several legislative efforts to crack down on consumer-unfriendly ticketing policies.
“This is an interesting moment because Taylor Swift is so beloved,” Citron said. “People may be paying attention more because it’s someone generally admired who has a cultural force. … It’s a reckoning moment.”
‘Nefarious reasons without enough guardrails’
The fake images of Taylor Swift predominantly spread on social media site X, previously known as Twitter. The photos – which show the singer in sexually suggestive and explicit positions – were viewed tens of millions of times before being removed from social platforms. But nothing on the internet is truly gone forever, and they will undoubtedly continue to be shared on other, less regulated channels.
Although stark warnings have circulated about how misleading AI-generated images and videos could be used to derail presidential elections and fuel disinformation efforts, there’s been less public discourse on how women’s faces have been manipulated, without their consent, into often aggressive pornographic videos and photographs.
The growing trend is the AI equivalent of a practice known as “revenge porn.” And it’s becoming increasingly hard to determine if the photos and videos are authentic.
What’s different this time, however, is that Swift’s loyal fan base banded together to use the reporting tools to effectively take the posts down. “So many people engaged in that effort, but most victims only have themselves,” Citron said.
Although it reportedly took 17 hours for X to take down the photos, many manipulated images remain posted on social media sites. According to Ben Decker, who runs Memetica, a digital investigations agency, social media companies “don’t really have effective plans in place to necessarily monitor the content.”
Like most major social media platforms, X’s policies ban the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” But at the same time, X has largely gutted its content moderation team and relies on automated systems and user reporting. (In the EU, X is currently being investigated over its content moderation practices).
The company did not respond to CNN’s request for comment.
Other social media companies also have reduced their content moderation teams. Meta, for example, made cuts to its teams that tackle disinformation and coordinated troll and harassment campaigns on its platforms, people with direct knowledge of the situation told CNN, raising concerns ahead of the pivotal 2024 elections in the US and around the world.
Decker said what happened to Swift is a “prime example of the ways in which AI is being unleashed for a lot of nefarious reasons without enough guardrails in place to protect the public square.”
When asked about the images on Friday, White House press secretary Karine Jean-Pierre said: “It is alarming. We are alarmed by the reports of the circulation of images that you just laid out – false images, to be more exact, and it is alarming.”
A growing trend
Although this technology has been available for a while, it is getting renewed attention now because of the offending photos of Swift.
Last year, a New Jersey high school student launched a campaign for federal legislation to address AI generated pornographic images after she said photos of her and 30 other female classmates were manipulated and possibly shared online.
Francesca Mani, a student at Westfield High School, expressed frustration over the lack of legal recourse to protect victims of AI-generated pornography. Her mother told CNN it appeared “a boy or some boys” in the community created the images without the girls’ consent.
“All school districts are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere,” Westfield Superintendent Dr. Raymond González told CNN in a statement at the time.
In February 2023, a similar issue hit the gaming community when a high-profile male video game streamer on the popular platform Twitch was caught looking at deepfake videos of some of his female Twitch streaming colleagues. The Twitch streamer “Sweet Anita” later told CNN it is “very, very surreal to watch yourself do something you’ve never done.”
The rise of and easier access to AI tools has made it simpler for anyone to create these types of images and videos, too. And there also exists a much wider world of unmoderated, not-safe-for-work AI models on open-source platforms, according to Decker.
Cracking down on this remains tough. Nine US states currently have laws against the creation or sharing of non-consensual deepfake photography, synthetic images created to mimic one’s likeness, but none exist on the federal level. Many experts are calling for changes to Section 230 of the Communications Decency Act, which protects online platforms from being liable over user-generated content.
“You can’t punish it under child pornography laws … and it’s different in the sense that no child sexual abuse is happening,” Citron said. “But the humiliation and the feeling of being turned into an object, having other people see you as a sex object and how you internalize that feeling … is just so awfully disruptive to your social esteem.”
How to protect your images
People can take a few small steps to help protect themselves from their likeness being used in non-consensual imagery.
Computer security expert David Jones, from IT services company Firewall Technical, advises that people should consider keeping profiles private and sharing photos only with trusted people because “you never know who could be looking at your profile.”
Still, many people who participate in “revenge porn” personally know their targets, so limiting what is shared in general is the safest route.
In addition, the tools used to create explicit images require a lot of raw data and images that show faces from different angles, so the less material someone has to work with, the better. Jones warned, however, that because AI systems are becoming more efficient, it’s possible that in the future only one photo will be needed to create a deepfake version of another person.
Hackers can also seek to exploit their victims by gaining access to their photos. “If hackers are determined, they may try to break your passwords so they can access your photos and videos that you share on your accounts,” he said. “Never use an easy-to-guess password, and never write it down.”
CNN’s Betsy Kline contributed to this report.