Pornographic, AI-generated images of the world’s most famous star spread across social media this week, underscoring the harmful potential of mainstream artificial intelligence technology: its ability to create convincingly real and damaging images.
The fake images of Taylor Swift were predominantly circulating on social media site X, previously known as Twitter. The photos – which show the singer in sexually suggestive and explicit positions – were viewed tens of millions of times before being removed from social platforms. But nothing on the internet is truly gone forever, and they will undoubtedly continue to be shared on other, less regulated channels.
Swift’s spokesperson did not respond to a request for comment.
Like most major social media platforms, X’s policies ban the sharing of “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.” The company did not respond to CNN’s request for comment.
The incident comes as the United States heads into a presidential election year, and concerns are growing about how misleading AI-generated images and videos could be used to fuel disinformation efforts and ultimately disrupt the vote.
“This is a prime example of the ways in which AI is being unleashed for a lot of nefarious reasons without enough guardrails in place to protect the public square,” Ben Decker, who runs Memetica, a digital investigations agency, told CNN.
Decker said the exploitation of generative AI tools to create potentially harmful content targeting all types of public figures is increasing quickly and spreading faster than ever across social media. “The social media companies don’t really have effective plans in place to necessarily monitor the content,” he added.
X, for example, has largely gutted its content moderation team and relies on automated systems and user reporting. (In the EU, X is currently being investigated over its content moderation practices).
Meta, too, made cuts to its teams that tackle disinformation and coordinated troll and harassment campaigns on its platforms, people with direct knowledge of the situation told CNN, raising concerns ahead of the pivotal 2024 elections in the US and around the world.
It’s unclear where the Taylor Swift-related images originated. Although some were found on sites such as Instagram and Reddit, the problem was most widespread on X in particular.
The incident also coincides with the rise of AI-generation tools such as ChatGPT and DALL-E. However, there is also a much wider world of unmoderated, not-safe-for-work AI models on open-source platforms, Decker said.
“This is indicative of a larger kind of fracturing content moderation and platform governance because if all the stakeholders – the AI companies, social media companies, regulators and civil society – are not talking about the same things and on the same page about how to address these issues, this type of content is just going to continue to proliferate,” he added.
Decker said, however, that Swift being targeted could bring more attention to the growing issues around AI-generated imagery. Swift’s enormous contingent of loyal “Swifties” expressed their outrage on social media this week, bringing the issue to the forefront. In 2022, a Ticketmaster meltdown ahead of her Eras Tour concert sparked rage online, leading to several legislative efforts to crack down on consumer-unfriendly ticketing policies.
Perhaps the same will be true about damaging AI-generated images, Decker suggested.
“When you have figures like Taylor Swift who are this big [targeted], maybe this is what prompts action from legislators and tech companies because they can’t afford to have America’s sweetheart be on a public campaign against them,” he said. “I would argue they need to make her feel better because she does carry probably more clout than almost anyone else on the internet.”
This type of technology has long been used to create what’s known as “revenge porn” – explicit images of someone posted online without their consent – but it is receiving renewed attention because of the offending photos of Swift.
Nine US states currently have laws against the creation or sharing of non-consensual deepfake photography – synthetic images created to mimic a person’s likeness.