Outrage over deepfake porn images of Taylor Swift as generative AI tools worry regulators


A scourge of AI-generated pornographic deepfake images sexualising people without their consent has hit its most famous victim, singer Taylor Swift, drawing attention to a problem that tech platforms and anti-abuse groups have struggled to solve. – AP

WASHINGTON: Fans of Taylor Swift, along with politicians and the White House, expressed outrage on Jan 26 at AI-generated fake porn images of the megastar that went viral on X and were still available on other platforms.

One image of the US megastar was viewed 47 million times on X, the former Twitter, before it was removed on Jan 25. According to US media, the post was live on the platform for around 17 hours.

“It is alarming,” said White House Press Secretary Karine Jean-Pierre, when asked about the images.

“Sadly we know that lack of enforcement (by the tech platforms) disproportionately impacts women and they also impact girls who are the overwhelming targets of online harassment,” Jean-Pierre added.

Deepfake porn images of celebrities are not new but activists and regulators are worried that easy-to-use tools employing generative artificial intelligence (AI) will create an uncontrollable flood of toxic or harmful content.

Non-celebrities are also victims, with increasing reports of young women and teens being harassed on social media with sexually explicit deepfakes that are ever more realistic and easy to manufacture.

The targeting of Swift, the second most listened-to artist in the world on Spotify (narrowly behind Canadian rapper Drake), could shine a new light on the phenomenon, with her legions of fans outraged at the development.

Last year Swift used her fame to urge her 280 million Instagram followers to vote.

Her fans also pushed US Congress to hold hearings about Ticketmaster when the company bungled the sale of their hero’s concert tickets in late 2022.

“The only ‘silver lining’ about it happening to Taylor Swift is that she likely has enough power to get legislation passed to eliminate it. You people are sick,” wrote influencer Danisha Carter on X.

X is one of the biggest platforms for porn content in the world, analysts say, as its policies on nudity are looser than those of Meta-owned Facebook or Instagram.

This has been tolerated by Apple and Google, which act as gatekeepers for online content through the guidelines they set for their iPhone and Android app stores.

In a statement, X said that “posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content”.

The Elon Musk-owned platform said that it was “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them”.

It was also “closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed”.

The images, however, continued to be available and shared on Telegram.

Swift’s representatives did not respond to a request for comment.

The star has also been the subject of rightwing conspiracy theories and even fake videos in which she is falsely shown promoting high-priced French cookware.

‘Easier and cheaper’

“What’s happened to Taylor Swift is nothing new. For years, women have been targets of deepfakes without their consent,” said Yvette Clarke, a Democratic congresswoman from New York who has backed legislation to fight deepfake porn.

“And with advancements in AI, creating deepfakes is easier & cheaper,” she added.

Tom Kean, a Republican congressman, warned that “AI technology is advancing faster than the necessary guardrails. Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend”.

Legally mandated controls would require the passage of federal laws, which remains a long shot in a deeply divided US Congress.

US law currently affords tech platforms very broad protection from liability for content posted on their sites, and content moderation is voluntary or implicitly imposed by advertisers or the app stores.

Many well-publicised cases of deepfake audio and video have targeted politicians or celebrities, with women by far the biggest targets of graphic, sexually explicit images that are easily found on the Internet.

Software to create the images is widely available on the Web.

According to research cited by Wired magazine, 113,000 deepfake videos were uploaded to the most popular porn websites in the first nine months of 2023.

And research in 2019 from a startup found that 96% of deepfake videos on the internet were pornographic. – AFP
