SEOUL, South Korea: In 2020, as South Korean authorities were pursuing a blackmail ring that forced young women to make sexually explicit videos for paying viewers, they found something else floating through the dark recesses of social media: pornographic images with other people’s faces crudely attached.
They didn’t know what to do with these early attempts at deepfake pornography. In the end, the National Assembly enacted a vaguely worded law against making and distributing such material. But that did not prevent an AI-fuelled crime wave that has now taken the country’s misogynistic online culture to new depths.
In the past two weeks, South Koreans have been shocked to find that a rising number of young men and teenage boys had taken hundreds of social media images of classmates, teachers and military colleagues – almost all young women and girls, including minors – and used them to create sexually exploitative images and video clips with deepfake apps.
They have spread the material through chat rooms on the encrypted messaging service Telegram, some with as many as 220,000 members. The deepfakes usually combine a victim’s face with a body in a sexually explicit pose, taken from pornography. The technology is so sophisticated that the results are often hard for ordinary people to identify as fake, investigators say. As the country scrambles to address the threat, experts have noted that in South Korea, enthusiasm for new technologies can sometimes outpace concerns about their ethical implications.
But to many women, these deepfakes are just the latest online expression of a deep-rooted misogyny in their country – a culture that has now produced young men who consider it fun to share sexually humiliating images of women online.
“Korean society doesn’t treat women as fellow human beings,” said Lee Yu-jin, a student whose university is among the hundreds of middle schools, high schools and colleges where students have been victimised. She asked why the government had not done more “before it became a digital culture to steal photos of friends and use them for sexual humiliation”.
Online sexual violence is a growing problem globally, but South Korea is at the leading edge. Whether, and how, it can tackle the deepfake problem successfully will be watched by policymakers, school officials and law enforcement elsewhere.
The country has an underbelly of sexual criminality that has occasionally surfaced. A South Korean was convicted of running one of the world’s largest sites for images of child sexual abuse. A K-pop entertainer was found guilty of facilitating prostitution through a nightclub. For years, the police have battled spycam porn. And the mastermind of the blackmail ring investigated in 2020 was sentenced to 40 years in prison for luring young women, including teenagers, to make the videos that he sold online through Telegram chat rooms.
The rise of easy-to-use deepfake technology has added an insidious dimension to such forms of sexual violence: The victims are often unaware that they are victims until they receive an anonymous message, or a call from the police.
‘Slave’, ‘toilet’, ‘rag’
For one 30-year-old deepfake victim, whose name is being withheld to protect her privacy, the attack began in 2021 with an anonymous message on Telegram that said: “Hi!”
Over the next few hours, a stream of obscenities and deepfake images and video clips followed, featuring her face, taken from family trip photos she had posted on social media. Written on the body were words like “slave,” “toilet” and “rag”.
In April, she learned from the police that two of her former classmates at Seoul National University were among those who had been detained. Male graduates of the prestigious university, along with accomplices, had targeted scores of women, including a dozen former Seoul National students, with deepfake pornography. One of the men detained was sentenced to five years in prison last month.
“I cannot think of any reason they treated me like that, except that I was a woman,” she said. “The fact that there were people like them around me made me lose my faith in fellow human beings.”
She says she has struggled with trauma since the attack. Her heart races whenever she receives a message notification on her smartphone, or an anonymous call.
South Korea, whose pop culture is exported worldwide, has become the country most vulnerable to deepfake pornography. More than half of deepfakes globally target South Koreans, and the majority of those deepfakes victimise singers and actresses from the country, according to “2023 State of Deepfakes”, a study published by the US-based cybersecurity firm Security Hero. Leading K-pop agencies have declared war on deepfakes, saying they were collecting evidence and threatening lawsuits against their creators and distributors.
Still, the problem is intensifying. South Korean police reported 297 cases of deepfake sex crimes between January and July this year, compared with 156 for all of 2021, when such data was first collected.
It was not until last month, when local news media exposed the extensive traffic in deepfakes on Telegram, that President Yoon Suk Yeol ordered his government to “root them out”. Critics of Yoon noted that during his 2022 campaign for the presidency, he had denied that there was structural gender-based discrimination in South Korea and had promised to abolish its ministry of gender equality.
News coverage of the rise in deepfakes this year led to panic among young women, many of whom deleted selfies and other personal images from their social media accounts, fearing they would be used for deepfakes. Chung Jin-kwon, who was a middle-school principal before assuming a role at the Seoul Metropolitan Office of Education last month, said his former school had discussed whether to omit student photos from yearbooks.
“Some teachers had already declined to have their photos there, replacing them with caricatures,” Chung said.
Young people in South Korea, one of the world’s most wired countries, become tech-savvy from an early age. But critics say its school system is so focused on preparing them for the all-important college entrance exams that they aren’t taught to handle new technology in an ethical way.
“We produce exam-problem-solving machines,” Chung said. “They don’t learn values.”
A push for tougher laws
Kim Ji-hyun, a Seoul city official whose team has counseled 200 teenagers implicated in digital sexual exploitation since 2019, said that some boys had used deepfakes to take revenge on ex-girlfriends – and that in some cases, girls had used them to ostracise classmates. But many young people were first drawn to deepfakes out of curiosity, Kim said.
Chat room operators attracted them with incentives, including Starbucks coupons, and asked them to provide photos and personal data of women they knew. Some of the Telegram channels, called “rape and humiliation rooms”, targeted individuals or women from certain schools, said Park Seong-hye, a team leader at the government-funded Women’s Human Rights Institute of Korea, who has investigated digital sex crimes and provided help to victims.
Under the law enacted in 2020, people convicted of making sexually explicit or abusive deepfakes with an intent to distribute them can be sent to prison for up to five years. Those who seek to profit financially from distributing such content can face up to seven years. But there is no law against buying, storing or watching deepfakes.
Investigators must have court approval to go undercover to access deepfake chat rooms, and they can only do so to investigate reports that minors have been sexually abused. The process can also be slow.
“You find a chat room on a holiday, but by the time you get court approval, it’s gone,” said Hahm Young-ok, a senior investigator of online crimes at the National Police Agency.
The government has promised to push for tougher laws against buying or viewing sexually exploitative deepfakes. This month, the police investigating the latest spate of deepfakes said they had detained seven male suspects, six of them teenagers.
Pornography is censored on South Korea’s Internet, but people can get around the controls by using virtual private networks, and the ban is hard to enforce on social media channels. The police have indicated that they might investigate whether Telegram had abetted deepfake sex crimes. Last month, Telegram’s founder, Pavel Durov, was arrested in France and charged with a range of offenses, including enabling the distribution of child sexual abuse material.
Telegram said in a statement that it “has been actively removing content reported from Korea that breached its terms of service and will continue to do so.”
Meanwhile, the government is being pressured to force online platforms to do more to filter out content like deepfake pornography.
“It’s time to choose between protecting the platforms and protecting our children and adolescents,” said Lee Soo-jung, a professor of forensic psychology at Kyonggi University. “What we see happening now in 2024 was foretold back in 2020, but we have done nothing in between.” – The New York Times