Decades of image-based sexual abuse: How perpetrators evade tech platforms and the authorities


FILE PHOTO: This photograph shows a smartphone screen displaying the logos of major social media platforms, including Instagram, Facebook, LinkedIn, Reddit, Telegram, X, Bluesky, TikTok and WhatsApp. From IRC to deepfakes, perpetrators of image-based sexual abuse have taken advantage of regulatory lag. - AFP

SINGAPORE: When Amanda (not her real name), a part-time model in her 20s, responded to an audition call in 2024, she did not expect to be blackmailed with AI-generated deepfakes.

The advertisement was innocuous, resembling others in the Telegram group chat for freelancers in the local modelling industry.

It was the source of four of her past gig opportunities.

After the audition, a Caucasian man, who posed as a representative of an international agency, sent her photos taken during their meet-up which were edited by artificial intelligence into sexually explicit images.

They came with a threat that he would circulate them unless she paid thousands of dollars.

She also began receiving messages from anonymous numbers, claiming to have seen the photos and asking her to meet.

The harassment persisted, extending to her loved ones, even after she made a police report.

“Even filing the report was so traumatising,” says Amanda, who has since stepped away from modelling. “At that point in time, I just forced myself to do it because I didn’t want someone else to go through this.”

Even so, the perpetrator continued putting out more audition calls until another woman in the Telegram group called him out as a scammer. To date, Amanda does not know his true identity.

Image-based sexual abuse – creating, sharing or threatening to share sexually explicit imagery of someone without his or her consent – has taken on a new face in the age of AI.

But the experience of being violated, isolated and uncertain of what to do against often-anonymous perpetrators is one that survivors have endured for decades.

Cat-and-mouse game

Dr Michelle Ho, assistant professor at the National University of Singapore’s (NUS) department of communication and new media, and the university’s Campus Sexual Misconduct in the Digital Age (CASMIDA) project team have been investigating the issue of online sexual harms in local universities since 2021.

Across two studies conducted between 2021 and 2025, they found that around two in five university students surveyed have experienced technology-facilitated sexual violence.

Digital sexual harassment was the most common type, followed by image-based sexual abuse.

“This suggests to us that technology-facilitated sexual violence rates – and by extension, image-based sexual abuse rates – have remained consistently high in the past five years,” says Ho, whose team has surveyed more than 3,000 undergraduates.

A 2025 study by researchers from Google and RMIT University in Australia of over 16,000 adults in 10 countries – including Australia, South Korea and the United States (but not Singapore) – found that more than one in five had experienced image-based sexual abuse.

Another study by Google and RMIT researchers surveying over 7,000 respondents in Australia, the United Kingdom and the US found that around three per cent of respondents have created, shared or threatened to share sexual AI deepfakes.

A higher rate – 18 per cent – reported deliberately viewing such material.

The persistence partly comes down to how perpetrators’ tactics have evolved, amid growing scrutiny from the authorities and tech platforms.

The Straits Times first reported in 2025 that a network of Telegram groups is distributing image-based sexual abuse material of men and appears to have been reconstituted five times.

Each new iteration sprang up after scrutiny shuttered the previous one, usually under the same name but with a different number at the end.

The network sells access to its collection of explicit images and videos of men that it claims is shared non-consensually.

To find more customers, the group’s administrators conduct lottery-style games, dangling access to particular pictures and videos as rewards.

Users must share the group’s links on social media platform X (formerly known as Twitter), with one lucky user gaining “free” access to select files.

“I will select one person who completes all three steps above to win a random video in super High Definition quality,” entices one of the posts.

Beyond word of mouth, this appears to be how the group enlists new members.

This cat-and-mouse game extends to X, which has suspended all accounts used by administrators to post the links users are asked to reshare.

This has, however, not stopped the administrators from creating yet more accounts to continue the cycle.

For a one-off payment of S$600 (US$471), members are offered access to a library of thousands of images and clips.

The group is said to collect payments through cryptocurrency and multi-currency digital wallets, according to an ST report on May 1.

Another network advertising itself on the local sex-themed online forum Sammyboy focuses on explicit material of women described as “ordinary girls”, “wives”, “girlfriends” and sex workers.

The forum has long drawn scrutiny from the authorities over its role in sharing illegal material.

The network sells access to its explicit material using Verotel, a Netherlands-based payment provider, which was linked to “nudify” and deepfake pornography sites in a 2021 investigation by British media outlet The Times.

Payment providers used by mainstream online businesses tend to have rules against explicit and unlawful material.

On the forum, users share links to this network as well as advice on how one can circumvent the internet blocking of adult websites. Others share links to tools for creating AI deepfakes.

Long shadow of abuse

The tactics used by these groups to bypass tech platforms and the authorities go back decades.

Before WhatsApp, Telegram and MSN, there was Internet Relay Chat (IRC), a text-based messaging system introduced in the 1980s.

It was through an IRC group popular among gay men that Brandon (not his real name), a 42-year-old biomedical researcher, got to know an anonymous user who would cast a shadow over his life for the next two decades.

In 2005, after the two became friends over the platform and exchanged pictures over ICQ (a now-defunct messaging platform), the anonymous user began blackmailing Brandon.

Threatening to “out” him to his social circles, he instructed Brandon to show up at an address and perform a sex act on the person there.

Brandon threatened to file a police report. After that, their communication ceased.

In 2023, Brandon found out that the anonymous perpetrator was a friend who had since enmeshed himself within his social circle.

This discovery came to light when Brandon was moving files off an old device and realised the anonymous perpetrator and his “friend” shared the same phone number.

After connecting the dots, Brandon confronted his friend and warned him against contacting him or anyone in his circle again.

Looking back, Brandon says he never sought assistance from the authorities because doing so might have exposed him and his friends, in a time when being out of the closet was less accepted.

The shame heaped upon celebrities whose intimate images were leaked online also dissuaded him from sharing his experiences with others.

“There was no guarantee that we would be taken seriously or that my friends’ identities would be protected,” he says.


Even as image-based sexual abuse has taken on new forms in subsequent decades, the experiences of victims follow similar contours.

Natalie (not her real name), a 27-year-old who works in the media sector, says that when she had her brush with image-based sexual abuse in 2019, she lacked the vocabulary to describe what was happening to her. Neither did her friends, who largely laughed it off.

On blogging platform Tumblr, an anonymous user had taken photos from her social media profiles and posted them alongside nude images of another woman, presenting them as the same person.

“This idea of violation, it’s lost on so many people that they package it as: ‘You should take it as a compliment,’” she says. “You’re like their fantasy, so why are you making a big deal out of this?”

Her cousin was the first to tell her about these images, which were reshared more than 200 times.

Afterwards, anonymous users online harassed her with messages saying that they had seen her nude photos.

“I didn’t even know what to feel,” she says. “I would feel really defensive and say I don’t care. But looking back, I really did care. It really affected how I perceived myself and how people perceived me.”

The experience made her seek out others who had gone through something similar.

After joining The Moxie Collective, an informal community of survivors who discuss their experiences through in-person events so they can heal together, she was introduced to the concept of image-based sexual abuse in 2023.

From meeting other survivors and hearing their stories of dealing with spy cameras, deepfakes and other ways technology intertwines with sexual harm, she says one concerning pattern stands out: those who have been victimised once are often targeted for re-victimisation by other perpetrators, who reuse explicit material of them online.

“There are specific genres of porn for spy cameras, or this idea of the forbidden, leaked pornography,” says Natalie, noting that the violation of someone’s agency is itself the source of sexual gratification. “People don’t realise there’s a human behind what they’re watching.”

“Nobody does what happened to me any more,” she says. “That’s kind of obsolete. Now, everyone is leaning into deepfake pornography because of its lifelike realism.”

Such AI deepfakes go beyond editing one’s face onto a still image of a nude body, and now involve motion video.

Regulatory arms race

The term “regulatory lag” is used to describe the gap of time between the emergence of a new criminal method (such as deepfakes) and the implementation of effective oversight.

Tests conducted by the Infocomm Media Development Authority in 2026 found that most major social media platforms such as Facebook and Instagram took action on more than half of user reports of online harms, marking an improvement from 2025.

The exception was TikTok, which took action on a quarter of user reports.

The average response time to user reports fell from between three and 10 days to four days for most platforms.

An online safety commission is being set up in the first half of 2026. This new government agency will enable victims to request that takedown directions be issued to platforms, or to request perpetrator information (such as an IP address) from platforms if they wish to commence legal proceedings.

The commission, which will act as a one-stop shop for victims, is one of the measures to be implemented as part of the Online Safety (Relief and Accountability) Bill introduced in 2025.

Survivors of image-based sexual abuse speaking to ST welcome these changes, but add that there is a human cost to the time taken by tech platforms and the authorities to address the issue.

Charlotte (not her real name), a 25-year-old model and tutor who was non-consensually filmed during sexual intercourse and shared her story with ST in April 2025, says that crucial gaps remain.

In 2021, the then 20-year-old undergraduate reported her perpetrator to the police and his university.

After co-founding The Moxie Collective, she was subject to a new form of image-based sexual abuse.

In 2025, an anonymous user posted images of her face – along with her social media handle and full name – next to explicit images of another woman, presenting these as her nude images.

After making a police report, Charlotte recalls being told to make a Protection from Harassment Act (POHA) report.

However, as she did not know the identity of the person responsible, she was unable to do so. POHA reports require a respondent to be named.

“One gap I experienced first-hand was that POHA is difficult to apply if the perpetrator is anonymous or hiding behind a VPN,” says Charlotte.

Similarly, Colin (not his real name), 41, recalls a harrowing experience a decade ago when explicit images and videos of him were shared online on Tumblr, something he found out only when a friend who had spotted them alerted him.

“I contacted the dude to remove them, but he insisted on keeping them online,” he says.

After lodging a complaint with Tumblr, the platform removed the pictures and videos. However, the perpetrator retaliated by creating a website to publish Colin’s real name alongside his nude images.

“It was terrible, there was nothing I could do about it because he was overseas,” says Colin. “I tried contacting the authorities, but they told me there was nothing they could do. Even if they issued a block website order, the person could easily set up another site.”

How platforms find abuse material

One initiative aiming to address the issue is StopNCII.org, created by the UK-based non-profit Revenge Porn Helpline.

The website allows victims to generate a hash (a digital fingerprint) based on a video or an image on their device.

This hash, but not the original file, is shared with participating partners – which include Facebook, Pornhub and OnlyFans – who look for matches to the hash when removing material that violates their policies.

Such an approach is also used by platforms to target child sexual abuse material and content affiliated with terrorist groups.

However, hash sharing is also imperfect, as alterations of the underlying imagery can allow it to escape undetected.

It forms part of a broader effort to stem the spread of non-consensual intimate images.
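The hash-matching approach described above can be illustrated with a minimal sketch. This is a toy illustration, not StopNCII's actual implementation: it uses an ordinary cryptographic hash to show both how fingerprint matching works and why even a tiny alteration to the file defeats an exact-match fingerprint.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a hex digest that acts as the file's 'digital fingerprint'."""
    return hashlib.sha256(data).hexdigest()


# A platform keeps only a block list of hashes submitted by victims,
# never the original images themselves.
reported_image = b"\x89PNG...original image bytes..."
block_list = {fingerprint(reported_image)}


def should_remove(upload: bytes) -> bool:
    """Flag an upload if its fingerprint matches the block list."""
    return fingerprint(upload) in block_list


# An exact re-upload is caught...
print(should_remove(reported_image))  # True

# ...but altering even one byte (e.g. by re-encoding or cropping)
# produces a completely different hash, so the copy slips through.
altered = reported_image + b"\x00"
print(should_remove(altered))  # False
```

In practice, platforms mitigate this weakness with perceptual hashes (such as Microsoft's PhotoDNA or Meta's PDQ) that tolerate resizing and re-encoding, but heavier edits can still evade detection, which is the limitation noted above.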

NUS’ Ho and the CASMIDA team note that many social media platforms and dating apps have made strides in addressing cyberflashing – the practice of sending unsolicited explicit images – and the non-consensual sharing of images through safeguards such as automatic blurring and blocking users from taking screenshots.

At the same time, there is a growing market for non-consensual intimate images on widely used platforms like Telegram, as well as increasingly accessible generative-AI tools, such as "nudify" apps, for creating such images.

“Because of the fast pace of change within digital landscapes and the internet, the threat of image-based sexual abuse, how it manifests and how we perceive it are constantly evolving,” says Ho.

How Kay Lii, chief executive of SG Her Empowerment (SHE), a local non-profit that runs an online harms support centre, says there is now more clarity from a regulatory standpoint “compared with just three years ago when SHE was newly established”.

“Back then, responses were largely reactive and victim-driven,” she adds.

Since 2023, SHE’s support centre has assisted more than 500 clients with counselling, legal advice and liaising with tech platforms.

There are now clearer regulatory powers, statutory duties for platforms and more defined pathways for taking down content.

“This is a significant improvement from when takedown was more of a goodwill gesture,” says How.

However, platform policies differ and thresholds for removing content are inconsistent.

Some platforms deem a piece of content as violating their policies only if the face of the victim is recognisable. Victims seeking to take down content sometimes receive a response only if they escalate the issue through a “trusted flagger”, such as SHE’s support centre.

And even when the content is removed, re-uploads are common, and abuse material often spreads quickly across multiple platforms.

Once content moves onto private conversations or pornography sites, efforts to limit its reach become significantly harder.

“There is an important distinction between limiting visibility and permanently erasing content,” says How. “Complete eradication is rare once material has been circulated widely.”

She adds: “The work has shifted from ‘Can we remove it?’ to ‘How fast, how consistently and how comprehensively can we act before the harm multiplies?’”

‘I don’t want to make life harder for myself’

However, accessibility of technology alone does not explain the persistence of image-based sexual abuse in Singapore, as the culpability falls on those who exploit these technologies, says Ho.

Over the course of their five-year study, Ho and her team have found a slew of normalising behaviours that entrench the issue here.

Image-based sexual abuse is often downplayed or minimised. Victim blaming is prevalent and there is often a perception that such abuse is routine, especially for women.

Male victims are also reluctant to identify as such because victims are often assumed to be women.

“All of this is to say that we cannot talk about the technical factors that facilitate image-based sexual abuse without discussing the social factors underlying such harms in Singapore,” says Ho.

Among the survivors speaking to ST, a commonality is that one of the most hurtful aspects of this form of abuse is the insensitivity of those they open up to.

For Amanda, other than close friends who received her deepfake photos and one who accompanied her to the police station, none of her family members or other friends know she had been blackmailed.

Many of them still do not see deepfakes as a genuine form of abuse, she laments. That the images are not real does not lessen the scorn and shame heaped on victims. For months after, she had nightmares about the experience.

“You can’t sleep thinking if other people saw and how many people it might have gone to,” she says, adding that vicious mockery ensues regardless of whether the images are real.

Being shamed for becoming a victim of such abuse has convinced her that it is better to keep things to herself in order to move on.

“I don’t want to make life harder for myself,” she adds. - The Straits Times/ANN
