AI is making death threats way more realistic


Caitlin Roper, who received death threats and hundreds of violent AI-generated images depicting her dead, in London on Oct 13, 2025. Online harassers are generating images and sounds that simulate their victims in violent situations. — Charlotte Hadden/The New York Times

Even though she was toughened by years spent working in internet activism, Caitlin Roper found herself traumatised by the online threats she received this year.

The posts were part of a surge of vitriol directed at Roper and her colleagues at Collective Shout, an Australian activist group, on X and other social media platforms. Some of it was seemingly enabled – and given a visceral realism – by generative artificial intelligence. In some of the videos, Roper was wearing a blue floral dress that she does, in fact, own.

“It’s these weird little details that make it feel more real and, somehow, a different kind of violation,” she said. “These things can go from fantasy to more than fantasy.”

AI is already raising concerns for its ability to mimic real voices in service of scams or to produce deepfake pornography without a subject’s permission. Now the technology is also being used for violent threats – priming them to maximise fear by making them far more personalised, more convincing and more easily delivered.

“Two things will always happen when technology like this gets developed: We will find clever and creative and exciting ways to use it, and we will find horrific and awful ways to abuse it,” said Hany Farid, a professor of computer science at the University of California, Berkeley. “What’s frustrating is that this is not a surprise.”

Digitally generated threats have been possible for at least a few years. A judge in Florida was sent a video in 2023, most likely made using a character customisation tool in the Grand Theft Auto 5 video game, that featured an avatar who looked and walked like her being hacked and shot to death.

But threatening images are rapidly becoming easier to make, and more persuasive. One YouTube page had more than 40 realistic videos – most likely made using AI, according to experts who reviewed the channel – each showing a woman being shot. (YouTube, after The New York Times contacted it, said it had terminated the channel for “multiple violations” of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring. In July, a lawyer in Minneapolis said xAI’s Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body.

Until recently, artificial intelligence could replicate real people only if they had a huge online presence, such as film stars with throngs of publicly accessible photos. Now a single profile image will suffice, said Farid, who co-founded GetReal Security, a service that identifies malicious digital content. (Roper said she had worn the blue floral dress in a photo published a few years ago in an Australian newspaper.)

The same is true of voices – what once took hours of example data to clone now requires less than a minute.

“The concern is that now, almost anyone with no skills but with motive or lack of scruples can easily use these tools to do damage,” said Jane Bambauer, a professor who teaches about AI and the law at the University of Florida.

Worries about AI-assisted threats and extortion intensified with the September introduction of Sora, a text-to-video app from OpenAI. The app, which allows users to upload images of themselves to be incorporated into hyperrealistic scenes, was quickly used to depict actual people in frightening situations.

The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person.

“From the perspective of identity, everyone’s vulnerable,” Farid said.

An OpenAI spokesperson said the company relied on multiple defences, including guardrails to block unsafe content from being created, experiments to uncover previously unknown weaknesses and automated content moderation systems. (The Times sued OpenAI in 2023, claiming copyright infringement of news content related to AI systems, an assertion that OpenAI has denied.)

Experts in AI safety, however, said companies had not done nearly enough. Alice Marwick, director of research at Data & Society, a nonprofit organisation, described most guardrails as “more like a lazy traffic cop than a firm barrier; you can get a model to ignore them and work around them.”

Roper said the torrent of online abuse starting this summer – including hundreds of harassing posts sent specifically to her – was linked to her work on a campaign to shut down violent video games glorifying rape, incest and sexual torture. On X, where most of the abuse appeared, she said, some harassing images and accounts were taken down. But the company also told her repeatedly that other posts depicting her violent death did not violate the platform’s terms of service. In fact, X once included one of her harassers on a list of recommended accounts for her to follow.

Some of the harassers also claimed to have used Grok not just to create the images but to research how to find the women at home and at local cafes.

Fed up, Roper decided to post some examples. Soon after, according to screenshots, X told her that she was in breach of its safety policies against gratuitous gore and temporarily locked her account.

Neither X nor xAI, the company that owns Grok, responded to requests for comment.

AI is also making other kinds of threats more convincing – for example, swatting, the practice of placing false emergency calls with the aim of inciting a large response from the police and emergency personnel. AI “has significantly intensified the scale, precision and anonymity” of such attacks, the National Association of Attorneys General said this summer. On a lesser scale, a spate of AI-generated videos showing supposed home invasions has caused targeted residents to call police departments around the country.

Now perpetrators of swatting can compile convincing false reports by cloning voices and manipulating images. One serial swatter used simulated gunfire to suggest that a shooter was in the parking lot of a Washington state high school. The campus was locked down for 20 minutes; police officers and federal agents showed up.

AI was already complicating schools’ efforts to protect students, raising concerns about personalised sexual images or rumours spread via fake videos, said Brian Asmus, a former police chief who was working as the senior manager of safety and security for the school district when the swatter called. Now the technology is adding an extra security challenge, making false alarms harder to distinguish from true emergency calls.

“How does law enforcement respond to something that’s not real?” Asmus asked. “I don’t think we’ve really gotten ahead of it yet.” – ©2025 The New York Times Company

This article originally appeared in The New York Times.
