Schools are facing a growing problem of students using artificial intelligence to transform innocent images of classmates into sexually explicit deepfakes.
The fallout from the spread of the manipulated photos and videos can create a nightmare for the victims.
The challenge for schools was highlighted this fall when AI-generated nude images swept through a Louisiana middle school. Two boys ultimately were charged, but not before one of the victims was expelled for starting a fight with a boy she accused of creating the images of her and her friends.
"While the ability to alter images has been available for decades, the rise of A.I. has made it easier for anyone to alter or create such images with little to no training or experience,” Lafourche Parish Sheriff Craig Webre said in a news release. "This incident highlights a serious concern that all parents should address with their children.”
Here are key takeaways from AP’s story on the rise of AI-generated nude images and how schools are responding.
The prosecution stemming from the Louisiana middle school deepfakes is believed to be the first under the state’s new law, said Republican state Sen. Patrick Connick, who authored the legislation.
The law is one of many across the country taking aim at deepfakes. In 2025, at least half the states enacted legislation addressing the use of generative AI to create seemingly realistic, but fabricated, images and sounds, according to the National Conference of State Legislatures. Some of the laws address simulated child sexual abuse material.
Students also have been prosecuted in Florida and Pennsylvania and expelled in places like California. A fifth-grade teacher in Texas was also charged with using AI to create child pornography of his students.
Deepfakes started as a way to humiliate political opponents and young starlets. Until the past few years, people needed some technical skills to make them realistic, said Sergio Alexander, a research associate at Texas Christian University who has written about the issue.
"Now, you can do it on an app, you can download it on social media, and you don’t have to have any technical expertise whatsoever,” he said.
He described the scope of the problem as staggering. The National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its cyber tipline soared from 4,700 in 2023 to 440,000 in just the first six months of 2025.
Sameer Hinduja, the co-director of the Cyberbullying Research Center, recommends that schools update their policies on AI-generated deepfakes and get better at explaining them. That way, he said, “students don’t think that the staff, the educators are completely oblivious, which might make them feel like they can act with impunity.”
He said many parents assume that schools are addressing the issue when they aren’t.
"So many of them are just so unaware and so ignorant,” said Hinduja, who is also a professor in the School of Criminology and Criminal Justice at Florida Atlantic University.
"We hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn’t happening amongst their youth.”
AI deepfakes are different from traditional bullying because instead of a nasty text or rumor, there is a video or image that often goes viral and then continues to resurface, creating a cycle of trauma, Alexander said.
Many victims become depressed and anxious, he said.
"They literally shut down because it makes it feel like, you know, there’s no way they can even prove that this is not real - because it does look 100% real,” he said.
Parents can start the conversation by casually asking their kids if they’ve seen any funny fake videos online, Alexander said.
Take a moment to laugh at some of them, like Bigfoot chasing after hikers, he said. From there, parents can ask their kids, “Have you thought about what it would be like if you were in this video, even the funny one?” And then parents can ask if a classmate has made a fake video, even an innocuous one.
“Based on the numbers, I guarantee they’ll say that they know someone,” he said.
If kids encounter things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, founder and CEO of The Social Institute, which educates people on responsible social media use and has helped schools develop policies. She said many kids fear their parents will overreact or take their phones away.
She uses the acronym SHIELD as a roadmap for how to respond. The “S” stands for “stop” and don’t forward. “H” is for “huddle” with a trusted adult. The “I” is for “inform” any social media platforms on which the image is posted. “E” is a cue to collect “evidence,” like who is spreading the image, but not to download anything. The “L” is for “limit” social media access. The “D” is a reminder to “direct” victims to help.
"The fact that that acronym is six steps I think shows that this issue is really complicated,” she said. – AP
Those struggling with mental health issues can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim’s (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to befrienders.org.my/centre-in-malaysia for a full list of numbers nationwide and operating hours, or email sam@befrienders.org.my.
