US teen girls confront an epidemic of deepfake nudes in schools



WESTFIELD, New Jersey: Westfield Public Schools held a regular board meeting in late March at the local high school, a red brick complex with a scoreboard outside proudly welcoming visitors to the “Home of the Blue Devils” sports teams.

But it was not business as usual for Dorota Mani.

In October, some 10th grade girls at Westfield High School – including Mani’s 14-year-old daughter, Francesca – alerted administrators that boys in their class had used artificial intelligence software to fabricate sexually explicit images of them and were circulating the faked pictures.

Five months later, the Manis and other families say, the district has done little to publicly address the doctored images or update school policies to hinder exploitative AI use.

“It seems as though the Westfield High School administration and the district are engaging in a master class of making this incident vanish into thin air,” Mani, the founder of a local preschool, admonished board members during the meeting.

In a statement, the school district said it had opened an “immediate investigation” upon learning about the incident, had immediately notified and consulted with police, and had provided group counseling to the sophomore class.

“All school districts are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere,” Raymond González, superintendent of Westfield Public Schools, said in the statement.

Blindsided last year by the sudden popularity of AI-powered chatbots such as ChatGPT, schools across the United States scrambled to contain the text-generating bots in an effort to forestall student cheating. Now a more alarming AI image-generating phenomenon is shaking schools.

Boys in several states have used widely available “nudification” apps to pervert real, identifiable photos of their clothed female classmates, shown attending events including school proms, into graphic, convincing-looking images of the girls with exposed AI-generated breasts and genitalia. In some cases, boys shared the faked images in the school lunchroom, on the school bus or through group chats on platforms such as Snapchat and Instagram, according to school and police reports.

Such digitally altered images – known as “deepfakes” or “deepnudes” – can have devastating consequences. Child sexual exploitation experts say the use of nonconsensual, AI-generated images to harass, humiliate and bully young women can harm their mental health, reputations and physical safety as well as pose risks to their college and career prospects.

Last month, the FBI warned that it is illegal to distribute computer-generated child sexual abuse material, including realistic-looking AI-generated images of identifiable minors engaging in sexually explicit conduct.

Yet the student use of exploitative AI apps in schools is so new that some districts seem less prepared to address it than others. That can make safeguards precarious for students.

“This phenomenon has come on very suddenly and may be catching a lot of school districts unprepared and unsure what to do,” said Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory, who writes about legal issues related to computer-generated child sexual abuse imagery.

At Issaquah High School near Seattle last fall, a police detective investigating complaints from parents about explicit AI-generated images of their 14- and 15-year-old daughters asked an assistant principal why the school had not reported the incident to police, according to a report from the Issaquah Police Department.

The school official then asked “what was she supposed to report”, the police document said, prompting the detective to inform her that schools are required by law to report sexual abuse, including possible child sexual abuse material. The school subsequently reported the incident to Child Protective Services, the police report said. (The New York Times obtained the police report through a public-records request.)

In a statement, the Issaquah School District said it had talked with students, families and police as part of its investigation into the deepfakes. The district also “shared our empathy”, the statement said, and provided support to students who were affected.

The statement added that the district had reported the “fake, artificial-intelligence-generated images to Child Protective Services out of an abundance of caution”, noting that “per our legal team, we are not required to report fake images to the police”.

At Beverly Vista Middle School in Beverly Hills, California, administrators contacted police in February after learning that five boys had created and shared AI-generated explicit images of female classmates. Two weeks later, the school board approved the expulsion of five students, according to district documents. (The district said California’s education code prohibited it from confirming whether the expelled students were the students who had manufactured the images.)

Michael Bregy, superintendent of the Beverly Hills Unified School District, said he and other school leaders wanted to set a national precedent that schools must not permit pupils to create and circulate sexually explicit images of their peers.

“That’s extreme bullying when it comes to schools,” Bregy said, noting that the explicit images were “disturbing and violative” to girls and their families. “It’s something we will absolutely not tolerate here.”

Schools in the small, affluent communities of Beverly Hills and Westfield were among the first to publicly acknowledge deepfake incidents. The details of the cases – described in district communications with parents, school board meetings, legislative hearings and court filings – illustrate the variability of school responses.

The Westfield incident began last summer when a male high school student asked to friend a 15-year-old female classmate who had a private Instagram account, according to a lawsuit against the boy and his parents brought by the young woman and her family. (The Manis said they are not involved with the lawsuit.)

After she accepted the request, the male student copied photos of her and several other female schoolmates from their social media accounts, court documents say. Then he used an AI app to fabricate sexually explicit, “fully identifiable” images of the girls and shared them with schoolmates via a Snapchat group, court documents say.

Westfield High began to investigate in late October. While administrators quietly took some boys aside to question them, Francesca Mani said, they called her and other 10th-grade girls who had been subjected to the deepfakes to the school office by announcing their names over the school intercom.

That week, Mary Asfendis, principal of Westfield High, sent an email to parents alerting them to “a situation that resulted in widespread misinformation”. The email went on to describe the deepfakes as a “very serious incident”. It also said that, despite student concern about possible image-sharing, the school believed that “any created images have been deleted and are not being circulated”.

Dorota Mani said Westfield administrators had told her that the district suspended the male student accused of fabricating the images for one or two days.

Soon after, she and her daughter began publicly speaking out about the incident, urging school districts, state lawmakers and Congress to enact laws and policies specifically prohibiting explicit deepfakes.

“We have to start updating our school policy,” Francesca Mani, now 15, said in a recent interview. “Because if the school had AI policies, then students like me would have been protected.”

Parents including Dorota Mani also lodged harassment complaints with Westfield High last fall over the explicit images. During the March meeting, however, Mani told school board members that the high school had yet to provide parents with an official report on the incident.

Westfield Public Schools said it could not comment on any disciplinary actions for reasons of student confidentiality. In a statement, González, the superintendent, said the district was strengthening its efforts “by educating our students and establishing clear guidelines to ensure that these new technologies are used responsibly”.

Beverly Hills schools have taken a stauncher public stance.

When administrators learned in February that eighth grade boys at Beverly Vista Middle School had created explicit images of 12- and 13-year-old female classmates, they quickly sent a message – subject line: “Appalling Misuse of Artificial Intelligence” – to all district parents, staff, and middle and high school students. The message urged community members to share information with the school to help ensure that students’ “disturbing and inappropriate” use of AI “stops immediately”.

It also warned that the district was prepared to institute severe punishment. “Any student found to be creating, disseminating, or in possession of AI-generated images of this nature will face disciplinary actions,” including a recommendation for expulsion, the message said.

Bregy, the superintendent, said schools and lawmakers needed to act quickly because the abuse of AI was making students feel unsafe in schools.

“You hear a lot about physical safety in schools,” he said. “But what you’re not hearing about is this invasion of students’ personal, emotional safety.” – The New York Times
