US receives thousands of reports of AI-generated child abuse content as risk grows


Meta's CEO Mark Zuckerberg, TikTok's CEO Shou Zi Chew, X Corp's CEO Linda Yaccarino, Co-founder and CEO of Snap Inc. Evan Spiegel, and Discord's CEO Jason Citron attend the Senate Judiciary Committee hearing on online child sexual exploitation, at the U.S. Capitol, in Washington, U.S., January 31, 2024. REUTERS/Nathan Howard

(Reuters) - The U.S. National Center for Missing and Exploited Children (NCMEC) said it had received 4,700 reports last year about content generated by artificial intelligence that depicted child sexual exploitation.

The NCMEC told Reuters the figure reflected a nascent problem that is expected to grow as AI technology advances.

In recent months, child safety experts and researchers have raised the alarm about the risk that generative AI tech, which can create text and images in response to prompts, could exacerbate online exploitation.

The NCMEC has not yet published the total number of child abuse content reports from all sources that it received in 2023, but in 2022 it received reports of about 88.3 million files.

"We are receiving reports from the generative AI companies themselves, (online) platforms and members of the public. It's absolutely happening," said John Shehan, senior vice president at NCMEC, which serves as the national clearinghouse to report child abuse content to law enforcement.

The chief executives of Meta Platforms, X, TikTok, Snap and Discord testified in a Senate hearing on Wednesday about online child safety, where lawmakers questioned the social media and messaging companies about their efforts to protect children from online predators.

Researchers at Stanford Internet Observatory said in a report in June that generative AI could be used by abusers to repeatedly harm real children by creating new images that match a child's likeness.

Content flagged as AI-generated is becoming "more and more photorealistic," making it challenging to determine whether the victim is a real person, said Fallon McNulty, director of NCMEC's CyberTipline, which receives reports of online child exploitation.

OpenAI, creator of the popular ChatGPT, has set up a process to send reports to NCMEC, and the organization is in conversations with other generative AI companies, McNulty said.

(Reporting by Sheila Dang in Austin, Editing by Kylie MacLellan)