AI-generated child sexual abuse images could flood the Internet. A watchdog is calling for action


Computer-generated child sexual abuse images made with artificial intelligence tools like Stable Diffusion are starting to proliferate on the Internet and are so realistic that they can be indistinguishable from photographs depicting actual children, according to a new report. — AP

NEW YORK: The already-alarming proliferation of child sexual abuse images on the Internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Oct 24.

In a written report, the UK-based Internet Watch Foundation urges governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.

