NEW YORK: The already-alarming proliferation of child sexual abuse images on the Internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos, a watchdog agency warned on Oct 24.
In a written report, the UK-based Internet Watch Foundation urges governments and technology providers to act quickly before a flood of AI-generated images of child sexual abuse overwhelms law enforcement investigators and vastly expands the pool of potential victims.