Child abuse images removed from AI image-generator training source, researchers say


Students walk on the Stanford University campus in Stanford, California. — AP

Artificial intelligence researchers said on Aug 30 that they had deleted more than 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools.

The LAION research dataset is a huge index of online images and captions that’s been a source for leading AI image-makers such as Stable Diffusion and Midjourney.

But a report last year by the Stanford Internet Observatory found it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children.

That December report led LAION, the nonprofit Large-scale Artificial Intelligence Open Network, to immediately remove its dataset. Eight months later, LAION said in a blog post that it had worked with the Stanford University watchdog group and anti-abuse organisations in Canada and the United Kingdom to fix the problem and release a cleaned-up dataset for future AI research.

Stanford researcher David Thiel, author of the December report, commended LAION for significant improvements but said the next step is to withdraw from distribution the “tainted models” that are still able to produce child abuse imagery.

One of the LAION-based tools that Stanford identified as the “most popular model for generating explicit imagery” – an older and lightly filtered version of Stable Diffusion – remained easily accessible until Aug 29, when the New York-based company Runway ML removed it from the AI model repository Hugging Face. Runway said in a statement on Friday that it was a “planned deprecation of research models and code that have not been actively maintained.”

The cleaned-up version of the LAION dataset comes as governments around the world are taking a closer look at how some tech tools are being used to make or distribute illegal images of children.

San Francisco's city attorney earlier this month filed a lawsuit seeking to shut down a group of websites that enable the creation of AI-generated nudes of women and girls. The alleged distribution of child sexual abuse images on the messaging app Telegram is part of what led French authorities to bring charges on Wednesday against the platform's founder and CEO, Pavel Durov.

Durov's arrest “signals a really big change in the whole tech industry that the founders of these platforms can be held personally responsible,” said David Evan Harris, a researcher at the University of California, Berkeley, who recently asked Runway why the problematic AI image-generator was still publicly accessible. It was taken down days later. – AP
