Child abuse images removed from AI image-generator training source, researchers say


Students walk on the Stanford University campus in Stanford, California. The cleaned-up version of the LAION dataset comes as governments around the world are taking a closer look at how some tech tools are being used to make or distribute illegal images of children. — AP

Artificial intelligence researchers said on Aug 30 they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools.

The LAION research dataset is a huge index of online images and captions that’s been a source for leading AI image-makers such as Stable Diffusion and Midjourney.
