Google is using its AI (artificial intelligence) technology to combat the rise of child sexual abuse material online.
The search giant is making the technology available for free to NGOs and its industry partners via a new Content Safety API, which it claims will significantly improve their ability to detect and report child sexual abuse material.
By using deep neural networks for image processing, the toolkit will assist reviewers, who usually have to sift through tonnes of images, by identifying the most likely child sexual abuse material for immediate attention.
Google says quick identification of newer images means that children who are in danger of being sexually abused can be identified and protected from further harm.
It claims the technology can also “keep up with offenders” by targeting content that has not been previously identified as child sexual abuse material.
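The triage approach described above, scoring images with a classifier and surfacing the most likely matches for immediate human attention, can be illustrated with a minimal sketch. The function names, score threshold, and stand-in classifier below are illustrative assumptions, not Google's actual Content Safety API:

```python
# Hypothetical sketch of classifier-based review triage.
# `classify` is assumed to return a probability that an image
# contains abusive material; the 0.8 threshold is arbitrary.

def triage(images, classify, threshold=0.8):
    """Order images so the most likely matches reach reviewers first."""
    scored = [(classify(img), img) for img in images]
    # Highest-scoring images go to the front of the review queue.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    queue = [img for score, img in scored]
    # Images above the threshold are flagged for immediate attention.
    flagged = [img for score, img in scored if score >= threshold]
    return flagged, queue

# Usage with a stand-in classifier that looks up precomputed scores:
fake_scores = {"a.jpg": 0.95, "b.jpg": 0.10, "c.jpg": 0.85}
flagged, queue = triage(fake_scores, lambda name: fake_scores[name])
```

The point of the design is that human reviewers still make every final decision; the model only reorders their workload so urgent cases surface first.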
The Internet Watch Foundation, an England-based charity working to stop online child sexual abuse, welcomed Google's new AI tool in a statement.
“We, and in particular our expert analysts, are excited about the development of an AI tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material. By sharing this new technology, the identification of images could be speeded up, which in turn could make the Internet a safer place for both survivors and users,” its CEO Susie Hargreaves said.
In January 2018, The Star reported that Malaysia had the highest number of IP addresses in South-East Asia – close to 20,000 – uploading and downloading photographs and visuals of child pornography.
For more information on the new AI technology by Google, check out www.blog.google.