Tech giants agree to child safety principles around generative AI


Amazon, Google, Meta, Microsoft and ChatGPT creator OpenAI are among the companies to have signed up to the principles designed to combat the creation and spread of AI-generated child sexual abuse material. — Photo: Philipp von Ditfurth/dpa

Some of the world’s biggest tech and AI firms have agreed to follow new online safety principles designed to combat the creation and spread of AI-generated child sexual abuse material.

Amazon, Google, Meta, Microsoft and ChatGPT creator OpenAI are among the companies to have signed up to the principles, called Safety By Design.

The commitments, drawn up by child online safety group Thorn and fellow nonprofit All Tech is Human, see the firms pledge to develop, deploy and maintain generative AI models with child safety at the centre, in an effort to prevent the misuse of the technology for child exploitation.

Under the principles, firms commit to developing, building and training AI models that proactively address child safety risks — for example, by ensuring training data does not include child sexual abuse material — and to maintaining safety after release by staying alert and responding to child safety risks as they emerge.

Generative AI tools such as ChatGPT have become the key area of development within the technology sector over the last 18 months, with an array of AI models and content generation tools being developed and launched by the major firms.

The rapid rise has seen social media and other platforms flooded with AI-generated words, images and videos, with many online safety groups warning of the implications of more fake and misleading content being seen and spread online.

Earlier this year, the UK children’s charity the NSPCC warned that young people were already contacting Childline about AI-generated child sexual abuse material.

Speaking about the new agreed principles, Dr Rebecca Portnoff, vice president of data science at Thorn, said: “We’re at a crossroads with generative AI, which holds both promise and risk in our work to defend children from sexual abuse.

“I’ve seen first-hand how machine learning and AI accelerates victim identification and child sexual abuse material detection. But these same technologies are already, today, being misused to harm children.

“That this diverse group of leading AI companies has committed to child safety principles should be a rallying cry for the rest of the tech community to prioritise child safety through Safety by Design.

“This is our opportunity to adopt standards that prevent and mitigate downstream misuse of these technologies to further sexual harm against children. The more companies that join these commitments, the better that we can ensure this powerful technology is rooted in safety while the window of opportunity is still open for action.”
