KUALA LUMPUR: The government is examining proposals to regulate the licensing of artificial intelligence (AI) applications to prevent the technology from being misused for child sexual abuse materials (CSAM).
Communications Minister Datuk Fahmi Fadzil said AI applications are currently not licensed by the Malaysian Communications and Multimedia Commission (MCMC).
“We are looking at whether there is a need for enforcement by licensing so that appropriate action can be taken concerning CSAM materials (generated by AI).
“We can do it, but now the focus is on those who are sharing, misusing and sending such lewd or grossly offensive materials, which is an offence under the law.
“We leave this matter to the Digital Ministry to draft a specific Act to regulate the use of AI applications,” he said at a press conference after launching the 2026 Internet Safety Day (HKI) celebration at Taman Tasik Titiwangsa here, yesterday.
Recently, Unicef stated that at least 1.2 million youngsters disclosed having had their images manipulated into sexually explicit deepfakes in the past year.
This was according to a study across 11 countries conducted by the UN agency, Interpol and the ECPAT global network working to end the sexual exploitation of children worldwide.

As such, Unicef called on governments to expand the definitions of CSAM to include AI-generated content and criminalise its creation, procurement, possession and distribution.
Additionally, the Centre for Countering Digital Hate (CCDH) estimated that Elon Musk’s Grok AI generated about three million sexualised images in two weeks, including 23,000 depicting children.
CCDH chief executive Imran Ahmed described the situation as disturbing, adding that “until regulators and lawmakers do their jobs and create a minimum expectation of safety, this will continue to happen”.
Following this, the MCMC temporarily blocked Grok on Jan 11 while ordering it to introduce safeguards in compliance with local laws. The chatbot was reinstated on Jan 23.
