PETALING JAYA: The Artificial Intelligence (AI) Governance Bill is a necessary and timely step toward responsible AI deployment in Malaysia, as clearer laws give confidence and certainty to investors and developers while more users adopt AI in their daily lives, say experts.
Lawyer Thulasy Suppiah, who specialises in cybersecurity, AI, data centres and emerging technologies, said that clear rules can help reduce regulatory ambiguity, allowing companies to design, deploy and invest in AI without fear of sudden bans, inconsistent enforcement or reputational risk.
“A legal framework signals that Malaysia welcomes AI-driven investment responsibly, with accountability across the AI lifecycle. Without clear rules, trust erodes – and trust is essential for sustainable AI growth and foreign investment.
“It ensures innovation grows with safeguards, not at the expense of women, children and vulnerable groups who are often the first to be victims of misuse of AI.
“Embedding accountability across the AI lifecycle also strengthens protection against misuse, including exploitation, harassment and deception,” she said in response to Malaysia’s first AI Governance Bill.
Asked about the challenges in coordinating with other agencies and laws on AI and threats such as deepfakes and AI-enabled scams, Thulasy said AI risks cut across multiple domains, including data protection, cybersecurity, content safety, fraud and consumer protection, requiring close coordination.
As such, she said aligning enforcement while avoiding overlap or gaps between agencies is complex, but necessary to ensure real-world protection, especially for women and children.
“The challenge is balancing speed, clarity, and proportionality without stifling legitimate innovation,” she said.
Cybersecurity expert Fong Choong Fook said the Bill should include risk classifications for AI systems and mandate impact assessments for high-risk AI.
Independent audits and conformity assessments are needed to ensure compliance alongside constant monitoring.
Fong said the Bill should enhance coordination efforts with existing enforcement regulations.
“It should supplement instead of duplicate. The key is ensuring accountability across the entire AI lifecycle.”
Malaysia, he said, should adopt a hybrid model when it comes to regulating AI.
This would comprise the formation of a central AI authority to set standards and coordinate oversight while sector regulators, such as those in the finance and telecommunication industries, carry out enforcement through their own domains.
“This provides consistency without losing expertise,” he said.
On deepfake content, Fong said watermarks must be made mandatory for high-risk and high-reach content.
“We also need stronger platform takedown obligations, where platforms must comply with local regulations and take swift action to remove non-compliant content upon request,” he said.
Universiti Putra Malaysia (UPM) AI specialist Azree Nazri said the Bill should mandate security-by-design standards to mitigate risks such as automated scams, system abuse and AI-enabled attacks.
“High-risk AI systems should undergo mandatory adversarial testing, strict model access controls and continuous monitoring with incident reporting,” he said.
On AI-enabled scams, Azree said telecom-style deterrents could form part of new measures to curb them.
He also stressed avoiding regulatory overlap to ensure aligned enforcement, prevent duplicate investigations, and deliver consistent oversight.