Social media platforms and content providers, said Cybersecurity Malaysia chief executive officer Datuk Dr Amirudin Abdul Wahab, must be responsible for ensuring proper moderation, age verification, and protection mechanisms are in place – especially for minors.
At the government level, the Malaysian Communications and Multimedia Commission and the Royal Malaysia Police are actively monitoring social media platforms, identifying and removing hundreds of thousands of pieces of problematic content, including sexual exploitation materials, scams, bullying, and disinformation, he said.
“The government has been strengthening laws and regulations such as the newly introduced Online Safety Act 2024 (ONSA 2024) which seeks to hold social-media and digital-service providers accountable – requiring them to implement safeguards to protect minors and remove harmful content.
“Age-restriction policies are also in the works to raise the minimum age for using social media to 16 years old,” he said, adding that digital-safety awareness and education campaigns are also regularly conducted to keep children safe online.
The Digital Ministry has also formulated the National Cyber Ethics Module (ESN) which will be fully implemented in all schools via the Education Ministry’s Digital Educational Learning Initiative Malaysia (Delima) platform next month.
“These efforts signal a recognition by authorities that the digital environment presents real and growing risks, and that protecting children and adolescents requires systemic regulation, oversight, public awareness, and cross-sector collaboration,” he added.
While the government plays an important role, parents too must do their part, Asia Pacific University of Technology & Innovation (APU) School of Psychology head Vinorra Shaker stressed.
There are plenty of measures parents can adopt to guide and protect their children online, she added.
“Instead of just telling children what not to watch, teach them how to process what they see.
“Encourage children to question what they are seeing: Why was this content created? What message is it trying to send? Who benefits from this violence being shown?” she offered.
Parents, she said, can also teach children how to identify sensationalism, misinformation, and propaganda.
They should also prioritise emotional literacy and empathy through discussing the differences between fictional violence and the genuine, devastating real-world impact of aggression.
“Help them connect their online actions, be they words, threats or jokes, to the pain and trauma these inflict on others.
“Parents and educators must not be passive observers of media consumption; they must be active partners and give children a safe space to ask questions.
“When they encounter violent or disturbing scenes in news, movies, or games, seize the moment to have a discussion about the content,” she suggested.
CelcomDigi sustainability head Philip Ling said while risks vary by age, gender, online exposure and other factors, one of the most pervasive threats today is exposure to sexual content and harassment across platforms.
The risks to children online are both significant and growing, he warned, adding that online platform algorithms also play a part since these are designed to maximise user engagement, prioritising content that attracts clicks, likes, and shares.
“This engagement-driven design can unintentionally result in children being exposed to content that is not age-appropriate,” said Ling.
While many platforms offer privacy and parental controls, young users often lack the digital skills and understanding to activate them, he explained.
“This is why CelcomDigi carries out advocacy and educational programmes for different segments of society, especially children and youths, through partnerships with regulators, non-governmental organisations, and global tech leaders,” he said, adding that one such initiative is the ‘SAFE Internet’ talks at schools.
Many other digital platforms have also taken preventative action to protect children, he said.
These range from parental control tools and artificial intelligence (AI)-powered content filtering systems to automated detection and removal of content that violates safety guidelines.
“As technology and digital platforms evolve, new threats emerge and detection tools face limitations like false positive challenges and adversarial tactics used to trick detection systems.
“Therefore, child online safety requires a multi-faceted approach. Effective safeguards must combine technology with human moderation, clear laws and regulations,” he concluded.