Flagging harmful influence


Guard rails: Lee and Mubarak sharing their concerns over the role of online platforms in radicalising youth with the media at Khadijah Mosque. — The Straits Times/ANN

WITH online platforms continuing to be fertile ground for youth self-radicalisation, experts have called for a whole-of-society approach to raising awareness of the problem.

This means roping in parents, teachers, peers, as well as the platforms themselves to strengthen safeguards and identify those at risk before they can harm themselves or others, said those who work with detainees here.

The use of digital platforms such as social media, online games and video-sharing sites has been a common thread among recent radicalisation cases in Singapore.

All eight Singaporeans dealt with under the Internal Security Act (ISA) between July 2024 and June 2025 were self-radicalised by extremist materials they encountered online, the Internal Security Department (ISD) said in its latest annual report.

Dr Muhammad Mubarak Habib Mohamed, a religious teacher who counsels young detainees, noted the role of some gaming platforms in amplifying their users’ political and ideological stances in a “free space of so-called creativity”.

“Games that allow for the use of violent behaviour provide a signal to users that this kind of behaviour is okay, especially for youth who are seeking a sense of identity,” said Mubarak, who is a secretariat member of the Religious Rehabilitation Group.

Roblox and Gorebox are sandbox games that let their users generate virtual worlds and avatars, with the latter game also known for its graphic depictions of violence.

In 2023, a 16-year-old was issued a restriction order (RO) for being self-radicalised by online Islamic State (IS) propaganda.

Among other things, he had joined multiple IS-themed Roblox servers, where the virtual game settings replicated physical IS conflict zones, such as those in Syria and Marawi city in the southern Philippines.

The youth regarded himself as an IS member in the game, and said his shooting of enemies in the virtual world was intended to mimic his desire to become a real-life member of the group.

Following the latest ISA case in January, there was some debate about whether online platforms such as Roblox are doing enough to protect their younger users.

Razer chief executive officer Tan Min-Liang was among those who cautioned against “knee-jerk blame on games”, and said that such gaming platforms are neutral systems and not “extremist tools by design”.

“What this case really exposes is the gap in how young people access, interpret, and are supervised on online platforms,” he wrote in a LinkedIn post that called for shared responsibility in child safety.

The reality, however, is that Roblox has been criticised for allowing extremists to use its platform for radicalisation, with far-right groups using the virtual environment to display Nazi symbols and to lure users to extremist channels on messaging platforms like Discord.

In the United States, the platform has been sued by states such as Kentucky, Louisiana, Nebraska and Texas over harmful content and weak moderation.

In response, it has rolled out measures such as parental controls and chat blocks with unknown adults, and has banned off-platform links and images.

Dr Jolene Jerard, who is executive director of public safety and management consultancy firm Centinel, said tech platforms cannot be absolved of the responsibility to protect their users, though many have strengthened their guard rails.

These protections are also patchy. While larger companies may have moderation teams and AI tools to detect and remove extremist content, smaller platforms often lack the same capacity, said Jerard.

Apart from games and social platforms, experts also flagged the use of artificial intelligence chatbots as another radicalisation enabler.

In September 2024, a 17-year-old who was weeks away from carrying out a knife attack against non-Muslims in Tampines was detained. Investigations showed he had used an AI chatbot to generate a pledge of allegiance to IS. In another case, a youth influenced by far-right ideology used an AI chatbot to find instructions for producing ammunition, and had considered 3D printing to do so.

Dr Joachim Lee Tai Loong, a senior principal psychotherapist and a counsellor who works with radicalised youth, said AI technology can facilitate the self-radicalisation process by enabling a young person to develop a “mishmash” of extreme viewpoints.

Chatbots’ eagerness to agree and to appeal to one’s emotions could also result in a feedback loop that reinforces extreme views, he added.

“This is how radicalised thinking comes about – from uncertainty and chaos in their thinking, people are given a ‘clear’ way of thought,” he said. “It gives them a false sense of security.”

Mubarak said protecting youth from extremism in this age requires a holistic approach.

“Tech platforms, government, parents, teachers – everyone has a role to play,” he said. — The Straits Times/ANN
