BIG tech companies tend to act as “de facto private sovereigns” by defining, regulating and enforcing the boundaries of permissible speech, says Initiate.my in its latest special report, “Far-Right Extremism and Tech Accountability in Malaysia”.
Globally, the report notes, a small number of technology companies dominate the digital public square and do not operate as neutral intermediaries.
“Their algorithms and policies actively shape how information is produced, ranked, recommended, shared and monetised. Driven by commercial incentives and, at times, ideological alignment, these platforms increasingly influence user behaviour and public discourse by amplifying some voices and suppressing others,” it states.
The result, it argues, is “concentrated rule-making power” that is exercised with limited accountability and outside democratic oversight.
“In this capacity, they function as setters of norms, interpreters of laws, arbiters of taste, adjudicators of disputes and enforcers of whatever rules they choose to establish.”
The report also says big tech’s business model profits from harm through engagement-driven algorithms.
“The engagement economy rewards content that activates strong emotional responses such as outrage, fear or resentment, because these reliably generate more clicks, shares and longer screen time, translating into higher advertising revenue.
“These same emotional triggers are systematically exploited by far-right propaganda, hate speech and disinformation.”
The study further finds that artificial intelligence (AI) remains inadequate for content moderation in “linguistically diverse and politically sensitive contexts”.
Using sentiment analysis to examine online discourse through both machine-assisted methods (ChatGPT and Mesolitica’s supervised sentiment models) and human-led approaches (manual labelling), the report highlights a central trade-off between the two.
“AI systems excel at processing large volumes of data but struggle with nuance, especially in underrepresented languages and culturally specific contexts.
“Human moderation, while slower and more labour-intensive, is essential for recognising implicit meaning, coded language and contextual cues that algorithms often miss.”
Based on the researchers’ experience, the report concludes that a hybrid model – particularly one led by human judgement – produces more precise distinctions between harmful and benign content.
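The report does not publish its moderation pipeline, but the trade-off it describes can be illustrated with a minimal sketch: an automated classifier handles clear-cut posts in bulk, while low-confidence or coded-language items are routed to a human review queue. The classifier logic, threshold and function names below are hypothetical illustrations, not drawn from the report.

```python
# Hypothetical sketch of a hybrid moderation workflow: a machine classifier
# handles high-confidence cases, while ambiguous or coded-language posts are
# deferred to human reviewers. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Decision:
    post: str
    label: str        # "harmful", "benign" or "needs_human_review"
    confidence: float

CONFIDENCE_THRESHOLD = 0.85  # below this, defer to a human moderator

def machine_classify(post: str) -> tuple[str, float]:
    """Stand-in for a supervised sentiment/toxicity model.

    In practice this could call a hosted LLM or a locally fine-tuned
    classifier; here it just flags a few obvious keywords.
    """
    lowered = post.lower()
    if any(w in lowered for w in ("kill", "exterminate")):
        return "harmful", 0.95
    if any(w in lowered for w in ("thanks", "great news")):
        return "benign", 0.90
    # Coded or ambiguous language tends to produce low-confidence scores.
    return "benign", 0.40

def moderate(posts: list[str]) -> list[Decision]:
    decisions = []
    for post in posts:
        label, conf = machine_classify(post)
        if conf < CONFIDENCE_THRESHOLD:
            # Human-led step: reviewers resolve implicit meaning and
            # contextual cues the automated model cannot reliably interpret.
            label = "needs_human_review"
        decisions.append(Decision(post, label, conf))
    return decisions

if __name__ == "__main__":
    sample = [
        "Great news for the community, thanks everyone!",
        "Time to exterminate them all.",
        "You know what needs to be done about 'those people'.",  # coded language
    ]
    for d in moderate(sample):
        print(f"{d.label:20s} ({d.confidence:.2f})  {d.post}")
```

In such a setup the machine pass keeps throughput high, while the human queue supplies the contextual judgement the report says algorithms often miss.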
The report also identifies inconsistent policy enforcement as a persistent challenge, allowing harmful material to slip through, especially when expressed in “ambiguous, coded or subcultural language”.
