Experts: AI can help users wade through misinformation


PETALING JAYA: The use of artificial intelligence (AI) in messaging platforms such as WhatsApp could help curb the spread of fake news and stem misinformation, experts say.

Social media analyst Assoc Prof Dr Sara Chinnasamy said the Aifa (Artificial Intelligence Fact-Check Assistant) chatbot can stop irresponsible users from spreading disinformation that could be used to fuel intolerance and undermine democracy.

“The chatbot is a quiz-like game that is fun, easy to access and can also help people find a way through the blizzard of misinformation currently disrupting life.

“Misinformation – or semi-fake news – is categorised as information which is misleading but not intentionally fabricated, unlike disinformation or fake news.

“But cracking down on the spread of fake news on the platform has not been easy as messages are encrypted. This allows users to exchange videos, texts and images without any verification from independent fact-checkers or even WhatsApp itself,” she said when contacted yesterday.

Earlier yesterday, Communications Minister Fahmi Fadzil said Aifa would be able to verify text messages sent via WhatsApp in four languages, namely English, Bahasa Malaysia, Mandarin and Tamil.

The Aifa initiative is spearheaded by the Malaysian Communications and Multimedia Commission (MCMC). It is accessible via the Sebenarnya.my portal and on WhatsApp at 03-8688 7997.

Centre for Independent Journalism (CIJ) executive director Wathshlah Naidu said the move is a positive effort to curb the proliferation and spread of disinformation, especially on messaging platforms.

While the chatbot caters to several of Malaysia's major languages, she said it does not cover the ethnic languages of Sabah and Sarawak.

“Given the dynamics of our linguistic diversity, the AI system would need to be adept at discerning nuanced language context, as coded language, euphemisms, local slang and the use of visuals or emojis – which evolve quickly with social and cultural shifts – may be misinterpreted.

“It is critical to ensure its accuracy and be able to avoid the misclassification of legitimate information or news as false.

“It would also require advanced technology to distinguish deepfakes and ‘malinformation’,” she added.

‘Malinformation’ is usually defined as accurate information that is missing important context, which is disseminated with the malicious intent to mislead, such as dubiously edited videos.

Wathshlah cautioned that the use of Aifa could position the government as the sole arbiter of truth, saying the threshold for what counts as disinformation must be clearly defined to prevent the government from misusing the tool to control prevailing narratives.

She added that clear safeguards will be necessary to prevent the government, especially the MCMC, from becoming the sole arbiter of what constitutes the truth.

“Pro-government bias in the development and deployment of the system would undermine public trust,” she argued.

Another key concern raised by Wathshlah is the fact that the government is exempted from the scope of the Personal Data Protection Act 2010.

“As such, we would require further guarantees that the data and personal information of the users will be protected.

“We would also need further disclosures and transparency on the obligations and accountability measures holding the developers, deployers and those managing data storage to account. This is to prevent data leaks and misuse of user data,” she added.

Universiti Sains Malaysia’s (USM) Prof Dr Selvakumar Manickam said that while AI-powered fact-checking systems are showing promise in combating the spread of fake news, they might not be the perfect solution.

“These sophisticated systems can rapidly analyse vast amounts of data, identify patterns and cross-reference information to flag potentially false claims.

“This speed and efficiency are invaluable, allowing human fact-checkers to focus on more complex cases.

“Nevertheless, AI struggles with context, sarcasm and humour, and can be tripped up by deepfake and manipulated media.

“Bias within training data can also lead to inaccurate assessments,” said Prof Selvakumar, who is Cybersecurity Research Centre Director at USM.

He added that cybercriminals constantly evolve their tactics, including through the use of AI.

“This requires AI used by the authorities to be continuously updated. Language barriers and the risk of over-reliance on AI, leading to a decline in critical thinking, are further concerns,” he said.
