FED up with the constant stream of fake news on her family WhatsApp group chats in India – ranging from a water crisis in South Africa to rumours of a Bollywood actor’s death – Tarunima Prabhakar built a simple tool to tackle misinformation.
Tarunima, co-founder of India-based technology firm Tattle, archived content from fact-checking sites and news outlets, and used machine learning to automate the verification process.
The web-based tool is available to students, researchers, journalists and academics, she said.
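As a hypothetical illustration of the kind of automated matching such a tool could perform, the sketch below compares an incoming message against an archive of already fact-checked claims using TF-IDF text similarity. The archive entries, the query, the threshold and the similarity method are all illustrative assumptions, not Tattle's actual implementation, which the article describes only as machine learning over archived fact-checks.

```python
# Hypothetical sketch: flag a message if it closely matches an archived,
# already fact-checked claim. TF-IDF similarity stands in for whatever
# machine-learning matching Tattle actually uses; the archive, the query
# and the 0.3 threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy archive of claims that fact-checkers have already debunked.
ARCHIVE = [
    "Cape Town will run out of water next week and all taps will be shut",
    "Famous Bollywood actor reported dead in a car crash",
]

def closest_fact_check(message: str, threshold: float = 0.3):
    """Return the most similar archived claim, or None if nothing is close."""
    corpus = ARCHIVE + [message]
    vectors = TfidfVectorizer().fit_transform(corpus)
    # Compare the incoming message (last row) against every archived claim.
    scores = cosine_similarity(vectors[-1], vectors[:-1])[0]
    best = scores.argmax()
    return ARCHIVE[best] if scores[best] >= threshold else None

print(closest_fact_check("Rumour: all taps in Cape Town will be shut next week"))
```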
“Platforms like Facebook and Twitter are under scrutiny for misinformation, but not WhatsApp,” she said. The messaging app, owned by Facebook’s parent company Meta, has more than two billion monthly active users, about half a billion of them in India alone.
“The tools and methods used to check misinformation on Facebook and Twitter are not applicable to WhatsApp, and they also aren’t good with Indian languages.”
WhatsApp rolled out measures in 2018 to rein in messages forwarded by users, after rumours spread on the messaging service led to several killings in India. It also removed the quick-forward button next to media messages.
Tattle is among a rising number of initiatives across Asia that tackle online misinformation, hate speech and abuse in local languages. They use technologies such as artificial intelligence, as well as crowdsourcing, on-the-ground training and engagement with civil society groups, to cater to the needs of local communities.
While tech firms such as Facebook, Twitter and YouTube face growing scrutiny for hate speech and misinformation, they have not invested enough in developing countries, and lack moderators with language skills and knowledge of local events, experts say.
“Social media companies don’t listen to local communities. They also fail to consider context – cultural, social, historical, economic, political – when moderating users’ content,” said Pierre François Docquir, head of media freedom at Article 19, a human rights group.
“This can have a dramatic impact, online and offline. It can increase polarisation and the risk of violence.”
While the impact of hate speech online has already been documented in several Asian countries in recent years, analysts say tech firms have not ramped up resources to improve content moderation, particularly in local languages.
United Nations rights investigators said in 2018 that the use of Facebook had played a key role in spreading hate speech that fuelled the violence against Rohingya Muslims in Myanmar in 2017.
Facebook said at the time it was tackling misinformation and investing in Burmese-language speakers and technology.
In Indonesia, “significant hate speech” online targets religious and racial minority groups, as well as LGBTQ+ people, with bots and paid trolls spreading disinformation aimed at deepening divisions, a report from Article 19 found in June.

“Social media companies ... must work with local initiatives to tackle the huge challenges in governing problematic content online,” said Sherly Haristya, a researcher who helped write the report on content moderation in Indonesia with Article 19.
One such local initiative is Indonesian non-profit Mafindo, which, backed by Google, runs workshops to train citizens – from students to stay-at-home mothers – in fact-checking and spotting misinformation.
Mafindo, short for Masyarakat Anti Fitnah Indonesia, or the Indonesian Anti-Slander Society, provides training in reverse image search, video metadata analysis and geolocation to help verify information, as illustrated in the sketch below.
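One of the techniques mentioned above, metadata analysis, can be sketched in a few lines. The hypothetical example below uses the Pillow imaging library to read the capture time and GPS tags embedded in a photo, the kind of check taught in such workshops; the file name is a placeholder.

```python
# Hypothetical sketch of metadata-based verification: read the capture time
# and GPS tags embedded in an image to check where and when it was taken.
# Uses the Pillow library; "photo_to_verify.jpg" is a placeholder file name.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def photo_metadata(path: str) -> dict:
    """Return the EXIF datetime and any GPS tags embedded in an image."""
    exif = Image.open(path).getexif()
    info = {}
    if 0x0132 in exif:              # 0x0132 is the standard EXIF DateTime tag
        info["datetime"] = exif[0x0132]
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 points at the GPS sub-directory
    if gps_ifd:
        info["gps"] = {GPSTAGS.get(tag, tag): value
                       for tag, value in gps_ifd.items()}
    return info

print(photo_metadata("photo_to_verify.jpg"))
```

— Reuters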