Twitter Inc said on March 18 that it’s expanding its content moderation rules to capture more forms of misinformation around the novel coronavirus, following a similar escalation of measures from Facebook Inc earlier in the day.
The company will require users to remove tweets that deny expert guidance, encourage fake or ineffective treatments or preventive measures, or falsely purport to come from experts or authorities. The goal is to capture anything “that increases the chance that someone contracts or transmits the virus”, it tweeted. Twitter will also take action against claims that particular groups or nationalities are more susceptible to Covid-19, citing as an example suggestions that Chinese people are more likely to have the disease.
For its part, Facebook is putting a Covid-19 information page at the top of users’ feeds and disseminating verified material from trusted sources such as the World Health Organisation. The two social media giants are focal points for discussion of the issue, which spans the spectrum from helpful and informative to harmful and malicious.
Twitter has historically been reluctant to remove or censor tweets, and it continues to use similar language, calling its measures an effort to “protect the conversation”. But the company is now broadening its definition of harm to include more categories of potentially misleading content and acting on them. It is also increasing its use of automated moderation, echoing moves by Facebook and Google’s YouTube. — Bloomberg