
According to a recent study, nearly 30% of Internet users modified potentially offensive comments after receiving a nudge from a moderating algorithm. — Khosro/Shutterstock/AFP Relaxnews
Hostile and hateful remarks are thick on the ground on social networks in spite of persistent efforts by Facebook, Twitter, Reddit and YouTube to tone them down. Now researchers at the OpenWeb platform have turned to artificial intelligence to moderate Internet users' comments before they are even posted. The method appears to be effective: a third of users modified the text of their comments after a nudge from the new system warned that what they had written might be perceived as offensive.
The study conducted by OpenWeb and Perspective API analysed 400,000 comments that some 50,000 users were preparing to post on sites like AOL, Salon, Newsweek, RT and Sky Sports.
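The nudge flow described above can be sketched in a few lines. This is a minimal illustration, not OpenWeb's actual implementation: the request payload mirrors the shape of Google's Perspective API `AnalyzeComment` call (which returns a toxicity score between 0 and 1), and the 0.7 threshold is a hypothetical value chosen for illustration.

```python
# Minimal sketch of a pre-posting "nudge" flow. NOT OpenWeb's implementation:
# the payload follows the general shape of Perspective API's AnalyzeComment
# request, and the threshold below is an assumed value for illustration.

NUDGE_THRESHOLD = 0.7  # hypothetical cut-off for showing a warning


def build_analyze_request(comment_text: str) -> dict:
    """Build a Perspective-style analyze request for the TOXICITY attribute."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }


def should_nudge(toxicity_score: float, threshold: float = NUDGE_THRESHOLD) -> bool:
    """Return True when the score warrants warning the user before posting."""
    return toxicity_score >= threshold


def nudge_message(toxicity_score: float):
    """Warning text shown to the user, or None when no nudge is needed."""
    if should_nudge(toxicity_score):
        return "Your comment may be perceived as offensive. Edit before posting?"
    return None
```

In the study's terms, the interesting number is what happens after the warning fires: roughly a third of nudged users went back and revised their text rather than posting it unchanged.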