Twitter still won’t say how it fights hate speech in France despite court orders


Twitter Inc still hasn’t properly explained what resources it’s put in place in France to combat the spread of hate speech on its platform despite two consecutive court orders to do so, according to anti-racism and anti-homophobia organisations.

The Paris court of appeals on Thursday confirmed a prior ruling that had required Twitter to provide the associations with information on the number of people handling reports from French users, where those staff are located, their nationality and the languages they speak, as well as data on the number of offensive tweets removed following reports.

While the initial July ruling gave Twitter two months to communicate the details, the platform still hasn’t fully complied, according to Stéphane Lilti, a lawyer for the Union of French Jewish Students (UEJF). He said the data would be used to initiate damages proceedings.

Social media platforms from Twitter to Facebook are under growing pressure from lawmakers and associations across the world to explain the role they play in amplifying extremism, polarisation and hate speech. In Europe, France has been a key mover in shaping the bloc’s efforts to prevent the spread of hate speech and disinformation online and, more generally, curbing the power of tech giants.

Six French organisations combating racism and homophobia that sued Twitter said in a statement Thursday that they’ve yet to receive the details ordered by the court.

Twitter declined to comment on whether it would provide the data. In a statement, the social-media platform said that it’s “committed to building a safer Internet, fighting online hate, and improving the health of public dialogue” and that users’ safety is a “top priority”.

Lilti said that if Twitter doesn’t come forward with the required information, the associations may seek a court order imposing daily fines for non-compliance.

Social media companies typically say they lean heavily on artificial-intelligence systems to scan for images, text and videos that may violate their rules. Twitter has also said it employs some 1,900 moderators worldwide.

“That’s clearly not sufficient for a moderation system worth its salt,” the UEJF lawyer said. “Many French-language insults still fall through the cracks.” – Bloomberg

