UK ditches ban on 'legal but harmful' online content in favour of free speech


A 3D-printed logo of Meta, Facebook's new rebrand, is seen in front of a displayed Twitter logo in this illustration taken on November 2, 2021. REUTERS/Dado Ruvic/Illustration

LONDON (Reuters) - Britain will not force tech giants to remove content that is "legal but harmful" from their platforms after campaigners and lawmakers raised concerns that the move could curtail free speech, the government said on Monday.

Online safety laws would instead focus on the protection of children and on ensuring companies removed content that was illegal or prohibited in their terms of service, it said, adding that it would not specify what legal content should be censored.

Platform owners, such as Facebook-owner Meta and Twitter, would be banned from removing or restricting user-generated content, or suspending or banning users, where there is no breach of their terms of service or the law, it said.

The government had previously said social media companies could be fined up to 10% of turnover or 18 million pounds ($22 million) if they failed to stamp out harmful content such as abuse even if it fell below the criminal threshold, while senior managers could also face criminal action.

The proposed legislation, which had already been beset by delays and rows before the latest version, would remove state influence on how private companies managed legal speech, the government said.

It would also avoid the risk of platforms taking down legitimate posts to avoid sanctions.

Digital Secretary Michelle Donelan said she aimed to stop unregulated social media platforms damaging children.

"I will bring a strengthened Online Safety Bill back to Parliament which will allow parents to see and act on the dangers sites pose to young people," she said. "It is also freed from any threat that tech firms or future governments could use the laws as a licence to censor legitimate views."

Britain, like the European Union and other countries, has been grappling with the problem of legislating to protect users, and in particular children, from harmful user-generated content on social media platforms without damaging free speech.

The revised Online Safety Bill, which returns to parliament next month, puts the onus on tech companies to take down material in breach of their own terms of service and to enforce their user age limits to stop children circumventing authentication methods, the government said.

If users were likely to encounter controversial content such as the glorification of eating disorders, racism, anti-Semitism or misogyny not meeting the criminal threshold, the platform would have to offer tools to help adult users avoid it, it said.

Only if platforms failed to uphold their own rules or remove criminal content could a fine of up to 10% of annual turnover apply.

Britain said late on Saturday that a new criminal offence of assisting or encouraging self-harm online would be included in the bill.

($1 = 0.8317 pounds)

(Reporting by Paul Sandle; Editing by Alex Richardson)
