Meta’s ‘bonfire’ of safety policies a danger to teens, charity says


After Meta chief executive Mark Zuckerberg met Donald Trump in November following Trump's US election win, the social media giant is moving Facebook and Instagram towards a more conservative-leaning focus on free speech. — Photo: Jens Büttner/dpa

LONDON: Meta’s recent “bonfire of safety measures” risks taking Facebook and Instagram back to where they were when Molly Russell died, says the UK charity set up in the name of the teen who died by suicide as a result of harmful social media posts.

The Molly Rose Foundation was set up by Molly's family after she took her own life in 2017, aged 14, having viewed harmful content on social media sites including Meta-owned Instagram.

Earlier this month, Meta boss Mark Zuckerberg announced sweeping changes to the company's policies in the name of "free expression", including plans to scale back content moderation: the firm will end automated scanning for some types of posts and instead rely on user reports to remove certain sorts of content.

Campaigners called the move “chilling” and said they were “dismayed” by the decision, which has been attributed to Zuckerberg’s desire to forge a positive relationship with new US President Donald Trump.

Andy Burrows, chief executive of the Molly Rose Foundation, said: “Meta’s bonfire of safety measures is hugely concerning and Mark Zuckerberg’s increasingly cavalier choices are taking us back to what social media looked like at the time that Molly died.

“Amid a strategic rollback of their safety commitments, preventable harm is being driven by Silicon Valley but the decision to stop it in its tracks now sits with the regulator and Government.”

The foundation has urged UK regulators to strengthen the Online Safety Act by bolstering requirements around content moderation, including requiring firms to proactively scan for all types of content relating to intense depression, suicide and self-harm.

It also urged regulators to ensure that Meta's newly loosened hate speech policies are not allowed to apply to children, and to seek clarification on whether Meta can change its rules without going through its usual internal processes, after reports suggested Zuckerberg made the policy changes himself, leaving internal teams "blindsided". Ofcom should ensure this cannot happen again, the foundation said.

In a statement, a Meta spokesperson said: “There is no change to how we define and treat content that encourages suicide, self-injury and eating disorders.

“We don’t allow it and we’ll continue to use our automated systems to proactively identify and remove it.

“We continue to have Community Standards, around 40,000 people working on safety and security to help enforce them, and Teen Accounts in the UK, which automatically limit who can contact teens and the types of content they see.” – PA Media/dpa
