Meta to reduce role of outside content moderators in favour of AI


Meta Platforms Inc will soon cut back on its use of third-party vendors to help with content moderation, relying instead on advanced artificial intelligence systems to detect and remove posts that violate the company’s terms of service. 

Meta, which owns Facebook and Instagram, has used AI for years to detect spam and abusive posts at scale on its networks, and has also paid human moderators from companies like Accenture Plc to manually review and remove inappropriate posts. 

The social media giant recently started testing more advanced AI tools built on large language models to help sift through posts and enforce its content rules. Those more advanced systems have improved its enforcement efforts, the company wrote in a blog post published March 19. The AI is better at spotting scams, identifying celebrity impersonators and catching adult sexual solicitation, among other things, Meta said, adding that the new systems “consistently perform better than our current methods of content enforcement.”

Now the company plans to deploy those tools more broadly across its various apps – and will cut back on outside moderators as a result. 

“As we do this, we’ll reduce our reliance on third-party vendors for content enforcement and focus on strengthening our internal systems and workforce,” it said in the blog post. Meta will still use human reviewers for nuanced cases, saying that AI “doesn’t replace human judgment,” but it will rely more on in-house experts.

“People will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement,” the company said.

The transition will take a “few years,” Meta added, though it did not name the specific third-party vendors it plans to cut back on.

Meta has long relied on thousands of third-party contractors for its content moderation efforts – jobs that can expose human reviewers to some of the Internet’s darkest and most disturbing images and videos. But the company has also started to rely on AI for more and more tasks, including some engineering-related ones. Chief financial officer Susan Li said in January that the firm has seen a "30% increase in output per engineer” thanks to the adoption of AI agents that assist with coding. – Bloomberg
