Discord adopts facial recognition in child safety crackdown

SAN FRANCISCO: Messaging platform Discord announced Feb 9 it will implement enhanced safety features for teenage users globally, including facial recognition, joining a wave of social media companies rolling out age verification systems.

The rollout, beginning in early March, will make teen-appropriate settings the default for all users, with adults needing to verify their age to loosen protections including content filters and bans on direct messaging, the company said.

The San Francisco-based platform, popular among gamers, will use facial age estimation technology and identity verification through vendor partners to determine users' ages.

Software running in the background will also help estimate users' ages without always requiring direct verification.

"Nowhere is our safety work more important than when it comes to teen users," said Savannah Badalich, Discord's head of product policy.

Discord insisted the measures came with privacy protections, saying video selfies for age estimation never leave users' devices and that submitted identity documents are deleted quickly.

The platform said it successfully tested the measures in Britain and Australia last year before expanding worldwide.

The move follows similar actions by rivals facing intense scrutiny over child safety, and comes after an Australian ban on under-16s using social media that other countries are moving to replicate.

Resorting to facial recognition and other technologies addresses the reality that self-reported age has proven unreliable, with minors routinely lying about their birthdates to circumvent platform safety measures.

Gaming platform Roblox in January began requiring facial age verification globally for all users to access chat features, after facing multiple lawsuits alleging the platform enabled predatory behavior and child exploitation.

Meta, which owns Instagram and Facebook, has deployed AI-powered methods to determine age and introduced "Teen Accounts" with automatic restrictions for users under 18.

Mark Zuckerberg's company removed over 550,000 underage accounts in Australia alone in December ahead of that country's under-16 social media ban.

TikTok has implemented 60-minute daily screen time limits for users under 18 and notification cutoffs based on age groups.

The industry-wide shift comes as half of US states have enacted or introduced legislation involving age-related social media regulation, though courts have blocked many of the restrictions on free speech grounds.

The changes come the same day a trial over children's social media addiction opens in Los Angeles, with plaintiffs alleging that Meta's and YouTube's platforms were designed to be addictive to minors. – AFP
