Instagram moves to face rising tide of sextortion scams


SAN FRANCISCO: Meta, the parent company of Facebook and Instagram, on Thursday announced new measures to fight sextortion, a form of online blackmail where criminals coerce victims, often teens, into sending sexually explicit images of themselves.

The measures include stricter controls on who can follow or message teen accounts, as well as safety notices in Instagram direct messages and Facebook Messenger flagging potentially suspicious conversations with accounts based in other countries.

The measures beef up Instagram's "Teen Accounts," which were announced last month and are designed to better protect underage users from the dangers associated with the photo-sharing app.

The company is also restricting scammers' ability to view follower lists and interactions, and preventing screenshots in private messages.

Additionally, Meta is rolling out its nudity protection feature globally in Instagram direct messages, blurring images that may contain nudity and prompting teens to think carefully before sending one.

In certain countries, including the US and Britain, Instagram will show teens a video in their feeds about how to spot sextortion scams.

This initiative aims to help teens recognise signs of sextortion scams, such as individuals who come on too strong, request photo exchanges, or attempt to move conversations to different apps.

"The dramatic rise in sextortion scams is taking a heavy toll on children and teens, with reports of online enticement increasing by over 300% from 2021 to 2023," said John Shehan of the US National Center for Missing & Exploited Children.

"Campaigns like this bring much-needed education to help families recognize these threats early," he added on a Meta blog page announcing the measures.

The FBI said earlier this year that online sextortion was a growing problem, with teenage boys the primary victims and offenders often located outside the US.

From October 2021 to March 2023, US federal officials identified at least 12,600 victims, with 20 of the cases involving suicides.

Meta's move to protect children came amid mounting global pressure on the social media giant founded by Mark Zuckerberg and its rivals.

Last October, some 40 US states filed a complaint against Meta, accusing its platforms of harming the "mental and physical health of young people" through the risks of addiction, cyberbullying and eating disorders.

For the time being, Meta declines to verify the age of its users itself, citing privacy concerns, and is urging legislation that would require ID checks at the level of a smartphone's operating system, i.e. by Google's Android or Apple's iOS. – AFP
