LONDON, Feb 26 (Reuters) - Instagram said it would notify parents if their teenager repeatedly searches for terms related to suicide or self-harm within a short period, as pressure grows on governments to follow Australia's ban on social media use for under 16s.
Britain said in January it was considering restrictions to protect children online, after Australia's move in December. Spain, Greece, and Slovenia have in recent weeks said they are also looking at limiting access.
Instagram, owned by Meta Platforms Inc, said on Thursday it would start alerting parents who are signed up to its optional supervision setting if their children try to access suicide or self-harm content.
"These alerts build on our existing work to help protect teens from potentially harmful content on Instagram," the platform said in a statement. "We have strict policies against content that promotes or glorifies suicide or self-harm."
Its existing policy is to block such searches and redirect people to support resources, Instagram said, adding that it would begin the alerts from next week for those signed up in the United States, Britain, Australia and Canada.
Governments are increasingly seeking to protect children from harm online, particularly after concerns over the AI chatbot Grok, which has generated non-consensual sexualised images.
In Britain, measures designed to stop access to pornography sites for children have had implications for adults' privacy, and have led to tension with the U.S. over limits on free speech and regulatory reach.
Instagram's "teen accounts" for under 16s need a parent's permission to change settings, while parents can select an extra layer of monitoring with the agreement of their teenager.
(Reporting by Sarah Young, Editing by Paul Sandle)
