What we know about ChatGPT’s new parental controls


A file photo of large banners with a ChatGPT advertising campaign in Chicago. OpenAI has introduced parental controls to its AI chatbot, ChatGPT, as teens increasingly turn to the platform for help with their schoolwork, daily life and mental health. — Jamie Kelter Davis/The New York Times

OpenAI on Sept 29 introduced parental controls to its artificial intelligence chatbot, ChatGPT, as teens increasingly turn to the platform for help with their schoolwork, daily life and mental health.

The new features came after a wrongful-death lawsuit was filed against OpenAI by the parents of Adam Raine, a 16-year-old who died in April in California. ChatGPT had supplied Adam with information about suicide methods in the final months of his life, according to his parents.

ChatGPT’s parental controls, announced in early September, were developed by OpenAI with Common Sense Media, a nonprofit that provides age-based ratings of entertainment and technology for parents.

Here’s what to know about the new features.

Parents can oversee their teens’ accounts.

To set controls, parents have to invite their child to link their ChatGPT account to a parent’s account, according to a new resource page.

Parents will then gain some controls over the child’s account, such as the option to reduce sensitive content.

Parents can set specific times when ChatGPT can be used. The bot’s voice mode, memory saving and image generation features can be turned on and off.

There is also an option to prevent ChatGPT from using its conversations with teens to improve its models.

Parents will be notified of potential self-harm.

In a statement Monday, OpenAI said that parents would be notified by email, text message or push alert if ChatGPT recognizes “potential signs that a teen might be thinking about harming themselves,” unless the parent has opted out of such notifications. Parents would receive a warning of a safety risk without specific information about their child’s conversations.

ChatGPT has been trained to encourage general users to contact a help line if it detects signs of mental distress or self-harm. When it detects such signs in a teen, a “small team of specially trained people reviews the situation,” OpenAI said in the statement. The statement did not specify who those people were.

OpenAI added that it was working on a process to reach law enforcement and emergency services if ChatGPT detects a threat but cannot reach a parent.

“No system is perfect, and we know we might sometimes raise an alarm when there isn’t real danger, but we think it’s better to act and alert a parent so they can step in than to stay silent,” the statement said.

Teens can bypass the controls.

OpenAI said Monday that it was still developing an age prediction system to help ChatGPT automatically apply “teen-appropriate settings” if it thinks a user is younger than 18.

With the new features, a parent will be notified if a teen disconnects their account from a parent’s account. But that won’t stop a teen from using the basic version of ChatGPT without an account.

Adam, the California teen who died in April, had learned to bypass ChatGPT’s safeguards by saying he would use the information to write a story.

“Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them,” OpenAI said.

In the statement with OpenAI on Monday, Robbie Torney, senior director for AI programs at Common Sense Media, said the parental controls would “work best when combined with ongoing conversations about responsible AI use, clear family rules about technology, and active involvement in understanding what their teen is doing online.”

(The New York Times sued OpenAI and Microsoft in 2023 for copyright infringement of news content related to AI systems. The two companies have denied those claims.) – ©2025 The New York Times Company

Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim’s (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to befrienders.org.my/centre-in-malaysia for a full list of numbers nationwide and operating hours, or email sam@befrienders.org.my

This article originally appeared in The New York Times.

