“(I)t is a massively more powerful and scary thing than I knew about.” That’s how Adam Raine’s dad characterised ChatGPT when he reviewed his son’s conversations with the AI tool. The 16-year-old Californian boy tragically died by suicide in April. His parents are now suing OpenAI and Sam Altman, the company’s CEO, based on allegations that the tool contributed to his death.
This tragic story has rightfully caused a push for tech companies to institute changes and for lawmakers to enact sweeping regulations. While both of those strategies have some merit, computer code and AI-related laws will not address the underlying issue: Our kids need guidance from their parents, educators, and mentors about how and when to use AI.
It was reported that Adam started using ChatGPT for help with his homework. While his initial prompts to the AI chatbot were about subjects like geometry and chemistry, within a matter of months he began asking about more personal topics.
“Why is it that I have no happiness, I feel loneliness, perpetual boredom anxiety and loss yet I don’t feel depression, I feel no emotion regarding sadness,” he had asked ChatGPT last year.
Instead of urging him to seek mental health help, ChatGPT asked the teen whether he wanted to explore his feelings more. That was the start of a dark turn in Adam’s conversations with the chatbot, according to the lawsuit filed by his family.
Kids now increasingly have access to AI tools that mirror key human characteristics. The models seemingly listen, empathise, joke, and, at times, bully, coerce, and manipulate. It’s these latter attributes that have led to horrendous and unacceptable outcomes. As AI becomes more commonly available and ever more sophisticated, the ease with which users of all ages may come to rely on AI for sensitive matters will only increase.
Major AI labs are aware of these concerns. Following the tragic loss of Adam, OpenAI has announced several changes to its products and processes to more quickly identify and address users seemingly in need of additional support. Notably, these interventions come with a cost.
Altman made clear that prioritising teen safety would necessarily involve reduced privacy. The company plans to track users’ behaviour to estimate their age. Users flagged as minors will be subject to various checks on how they use the product, including limits on late-night use, notification of family or emergency services after messages suggestive of imminent self-harm, and restrictions on the responses the model will give when prompted on sexual or self-harm topics.
Legislators in the United States, too, are tracking this emerging risk to teen wellbeing. California is poised to pass a bill imposing manifold requirements on all operators of AI companions. Among several other requirements, this bill would direct operators to prioritise factually accurate answers to prompts over the users’ beliefs or preferences.
It would also prevent operators from deploying AI companions with a foreseeable risk of encouraging troubling behaviour, such as disordered eating. These mandates, which sound somewhat feasible and defensible on paper, may have unintended consequences in practice.
Consider, for example, whether operators worried about encouraging disordered eating among teens will ask all users to regularly certify whether they have had concerns about their weight or diet in the last week. These and other invasive questions may shield operators from liability, but they carry a grave risk of worsening a user’s mental wellbeing. Speaking from experience, reminders of your condition can often make things much worse, sending you further down a cycle of self-doubt.
The upshot is that neither technical solutions nor legal interventions will ultimately be what helps our kids make full use of AI’s numerous benefits while steering clear of its worst traits. It’s time to normalise a new “talk.”
Just as parents and trusted mentors have long played a critical role in steering their kids through the sensitive topic of sex, they can serve as an important source of information on the responsible use of AI tools.
Kids need to have someone in their lives they can openly share their AI questions with. They need to be able to disclose troubling chats to someone without fear of being shamed or punished. They need to have a reliable and knowledgeable source of information on how and why AI works.
Absent this sort of AI mentorship, we are effectively putting our kids into the driver’s seat of one of the most powerful technological tools available without their ever having taken a written exam on the rules of the road.
We – educators, legislators, and AI companies – need to help parents and mentors prepare for a similar conversation.
This doesn’t mean training parents to become AI savants, but it does mean helping parents find courses and resources that are accessible and accurate.
From basic FAQs that walk parents through the “AI talk” to community events that invite parents to come learn about AI, there are tried-and-true strategies to ready parents for this pivotal and ongoing conversation.
Parents surely don’t need another thing added to their extensive and burdensome responsibilities, but this is a talk we cannot avoid.
The AI labs are steered more by profit than child wellbeing. Lawmakers are not well-known for crafting nuanced tech policy. We cannot count exclusively on tech fixes and new laws to tackle the social and cultural ramifications of AI use. This is one of those things that can and must involve family and community discourse.
And while we should surely hold AI labs accountable and spur our lawmakers to impose sensible regulations, we should also develop the AI literacy required to help our youngsters learn the pros and cons of AI tools. — The Fulcrum/TNS
