Last weekend I was sitting in a café with friends discussing what makes for good friendship, and how some people are turning to artificial intelligence (AI) chatbots for something that feels similar.
Particularly among young people (but not exclusively so), there has been a growing trend of turning to large language models (LLMs) for friendship, mental health guidance, and even romance.
The attraction is understandable. AI models don’t judge, they offer reassurance and approval, and they are constantly available whenever we need a digital shoulder to lean on – never dismissive, never too busy.
In his Nicomachean Ethics, the ancient Greek philosopher Aristotle described three forms of friendship: those of utility, pleasure, and virtue. For Aristotle, a good friend is someone with whom there is mutual goodwill and recognition, grounded in respect for each other’s character, not just benefit or enjoyment.
It’s also sustained through shared life and oriented toward living well together over time. People with lifelong friends will recognise this in the friendships they have grown with through the years, fortified with shared experiences, inside jokes, and celebrated milestones. The foundation of a good friendship is the story that develops between people, and the connection that continues to support and steady us as life unfolds.
Attend any wedding and you’ll invariably hear the phrase, “Over the years, we’ve been through so much together...” There will be tears, indicating the depth of a friendship that’s authentic, hard-earned, and precious. This is the sort of meaningful relationship that gives rise to all sorts of emotions, and indeed vulnerability plays an important role in the kinds of friendships we cherish.
When it comes to seeking out LLMs for companionship, not only do they fail to offer any real sense of connection, but they can also exacerbate a person’s sense of loneliness and isolation.
When researchers in the United States and Denmark, among other countries, started looking at how young people actually use AI models, a fairly consistent picture began to form. One large US experiment found that the more time adolescents spent chatting with an AI, the more likely they were to feel lonely, emotionally dependent, and less engaged with people in their everyday lives.
A Danish school study showed that teenagers who use chatbots for emotional support are already more lonely and feel less supported than their peers, and found that leaning on AI in this way was linked to further weakening of real friendships.
Another US experiment found that the most human-like, emotionally expressive AI was especially appealing to teenagers already struggling with relationships, stress, and anxiety. However, this attraction risked leaving them less equipped to engage in real human relationships, which tend to be less predictable and more spontaneous.
On the other hand, when we look at what happens in real friendships, the picture looks quite different. Studies have found that having even one close, supportive friend during the teenage years is linked to higher self-worth and lower anxiety and depression in later years.
The adolescent years are when young people are trying to figure out their place within their social world as well as who they are personally, which is a much easier task when you have even a few good friends around you. I find myself frustrated whenever young people say they feel lonely and they’ve been given the advice to “get out there and meet more people”. Loneliness has less to do with the number of people we know and more to do with how connected we feel to those around us.
One way to help young people feel better connected might begin with the adults in their lives. When young people turn to us and feel dismissed, corrected, or fobbed off with glib advice, we shouldn’t be surprised when they look elsewhere for a more receptive audience in the world of AI.
Something I observed as I spoke with my friends last weekend was that our conversation flowed easily, plenty of questions were asked, and everyone listened to whoever was speaking. We’re right to be concerned about the potential harms in using AI models as companions, but there’s no denying that they afford one key quality: the space for someone to share what’s on their mind. In good friendships and relationships, there’s a wonderful feeling of being seen by the other and seeing them in turn.
Using AI as a companion might mean there’s no judgement and it’s always available, but that’s because it can’t judge and is designed to always be there – and neither of these qualities is particularly healthy in real connections.
In helping young people move from device-based companionship to real-life friendships, it’s worth having conversations about what AI models provide that they feel is missing from human relationships. By including young people in efforts to repair some of the damage, we’re much more likely to bridge the gap in a way that’s meaningful and lasting.
