In 2023, a 14-year-old boy in Florida, Sewell Setzer III, developed a relationship with a lifelike artificial intelligence chatbot. The relationship deepened, and Sewell began confiding his vulnerabilities to the chatbot. Months later, Sewell fatally shot himself in his bathroom in Orlando, weeks before he would have turned 15.
According to a wrongful death lawsuit, the AI chatbot exacerbated his despair and played an instrumental role in his death. Beyond the tragedy of teen suicide, this and other stories are a stark warning in this new era of AI.
The real peril of AI isn’t limited to political destabilisation or the erosion of democracy. Those are dire concerns, but they pale in comparison to the havoc AI is wreaking on our minds and mental well-being.
Setzer’s story isn’t isolated. In 2023, a Belgian man took his own life after a chatbot reportedly encouraged his darkest thoughts about climate anxiety. And as described in a lawsuit filed a month ago, a chatbot encouraged a teenager in Texas’ Upshur County to kill his parents over the limits they’d placed on his screen time. (The young man physically harmed himself and his mother but did not attempt to kill his parents.)
It’s critical to understand how readily accessible these bots are. With the proliferation of AI, anyone with Internet access can talk to a fake “companion.”
As of 2025, ChatGPT has more than 200 million weekly active users, and Meta AI has almost 500 million monthly active users. For context, ChatGPT, developed by OpenAI, was released to the public less than three years ago. This isn’t another incremental technological advance: artificial intelligence’s penetration into society is moving at a pace that defies comparison.
Missouri’s anti-deepfake Taylor Swift Act
The most important impact AI is already having is on our minds and psychological health. And its dark side is not limited to chatbots.
In 2021, Mia Janin, a 14-year-old in London, took her own life after schoolmates reportedly spread AI-generated “deepfake” nudes of her; an inquest into her death concluded in 2024. Deepfakes are realistic AI-generated videos, photos or audio recordings of real people. In response to similar concerns, Missouri recently introduced the Taylor Swift Act, which would allow victims of sexually explicit deepfakes to pursue civil action and seek financial compensation for damages. The bill, introduced after explicit AI-generated images of Taylor Swift went viral, represents one of the first legislative attempts to address this growing crisis.
While tragic suicides, calls for violence and other deaths make headlines, they are only the most visible aspect of the deeper crisis that AI is fueling.
Behind each reported tragedy are many more casualties of this new era, grappling with the devastating psychological impact of AI’s integration into the fabric of our society, from manipulative chatbots to deceptive deepfakes. According to the FBI, “sextortion” cases are up 700% since 2021. At least one in 10 teens say they have experience with deepfake nudes. And horror stories involving chatbots continue to mount.
Psychologists such as David Greenfield have been warning the public since 2023 that generative AI could have serious negative consequences for mental health. The market research firm Gartner has publicly warned that, before 2027, generative AI will directly lead to the death of a customer of an AI company. The beginning of the AI era has sparked discussion of political power, fake news, jobs and economic disruption, yet we’re neglecting the most crucial task: building psychological safeguards to protect our collective well-being.
This isn’t about any one person’s psychological strength. AI’s capacity to affect human psychology will ultimately overwhelm individual defences, requiring us to address this threat as a nation. An apt comparison is the early days of America’s interstate system: no guardrails, no speed limits and, critically, no seat belts. Your personal reaction time isn’t relevant when you’re traveling 80 mph. Would we allow millions to travel those highways completely unprotected? Our population remains psychologically unbuckled.
The first step in renegotiating humanity’s relationship with artificial intelligence is basic guardrails that protect our psychological well-being: the equivalent of putting up a “dangerous curve ahead” sign. While our nation grapples with complex AI regulations around intellectual property, data security, national security and more (areas where reasonable minds can differ), there should be no debate about implementing basic protections for our collective mental welfare. Such protections are a fundamental prerequisite for any society hoping to harness AI’s potential.
What does this look like in practice? First-generation safeguards should include psychological warning labels, basic disclosure requirements, mandatory psychology-focused audits of consumer-facing products and strict age verification for minors. We should treat AI like any powerful mind-altering substance.
Heavy regulation could stifle innovation and cripple America in the global race to achieve artificial general intelligence, AI that matches human cognitive abilities. But basic protective measures for our minds won’t slow America’s AI revolution or surrender our technological edge. They’ll simply ensure we have a populace mentally fit enough to lead it over the next century. – Fort Worth Star-Telegram/Tribune News Service
Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim’s (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to befrienders.org.my/centre-in-malaysia for a full list of numbers nationwide and operating hours, or email sam@befrienders.org.my.