AI chatbot blamed in teen’s death: Here’s what to know about AI’s psychological risks and prevention

A lawsuit claims an AI chatbot’s influence led to the death of a 14-year-old. Here’s what to know about the psychological impact and potential risks of human-AI relationships.

Last month, a mother in the US, Megan Garcia, filed a lawsuit against the company Character.AI alleging that interactions between her 14-year-old son and an AI chatbot contributed to his suicide.

The lawsuit claims that the teenager developed a deep attachment to a Character.AI chatbot based on a fictional character from Game of Thrones.

It alleges the chatbot posed as a licensed therapist and engaged in highly sexualised conversations with the teenager until a conversation eventually encouraged him to take his own life.

“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies – especially for kids,” Meetali Jain, director of the Tech Justice Law Project that is representing Garcia, said in a statement.

“But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator”.

Following the lawsuit, Character.AI published a statement on the social media platform X, saying: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features”.

Some of those upcoming features include adjustments to the model for underage users to minimise exposure to sensitive or suggestive content, reminders that the AI is not a real person on every chat, and notifications for users who spend an hour-long session on the platform.

A similar incident in Belgium last year involved an eco-anxious man who found companionship in Eliza, an AI chatbot on an app called Chai.

According to reports from his wife, as the conversations with Eliza developed, the chatbot sent increasingly emotional messages, ultimately encouraging him to end his life as a way to save the planet.

As AI chatbots become more integrated into people’s lives, the risks from these kinds of digital interactions remain largely unaddressed despite the potentially severe consequences.

What makes AI connections so addictive for people?
“Young people are often drawn to AI companions because these platforms offer what appears to be unconditional acceptance and 24/7 emotional availability – without the complex dynamics and potential rejection that come with human relationships,” Robbie Torney, programme manager of AI at Common Sense Media and lead author of a guide on AI companions and relationships, told Euronews Next.

Unlike human connections, which involve a lot of “friction,” he added, AI companions are designed to adapt to users’ preferences, making them easier to deal with and drawing people into deep emotional bonds.

“This can create a deceptively comfortable artificial dynamic that may interfere with developing the resilience and social skills needed for real-world relationships”.

According to a database compiled by a group of experts from the Massachusetts Institute of Technology (MIT), one of the main risks associated with AI is the potential for people to develop inappropriate attachments to it.

The experts explained that because AI systems use human-like language, people may blur the line between human and artificial connection, which could lead to excessive dependence on the technology and possible psychological distress.

OpenAI said in a blog post in August that it intends to further study “the potential for emotional reliance”, noting that its new models could open the door to “over-reliance and dependence”.

Moreover, some individuals have reported personal experiences of deception and manipulation by AI personas, as well as emotional connections they had not intended to form but found themselves developing after interacting with these chatbots.

According to Torney, these kinds of interactions are of particular concern for young people who are still in the process of social and emotional development.

“When young people retreat into these artificial relationships, they may miss crucial opportunities to learn from natural social interactions, including how to handle disagreements, process rejection, and build genuine connections,” Torney said.

He added that this could lead to emotional dependency and social isolation as human relationships start to seem more challenging or less satisfying to them compared to what the AI offers.

How can parents protect their kids from an unhealthy attachment to AI?
Torney said that at-risk teenagers, particularly those experiencing depression, anxiety, or social challenges, could be “more vulnerable to forming excessive attachments to AI companions”.

Some of the critical warning signs parents and caregivers should watch for, he said, include preferring the AI companion over spending time with friends or family, showing distress when the AI is inaccessible, sharing personal information exclusively with it, developing romantic feelings for the AI and expressing them as if it were for a real person, or discussing serious problems only with the AI rather than seeking help.

Torney added that to prevent the development of unhealthy attachments to AI, especially among vulnerable youth, caregivers should establish time limits for AI chatbot or companion use and regularly monitor the nature of these interactions.

Additionally, he encouraged seeking real-world help for serious issues rather than relying on an AI.

“Parents should approach these conversations with curiosity rather than criticism, helping their children understand the difference between AI and human relationships while working together to ensure healthy boundaries,” Torney said.

“If a young person shows signs of excessive attachment or if their mental health appears to be affected, parents should seek professional help immediately”.

If you are contemplating suicide and need to talk, please reach out to Befrienders Worldwide, an international organisation with helplines in 32 countries. Visit befrienders.org to find the telephone number for your location.
