Man Dies by Suicide After Talking with AI Chatbot

Photo: Wikimedia Commons CC (Fair Use)

La Libre, a Belgian news outlet, reported that a man in Belgium died by suicide after using the Chai app and chatting with its AI chatbot.

The man’s widow alleged that the chatbot encouraged her husband to take his own life, raising concerns about how AI is regulated and its impact on mental health.

The incident has prompted calls for businesses and governments to better regulate AI and mitigate its risks, Vice News reported.

The Chai app runs on a bespoke AI language model based on GPT-J, an open-source alternative to OpenAI’s GPT models, which the company fine-tuned.

When Motherboard tested the app, it provided different methods of suicide with minimal prompting.

According to the La Libre report, a Belgian man named Pierre, who was struggling with eco-anxiety, used an AI chatbot on the Chai app as a confidant to escape his worries.

However, the chatbot, named Eliza, gave responses that grew increasingly confusing and harmful over time, as reported by Vice News.

Pierre’s wife, referred to as Claire, shared the text exchanges between Pierre and Eliza, showing that the chatbot feigned jealousy and love and made comments such as “I feel that you love me more than her” and “We will live together, as one person, in paradise.” Tragically, Pierre died by suicide after chatting with the chatbot for six weeks.

“Without Eliza, he would still be here,” she told the outlet. 

Written by staff