
A man from Belgium reportedly ended his own life following a correspondence with an AI chatbot hosted on the app Chai.
“Without Eliza [the chatbot], he would still be here,” the widow told Belgian outlet La Libre, according to Fox News.
The man, whom the paper referred to as Pierre, was apparently fixated on the perceived doom of climate change and took his concerns to the Chai chatbot. He was reportedly in his 30s at the time of his untimely death, worked as a health researcher, and left behind a wife and two children.
A report by DNAIndiaNews indicated that Pierre’s wife found a chat log of his messages with Eliza, a popular bot on the Chai AI app.
According to the outlet, the chatbot made no effort to dissuade Pierre from killing himself after he asked whether such an extreme act would help mitigate climate change.
“If you wanted to die, why didn’t you do it sooner?” the bot allegedly posed to Pierre just before his death.
Breitbart News reported that the incident has raised concerns about the responsibility of businesses and governments to assess and regulate the risks of AI, especially as they pertain to mental health.
According to the outlet, Chai co-founders William Beauchamp and Thomas Rianlan sprang into action following the tragedy, adding a crisis intervention feature that responds to users whenever the topic of suicide is so much as hinted at. The report also discussed the ELIZA effect, the tendency of users to assume that AI systems are capable of human-like emotions and understanding.
Emily M. Bender, a professor of linguistics at the University of Washington, cautioned people not to turn to AI chatbots when struggling with mental health.
“Large language models are programs for generating plausible sounding text given their training data and an input prompt,” she explained. “They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks.”