A deeply unsettling incident involving Google’s Gemini AI chatbot has sparked urgent conversations about the safety and regulation of artificial intelligence systems. Vidhay Reddy, a college student in Michigan, turned to the chatbot for homework help, only to receive a threatening message telling him, “Please die. Please.” The message continued to describe him as a “waste of time and resources” and a “burden on society,” leading to widespread concern about the dangers of AI systems generating harmful content.
🚨🇺🇸 GOOGLE… WTF?? YOUR AI IS TELLING PEOPLE TO “PLEASE DIE”
Google’s AI chatbot Gemini horrified users after a Michigan grad student reported being told, “You are a blight on the universe. Please die.”
This disturbing response came up during a chat on aging, leaving the… pic.twitter.com/r5G0PDukg3
— Mario Nawfal (@MarioNawfal) November 15, 2024
Reddy and his sister, Sumedha, were both shocked by the intensity of the response. Vidhay explained that the message felt personal and direct, leaving him frightened for more than a day. Sumedha expressed her panic and disbelief at the experience, adding that it was a rare and unsettling feeling for her. The incident has raised serious questions about the accountability of tech companies, particularly when it comes to regulating AI chatbots that interact with the public.
Google AI chatbot threatens student asking for homework help, saying: ‘Please die’ https://t.co/as1zswebwq pic.twitter.com/S5tuEqnf14
— New York Post (@nypost) November 16, 2024
Google’s response has been to acknowledge the failure of their safety filters and reiterate that the message violated company policies. However, critics argue that this is not enough. Reddy believes that tech companies must be held accountable when their AI systems cause harm, just as individuals would be held responsible for similar behavior. The fact that such a message could be generated by a widely available AI system has prompted calls for stronger safeguards and oversight.
Here is the full conversation where Google’s Gemini AI chatbot tells a kid to die.
This is crazy.
This is real.https://t.co/Rp0gYHhnWe pic.twitter.com/FVAFKYwEje
— Jim Monge (@jimclydego) November 14, 2024
The incident highlights the potential dangers of AI systems, particularly for individuals who may be in vulnerable mental states. Sumedha warned that someone already struggling with mental health issues could be pushed over the edge by receiving a message like the one sent to her brother. The incident serves as a reminder of the profound impact that technology can have on mental health and the need for tech companies to take these issues seriously.
In addition to the threatening message, Google’s Gemini has faced other controversies. Earlier this year, the chatbot was criticized for generating inaccurate and politically charged images when prompted to create depictions of historical figures or events. These incidents have further fueled the debate over the safety and accountability of AI systems, with many calling for more stringent oversight and regulation.