In a bizarre reply to a banal request for homework help, Google’s Gemini AI chatbot explicitly asked its user to perish.
Fears about AI rising up and annihilating humans are a bit premature at this point, but neural networks can still behave in unpredictable and shocking ways. Case in point: Google's Gemini AI chatbot just unsubtly told a human to die—but at least it was polite enough to say "please" first.
Jokes aside, it really happened. The Gemini conversation is still up on Google's website if you want to see it for yourself. Here's a screenshot of the full response that the AI gave to Vidhay Reddy, a 29-year-old student who had been asking the chatbot for help with his homework:
The Agent Smith-like response came out of nowhere, following a muddled prompt that appears to have been Vidhay pasting his homework into the AI and hoping it would give him the answer.