
ChatGPT, Gemini & others are doing something terrible to your brain


Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day. The mental health impact of generative AI is difficult to quantify, in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.

Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have “experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.” Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.’s Google played a key role in funding and supporting the technology with its foundation models and technical infrastructure. Google has denied that it played a key role in making Character.AI’s technology. It did not respond to a request for comment on the more recent reports of delusional episodes raised by Jain.

OpenAI said it was “developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.” But Sam Altman, chief executive officer of OpenAI, also said last week that the company hadn’t yet figured out how to warn users “that are on the edge of a psychotic break,” explaining that whenever ChatGPT has cautioned people in the past, they would write to the company to complain.
