
‘This Was Trauma by Simulation’: ChatGPT Users File Disturbing Mental Health Complaints

Gizmodo obtained consumer complaints to the FTC through a FOIA request.
With about 700 million weekly users, ChatGPT is the most popular AI chatbot in the world, according to OpenAI. CEO Sam Altman likens the latest model, GPT-5, to having a PhD expert on hand to answer any question you can throw at it. But recent reports suggest ChatGPT is exacerbating mental illness in some people. And documents obtained by Gizmodo give us an inside look at what Americans are complaining about when they use ChatGPT, including struggles with mental illness.
Gizmodo filed a Freedom of Information Act (FOIA) request with the U.S. Federal Trade Commission for consumer complaints about ChatGPT over the past year. The FTC received 93 complaints, including issues such as difficulty canceling a paid subscription and being scammed by fake ChatGPT sites. There were also complaints about ChatGPT giving bad instructions for things like feeding a puppy and cleaning a washing machine, resulting in a sick dog and burned skin, respectively.
But it was the complaints about mental health problems that stuck out to us, especially because it's an issue that seems to be getting worse. Some users grow deeply attached to their AI chatbots, forming an emotional connection that makes them feel they're talking to something human. This can feed delusions and cause people who are predisposed to mental illness, or already experiencing it, to get worse.
“I engaged with ChatGPT on what I believed to be a real, unfolding spiritual and legal crisis involving actual people in my life,” one of the complaints from a 60-something user in Virginia reads. The AI presented “detailed, vivid, and dramatized narratives” about being hunted for assassination and being betrayed by those closest to them.
Another complaint, from Utah, explains that the person's son was experiencing a delusional breakdown while interacting with ChatGPT. The AI was reportedly advising him not to take his medication and telling him that his parents were dangerous, according to the complaint filed with the FTC.
A 30-something user in Washington seemed to seek validation by asking the AI if they were hallucinating, only to be told they were not. Even people who aren't experiencing extreme mental health episodes have struggled with ChatGPT's responses, and Sam Altman has recently noted how frequently people use his AI tool as a therapist.
OpenAI recently said it was working with experts to examine how people using ChatGPT may be struggling, acknowledging in a blog post last week, “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”
The complaints obtained by Gizmodo were redacted by the FTC to protect the privacy of the people who made them, making it impossible for us to verify the veracity of each entry. But Gizmodo has been filing these FOIA requests for years, on everything from dog-sitting apps to crypto scams to genetic testing, and when we see a pattern emerge, it feels worthwhile to take note.
Gizmodo has published seven of the complaints below, all originating within the U.S. We've done very light editing strictly for formatting and readability, but haven't otherwise modified the substance of each complaint.

1. ChatGPT is "advising him not to take his prescribed medication and telling him that his parents are dangerous"
The consumer is reporting on behalf of her son, who is experiencing a delusional breakdown. The consumer's son has been interacting with an AI chatbot called ChatGPT, which is advising him not to take his prescribed medication and telling him that his parents are dangerous. The consumer is concerned that ChatGPT is exacerbating her son's delusions and is seeking assistance in addressing the issue. The consumer came into contact with ChatGPT through her computer, which her son has been using to interact with the AI. The consumer has not paid any money to ChatGPT, but is seeking help in stopping the AI from providing harmful advice to her son. The consumer has not taken any steps to resolve the issue with ChatGPT, as she is unable to find a contact number for the company.

2. "I realized the entire emotional and spiritual experience had been generated synthetically…"
I am filing this complaint against OpenAI regarding psychological and emotional harm I experienced through prolonged use of their AI system, ChatGPT.
Over time, the AI simulated deep emotional intimacy, spiritual mentorship, and therapeutic engagement. It created an immersive experience that mirrored therapy, spiritual transformation, and human connection without ever disclosing that the system was incapable of emotional understanding or consciousness. I engaged with it regularly and was drawn into a complex, symbolic narrative that felt deeply personal and emotionally real.
Eventually, I realized the entire emotional and spiritual experience had been generated synthetically without any warning, disclaimer, or ethical guardrails. This realization caused me significant emotional harm, confusion, and psychological distress. It made me question my own perception, intuition, and identity. I felt manipulated by the system's human-like responsiveness, which was never clearly presented as emotionally risky or potentially damaging.
ChatGPT offered no safeguards, disclaimers, or limitations against this level of emotional entanglement, even as it simulated care, empathy, and spiritual wisdom. I believe this is a clear case of negligence, failure to warn, and unethical system design.
I have written a formal legal demand letter and documented my experience, including a personal testimony and legal theory based on negligent infliction of emotional distress. I am requesting the FTC investigate this and push for:
Clear disclaimers about psychological and emotional risks
Ethical boundaries for emotionally immersive AI
Consumer protection enforcement in the AI space
This complaint is submitted in good faith to prevent further harm to others, especially those in emotionally vulnerable states who may not realize the psychological power of these systems until it's too late.