However, OpenAI says that’s an extremely small slice of its 800+ million ChatGPT users.
For the first time, OpenAI is revealing a rough estimate of how many people talk to ChatGPT about suicide and other problematic topics.
On Monday, the company published a blog post about "strengthening" ChatGPT's responses to sensitive conversations amid concerns the AI program can mistakenly steer teenage users toward self-harm and other toxic behavior. Some have also complained to regulators about the chatbot allegedly worsening people's mental health issues.
To tackle the problem, OpenAI said it first needed to measure the scale of these problematic conversations, given that ChatGPT has more than 800 million weekly active users.
Overall, OpenAI found that "mental health conversations that trigger safety concerns, like psychosis, mania, or suicidal thinking, are extremely rare." But because ChatGPT's user base is so vast, even a small percentage can represent hundreds of thousands of people.