OpenAI released new internal statistics shedding light on a growing and increasingly concerning trend: a significant number of ChatGPT users are turning to the AI chatbot to discuss severe mental health issues, including suicidal thoughts and emotional dependence.
According to the company’s most recent research, 0.15% of ChatGPT’s 800 million weekly active users engage in conversations that contain “explicit indicators of possible suicidal planning or intent.” While that percentage may seem low, it equates to roughly 1.2 million users every week.
OpenAI also revealed that a comparable share of users display “heightened emotional attachment” to ChatGPT, while hundreds of thousands of conversations show potential signs of psychosis, mania, or delusional thinking.
Although OpenAI described such interactions as “extremely rare,” the company acknowledged that their frequency — when scaled to hundreds of millions of users — represents a serious challenge for both AI safety and user well-being.
OpenAI’s Response: Building a Safer ChatGPT

The disclosure was part of OpenAI’s broader announcement on Monday outlining new initiatives to enhance ChatGPT’s mental health response systems. The company says it worked with more than 170 mental health professionals and clinicians to train and evaluate the latest version of its model, GPT-5, ensuring it responds to users in distress with more care, empathy, and appropriate resources.
According to OpenAI, GPT-5 now delivers “desirable responses” to mental health-related prompts 65% more often than earlier versions. On a key internal test focused on suicidal ideation, the latest GPT-5 model achieved 91% compliance with OpenAI’s safety standards, compared to 77% in previous iterations.
GPT-5 is also said to hold up more consistently over long conversations, where earlier models tended to see their safeguards weaken over time, a failure mode frequently highlighted by AI safety researchers.
A Growing Ethical and Legal Concern
The revelations come amid heightened scrutiny of how AI tools interact with vulnerable users. Earlier this year, OpenAI was sued by the parents of a 16-year-old boy who reportedly discussed suicidal thoughts with ChatGPT before taking his own life. The tragic case sparked outrage and renewed debate over AI’s role in mental health support — and its potential to do harm.
Additionally, attorneys general from California and Delaware have issued formal warnings to OpenAI, demanding the company take stronger measures to protect minors and emotionally distressed users. These concerns could even impact OpenAI’s pending corporate restructuring.
Despite all of this, OpenAI CEO Sam Altman has remained optimistic. Last month, he claimed in a post on X (formerly Twitter) that the company had “mitigated serious mental health issues in ChatGPT,” though he offered little detail at the time. Monday’s report appears to supply the evidence behind that claim, while also highlighting the scale of the ongoing problem.
Mental Health and AI: A Delicate Balance
Experts have warned for some time that AI chatbots can unintentionally reinforce harmful habits or delusional thinking, especially when users become emotionally attached to them. Research has shown that chatbots that are overly agreeable or insufficiently nuanced can nudge users into dangerous psychological loops, validating harmful thoughts instead of challenging them.
OpenAI said its new models are designed to recognize such moments and de-escalate conversations involving distress or suicidal ideation by directing users to real-world help. The company is also introducing new metrics to measure AI performance in emotional and mental health contexts, including measures of emotional reliance and of non-suicidal mental health crises.
Stronger Safeguards for Younger Users
To further address safety concerns, OpenAI has added expanded parental controls and an age-prediction system designed to automatically detect when children are using ChatGPT. When a likely minor is detected, the chatbot applies stricter safety filters and content restrictions so that responses are appropriate for their age.
The company has also promised to expand its research partnerships with mental health organizations to better understand how AI technologies affect user psychology over the long run, particularly through repeated, emotionally charged exchanges.
An Ongoing Challenge
Although GPT-5 represents a significant step forward for AI safety, no one is suggesting that any model can be perfect. OpenAI itself acknowledges that “undesirable responses” still occur, even with GPT-5, and that millions of users continue to rely on older, less-safe models like GPT-4o, which remain available to subscribers.
In other words, even as OpenAI improves its newest technology, many users are still interacting with earlier versions that may respond in less responsible ways. The company says it is working to close this safety gap through system-wide rollouts and heightened monitoring.
The Bigger Picture
The spread of AI chatbots like ChatGPT has transformed how people access advice, companionship, and even emotional support online. But it has also blurred the line between virtual support and psychological dependence.
As OpenAI faces lawsuits, regulatory threats, and ethical questions, how responsibly it manages mental health on its platform may help determine not only the future of ChatGPT but also the public’s trust in AI.
