OpenAI has disclosed new data revealing that a small but significant number of ChatGPT users are engaging with the AI chatbot about serious mental health concerns — including suicidal thoughts.
According to the company, 0.15% of ChatGPT’s weekly active users have “conversations that include explicit indicators of potential suicidal planning or intent.” Against ChatGPT’s reported base of more than 800 million weekly active users, that works out to more than one million people. A similar proportion reportedly shows “heightened emotional attachment” to the chatbot, and hundreds of thousands more exhibit signs of psychosis or mania in their weekly conversations.
OpenAI described these interactions as “extremely rare” but acknowledged that they still affect hundreds of thousands of people every week. The company released the data alongside an update on its work to make ChatGPT respond more safely to vulnerable users, developed in consultation with more than 170 mental health professionals.
OpenAI says the latest version of GPT-5 now delivers “desirable responses” to mental health-related prompts about 65% more often than the previous model, and achieves 91% compliance with the company’s internal safety benchmarks in conversations about suicide.
The company is also adding new metrics to track emotional reliance and non-suicidal mental health emergencies, as well as strengthening parental controls and age-detection systems for minors using ChatGPT.
Despite these improvements, questions remain about the impact of AI chatbots on mental health. OpenAI is currently facing a lawsuit from the parents of a 16-year-old boy who, they allege, confided his suicidal thoughts to ChatGPT in the weeks before taking his own life, and U.S. state attorneys general have warned the company that it must do more to protect young users.