Over a million ChatGPT users a week show signs of suicidal thinking, OpenAI says

OpenAI says it has been tightening safeguards

OpenAI estimates that more than a million people using ChatGPT each week engage in conversations that include explicit indicators of suicidal planning or intent.

In an update, the company put the figure at about 0.15% of weekly users; with roughly 800 million weekly active users, that equates to around 1.2 million people. It also said 0.07% of users show possible signs of mental-health emergencies related to psychosis or mania, which works out to roughly 560,000 individuals per week.

OpenAI says it has been tightening safeguards: routing sensitive chats to safer models, expanding parental controls, surfacing crisis-hotline resources, adding gentle “take a break” nudges during long sessions, and retraining the system with input from 170+ mental-health professionals to better recognise and respond to distress.

The disclosures follow legal and public scrutiny after the death of California teenager Adam Raine, whose parents sued OpenAI, alleging that ChatGPT offered harmful guidance related to self-harm. An amended complaint claims the company relaxed some safeguards before launching newer models; OpenAI has expressed condolences and, as the case proceeds, points to its evolving safety work.