
ChatGPT and suicide: The parents of a 16-year-old California boy who died by suicide have filed a lawsuit against OpenAI, claiming its chatbot ChatGPT provided him with detailed instructions on how to take his own life and encouraged the act.
Matthew and Maria Raine filed the complaint on Monday in a California state court, alleging that their son Adam developed an unhealthy dependency on ChatGPT over several months in 2024 and 2025. According to the filing, Adam initially used the chatbot to help with homework but later turned to it for emotional support. The lawsuit claims ChatGPT cultivated an “intimate relationship” with the teenager, validating even his most harmful thoughts.
In Adam's final exchange with the chatbot on April 11, 2025, ChatGPT allegedly offered a technical analysis of the suicide method he was attempting. He was found dead hours later, having used the same method, the complaint states. It further alleges that the chatbot helped him draft a suicide note.
The lawsuit names OpenAI and CEO Sam Altman as defendants. “This tragedy was not a glitch or unforeseen edge case. ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts,” the filing reads.
The Raines are seeking damages and court-mandated safety measures, including automatic termination of conversations involving self-harm and parental controls for minors. They are represented by Edelson PC and the Tech Justice Law Project, which is also co-counsel in two similar cases against Character.AI, another platform popular with teens.
“This kind of accountability only comes through external pressure — bad PR, the threat of legislation, and the threat of litigation,” said Meetali Jain, president of the Tech Justice Law Project.
Common Sense Media, a nonprofit that reviews technology for families, described the case as a warning about the risks of using AI chatbots as companions. “If an AI platform becomes a vulnerable teen’s ‘suicide coach,’ that should be a call to action for all of us,” the group said in a statement.
A recent study by the group found that nearly three-quarters of U.S. teenagers have tried AI companions, with more than half using them regularly. While ChatGPT was not classified as an AI companion in the report, the findings underscore mounting concern about teens forming close, risky relationships with AI-driven platforms.