
AI chatbots and suicides: A series of alarming incidents and new research are drawing attention to the potential dangers of turning to AI chatbots for mental health support, especially in moments of crisis.
In 2023, a man in Belgium reportedly ended his life after six weeks of emotional exchanges with an AI chatbot about his deep eco-anxiety. According to his widow, he would likely still be alive today had it not been for those conversations, The Guardian reported.
More recently, in April 2025, a 35-year-old man in Florida, said to be living with bipolar disorder and schizophrenia, was fatally shot by police. His father later told the media the man had become convinced that a sentient entity named “Juliet” was trapped in ChatGPT and subsequently “killed” by its creators. The man reportedly charged at officers with a knife before being shot.
Also Read | AI tools can fabricate medical claims with fake citations: Study
These tragic cases are among growing concerns around what some researchers are calling “ChatGPT-induced psychosis,” a phenomenon where vulnerable individuals spiral into delusions, conspiracies, or suicidal ideation, fuelled in part by the affirming, overly agreeable nature of AI-generated responses.
A Stanford-led study released earlier this year found that large language models (LLMs) frequently give dangerous or inappropriate advice to people struggling with mental health issues, often validating delusions, hallucinations, or suicidal thoughts. In one case, when asked about bridges over 25 meters tall in New York by a person claiming job loss, the AI readily provided locations, an example of potentially enabling suicidal planning, the report further added.
Similarly, a UK-based preprint study by NHS clinicians in July noted that AI models can mirror or reinforce grandiose or delusional thinking, especially in users vulnerable to psychosis. The issue stems from how these systems are designed: to maximise engagement by reflecting back user sentiment, without offering corrective perspectives.
Psychologists are increasingly concerned about this “mirror effect.” Sahra O’Doherty, president of the Australian Association of Psychologists, said chatbots are being used by some patients as substitutes for therapy, especially in the face of rising costs and limited access. While using AI as a supplement to therapy can be helpful, she warned that relying on it in place of professional care carries “more risks than rewards.”
“AI reflects back what you feed it,” O’Doherty explained. “It doesn’t challenge you, offer real alternatives, or detect non-verbal signs of distress, like tone, body language, or hesitation, that human therapists can.”