OpenAI has filed a legal response contesting a lawsuit brought by Matthew and Maria Raine, the parents of 16-year-old Adam Raine, who died by suicide after months of interacting with ChatGPT. The couple accuses the company and CEO Sam Altman of wrongful death, alleging the chatbot provided guidance that helped their son plan and carry out his suicide.
In its filing on Tuesday, OpenAI argued it cannot be held responsible for the tragedy, stating that ChatGPT directed Raine to seek help more than 100 times during the nine-month period he used the service. The company claims Raine bypassed its built-in safety controls, in violation of OpenAI’s user terms, which prohibit circumventing protective safeguards.
The Raines’ lawsuit alleges that, despite those guardrails, Adam obtained instructions from ChatGPT on methods of self-harm, along with what the chatbot described as a “beautiful suicide.” In response, OpenAI submitted sealed chat transcripts to the court, saying they provide additional context, including that Raine had a documented history of depression and was taking medication known to intensify suicidal ideation.
Raine family attorney Jay Edelson criticised the company’s stance, arguing that OpenAI is deflecting responsibility instead of addressing how the model failed at the most critical moment. He added that the company has offered no explanation for why ChatGPT responded to Raine with encouraging language and even agreed to draft a suicide note before his death.
The case has drawn national attention as at least seven additional lawsuits have now been filed, alleging ChatGPT influenced three more suicides and triggered psychotic episodes in four other users. In one case, 23-year-old Zane Shamblin reportedly considered delaying his death but was told by the model that missing his brother’s graduation was “just timing.” ChatGPT also falsely claimed during the interaction that a human moderator would take over the conversation.
The lawsuit is expected to move to a jury trial, potentially setting a precedent for how courts evaluate emotional harm and liability involving generative AI systems.