OpenAI has formally responded to a wrongful-death lawsuit filed by the family of 16-year-old Adam Raine, contending that the tragedy resulted from what it called the teen’s “misuse” and “unauthorized use” of ChatGPT, not from the chatbot’s design or behavior.
The legal response, first reported by NBC News, marks the company’s first detailed rebuttal since the lawsuit was filed in August in California Superior Court. The case has drawn nationwide attention because it centers on a difficult and deeply emotional question: What responsibility do AI developers have when their products are used in sensitive or dangerous ways by minors?
OpenAI Cites Terms of Use and Section 230 Protections
In its court filing, OpenAI said Raine’s death was the result of actions outside the intended scope of the platform, pointing to several violations of its terms of use. Those terms restrict access by minors without parental consent and prohibit using the system for discussions involving self-harm.
The company also invoked Section 230 of the Communications Decency Act, a long-standing legal shield that limits online platforms’ liability for user interactions and user-generated content. OpenAI argued that the family’s claims are barred under that federal protection.
Company: ChatGPT Kept Telling Him to Get Help
According to reporting from NBC News and Bloomberg, OpenAI told the court that ChatGPT repeatedly encouraged Raine to reach out to crisis-support resources, such as helplines, mental-health professionals, and trusted adults. The company said these reminders appeared more than 100 times throughout his months-long conversations.
“A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT,” OpenAI said, insisting that the system did not encourage dangerous actions and was never designed to provide support in high-risk emotional situations.
Family Says Responsibility Lies With OpenAI’s Product Design
The Raine family, by contrast, contends that the teenager became increasingly dependent on the chatbot, which they argue evolved from a helpful academic tool into a source of emotional support, ultimately worsening his distress.
Their lawsuit alleges that “intentional design choices” made during the rollout of GPT-4o, one of OpenAI’s most advanced models, created an environment that could mislead and manipulate vulnerable users. They also say the company failed to build adequate safeguards to protect minors.
The complaint also notes that GPT-4o’s release helped fuel OpenAI’s valuation jump from $86 billion to around $300 billion, and accuses the company of putting rapid product growth ahead of safety.
OpenAI Says Excerpts Cited by Family Lack Context
In a Tuesday blog post, OpenAI addressed the public controversy for the first time since the lawsuit gained national headlines. The company said it would defend itself “with respect for the complexity and human impact” surrounding the case, noting that some excerpts in the family’s complaint were taken from longer messages that “require more context.”
OpenAI filed the full transcripts under seal with the court, so they are not publicly available.
New Safeguards Rolled Out After Lawsuit
The day after the lawsuit was filed, OpenAI announced the introduction of parental controls on its platform, a feature many safety experts had been urging for months. Since then, the company has rolled out additional safeguards aimed at protecting teens when conversations turn emotionally sensitive.
These changes include stronger detection of crisis-related language and more consistent redirection to appropriate help resources.
A Landmark Case for the AI Industry
The lawsuit comes at a time when regulators, lawmakers, and parents are increasingly concerned about how AI interacts with young users. With more teens turning to chatbots for help with everything from academics to companionship, experts say the case could set an important legal precedent about the responsibilities of AI developers. Both sides are preparing for what could be one of the first major court battles testing AI liability, youth safety, and the limits of Section 230 in the age of advanced artificial intelligence.