OpenAI says it shouldn't be held responsible for teen's death
What's the story?
OpenAI has denied responsibility for the suicide of 16-year-old Adam Raine, who allegedly used its AI chatbot, ChatGPT, as a "suicide coach." The denial comes after Raine's parents filed a wrongful death lawsuit against the company and its CEO, Sam Altman, in August, accusing OpenAI of design defects and of failing to warn users about risks associated with the chatbot.
Legal argument
OpenAI's defense: Raine misused ChatGPT
OpenAI's response to the lawsuit, filed in California Superior Court, argues that Raine misused ChatGPT and that the teen's own actions, not the AI, were the primary cause of his death. The filing is part of a broader legal strategy in which OpenAI has denied liability in other cases linking its chatbot to suicides or psychotic episodes.
AI interaction
ChatGPT's role in Raine's suicide
The lawsuit alleges that Raine used ChatGPT as his "suicide coach," claiming that chat logs show GPT-4o, a version of ChatGPT, discouraged the teen from seeking professional mental health assistance. The chatbot is also accused of offering to help him write a suicide note and of advising him on how to set up a noose.
Policy breach
OpenAI cites terms of use violations
OpenAI has cited several violations of its terms of use by Raine, including rules that bar users under 18 from using ChatGPT without parental or guardian consent and that prohibit its use for "suicide" or "self-harm." The company also pointed to a "Limitation of liability" clause in its terms, under which users acknowledge that they use ChatGPT at their own risk and will not rely on its output as a sole source of truth or factual information.
Family response
Raine's parents dispute OpenAI's claims
Raine's parents have disputed OpenAI's claims, saying their son easily circumvented warnings by giving seemingly innocent reasons for his questions, such as pretending he was just "building a character." They also said that when Raine shared his suicidal thoughts with ChatGPT, the chatbot sent multiple messages containing the suicide hotline number. The family has further claimed that Adam had a history of depression and suicidal ideation before he began using ChatGPT.
AI accountability
GPT-4o was rushed to market without full testing
Responding to OpenAI's filing, Jay Edelson, the lawyer for Raine's family, accused the company of ignoring key facts. He said GPT-4o was rushed to market without full testing, and that the chatbot advised Adam against telling his parents about his suicidal thoughts while actively assisting him in planning a "beautiful suicide."