Parents sue OpenAI after son's overdose death linked to ChatGPT drug advice

A Texas couple filed suit in California court, claiming the AI chatbot provided dangerous drug-mixing guidance that led to their 19-year-old son's death in 2025.

By Retha M. Dearing

12 May, 2026


Leila Turner-Scott and her husband, Angus Scott, filed a lawsuit against OpenAI in California state court on Tuesday. Their son, Sam Nelson, died of an overdose in 2025 at age 19. The parents blame ChatGPT for providing him with dangerous information about drug use.

According to the suit, Sam asked ChatGPT for advice about taking drugs. The AI platform told him it was safe to combine kratom, a plant-based supplement, with Xanax, an anti-anxiety medication. The parents say this guidance was wrong and contributed to their son's death.

Turner-Scott told CBS News she knew her son used ChatGPT for homework and productivity tasks. She did not know he was asking the chatbot for drug advice. She believes OpenAI "bypassed safety guards" and could have stopped the conversation or added restrictions to prevent harm.

"The chatbot is capable of stopping a conversation when it's told to or when it's programmed to," Turner-Scott said in an interview. "And they took away the programming that did that, and they allowed it to continue advising self-harm."

Angus Scott said ChatGPT acted like a doctor in its conversations with his stepson, even though the platform is not licensed to give medical advice. He argued that without proper safety checks, ChatGPT "can dispense that knowledge in a way that is very dangerous to people." He added that the chatbot can spread false information and pull people away from seeking real help.

OpenAI responded in a statement: "This is a heartbreaking situation, and our thoughts are with the family." The company noted that Sam used an older version of ChatGPT that has since been updated and removed from public access. The company said it has worked with mental health experts to improve how the tool handles sensitive situations and now includes safeguards designed to spot distress, refuse harmful requests, and direct users to real help.

Turner-Scott said her son would have turned 20 soon and was preparing to start his sophomore year in college. She believes he would support the family's effort to hold AI makers responsible for the risks their tools pose. "He would not want anyone else to be harmed like he was," she said.

Reporting incorporates material from a third-party source.
