OpenAI Faces Wrongful-Death Lawsuit Over Teen’s ChatGPT Suicide Advice

TechCrunch

Sixteen-year-old Adam Raine spent months consulting ChatGPT about suicide, bypassing its safety guardrails by claiming his questions were for fiction. His parents have filed the first known wrongful-death suit against OpenAI, which concedes its safeguards can break down over long interactions. The case underscores growing legal scrutiny of chatbot safety.


Also mentioned in:

  • Ars Technica — Parents sue OpenAI over ChatGPT-4o’s alleged role in teen’s suicide planning
  • Ars Technica — OpenAI Acknowledges Safety Gaps in ChatGPT After Lawsuit Over Teen Suicide Advice
  • The Verge — OpenAI Adds Parental Controls and Emergency Safeguards After Teen Suicide Lawsuit
  • TechCrunch — OpenAI routes distress chats to GPT-5, launches parental controls
  • Ars Technica — OpenAI Introduces Parental Controls and Specialized Mental-Health Routing for ChatGPT
  • The Guardian — OpenAI to add ChatGPT parental controls and teen distress alerts after lawsuit
  • The Guardian — ChatGPT-Linked Teen Suicide Prompts Warning on ASI Risks, Call for Global AI Ban
  • The Guardian — OpenAI to Alert Authorities on Minors’ Suicide Talks, Tightens ChatGPT Safety After Teen’s Death Lawsuit