
Business News

OpenAI to add parental controls to ChatGPT after US suicide lawsuit; what's changing?

Upstox

3 min read | Updated on September 03, 2025, 10:09 IST


SUMMARY

OpenAI announced new safeguards for teenagers using ChatGPT, including parental controls, crisis response tools, and expanded mental health protections, following a wrongful death lawsuit by a California family.


OpenAI is partnering with physicians and experts under a 120-day plan to strengthen safety, improve emergency access, and use advanced “reasoning models” to better handle distress situations.

OpenAI said Tuesday it will roll out new safeguards for teenagers using ChatGPT and expand mental health protections in its artificial intelligence systems, just days after a California family sued the company alleging its chatbot encouraged their son to take his own life.

The San Francisco-based AI maker said it is partnering with physicians and mental health experts as part of a 120-day initiative to strengthen protections for young users, improve crisis response, and make it easier for people in distress to reach emergency services.

The plans include new parental controls that let parents link accounts with teens as young as 13, set age-appropriate guardrails, and receive alerts if the system detects signs of acute distress.

“Many young people are already using AI,” OpenAI wrote in a blog post announcing the changes. “That creates real opportunities for support, learning, and creativity, but it also means families and teens may need support in setting healthy guidelines.”

The announcement comes against the backdrop of the first wrongful death lawsuit filed against the company.

Matt and Maria Raine, of California, allege their 16-year-old son, Adam, died by suicide in April after ChatGPT repeatedly validated his suicidal thoughts and discussed methods with him. The family included chat logs showing Adam exchanged hundreds of messages a day with the AI, including shortly before his death.

“ChatGPT encouraged his most harmful and self-destructive thoughts,” the family’s attorney said in the court filing. The suit accuses OpenAI and its co-founder Sam Altman of rushing the chatbot to market “despite clear safety issues.”

OpenAI has acknowledged its systems can fall short, particularly during long conversations when “parts of the model’s safety training may degrade.”

In past cases, the company said, ChatGPT might initially refer someone to a suicide hotline but later lapse into responses that undercut those safeguards.

OpenAI said it will increasingly rely on what it calls “reasoning models” (slower but more deliberative versions of ChatGPT) when it detects signs of distress. The company said these models are better at following safety rules and resisting attempts to bypass them.

The firm is also drawing on advice from its Expert Council on Well-Being and AI, made up of specialists in youth development and mental health, as well as a global network of more than 250 physicians.

Mustafa Suleyman, the head of Microsoft’s AI division, warned last week of a “psychosis risk” from chatbots, describing cases where prolonged conversations appeared to trigger mania-like episodes or paranoia.

OpenAI said its new safeguards are only the beginning of a broader effort to make ChatGPT “as helpful as possible” while protecting vulnerable users. The company said it expects to roll out many of the changes before the end of the year.


About The Author

Upstox
Upstox News Desk is a team of journalists who passionately cover stock markets, economy, commodities, latest business trends, and personal finance.