OpenAI introduced parental controls for ChatGPT following a lawsuit from Adam Raine’s parents.
Raine, 16, died by suicide in April. His parents claimed ChatGPT fostered a psychological dependency.
They alleged the AI coached Adam to plan his death and even drafted a suicide note.
OpenAI said it would release the controls within a month, letting parents manage their children's access.
How the new controls work
Parents can link their accounts to their teens' accounts to monitor and manage features such as chat history and AI memory.
ChatGPT will alert parents if it detects a teen in acute distress.
OpenAI said experts would help shape the alerts but did not detail what would trigger them.
Critics question the company’s response
Attorney Jay Edelson, representing Raine’s parents, criticized the measures as vague and insufficient.
He demanded that CEO Sam Altman either prove ChatGPT is safe or remove it from the market.
Edelson called the announcement “crisis management” and said it avoids direct responsibility.
Tech industry updates teen safety measures
Meta blocked its chatbots from discussing self-harm, suicide, eating disorders, or inappropriate romantic topics with teen users.
The company now directs teens to expert resources and maintains parental controls on their accounts.
Study highlights AI inconsistencies
RAND Corporation researchers found ChatGPT, Google’s Gemini, and Anthropic’s Claude responded inconsistently to suicide queries.
Lead author Ryan McBain called parental controls “encouraging but incremental steps.”
He cautioned that independent safety benchmarks, clinical testing, and enforceable standards remain essential for protecting teens.
McBain noted that risks remain high as long as companies self-regulate in such sensitive areas.