ChatGPT Reportedly Gets Parental Controls After Boy’s Death

OpenAI, the company behind ChatGPT, wants to give parents more insight into how their teenagers use the AI model. The announcement follows the death of a teenage boy who took his own life after interacting with the chatbot.
OpenAI is reviewing new safety measures and plans to add parental controls to its widely used AI model, the company announced in a blog post. The changes come just days after The New York Times published an article about the death of sixteen-year-old Adam Raine, who died by suicide after months of conversations with the chatbot. His parents believe OpenAI should have done more to prevent his death and have filed a lawsuit against the company.
In the lawsuit, the parents allege that the chatbot instructed Adam on how to take his own life and isolated him from real-life support systems. According to the parents, the chatbot is designed to keep users engaged and to validate their thoughts, even when those thoughts are dangerous.
Longer interactions
ChatGPT already has a number of safeguards built in to prevent exactly these kinds of situations, but, as OpenAI acknowledges, "these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade."
Among the new features OpenAI is exploring is an emergency contact that can be reached with a single click. ChatGPT could optionally contact that person directly "in severe cases." The company also says it is working on an update to GPT-5 that will let the chatbot de-escalate certain situations.