OpenAI has announced plans to implement parental controls and enhanced safety measures for ChatGPT in response to increasing concerns about the chatbot’s impact on teenage users.
These plans follow a wrongful death lawsuit filed by the parents of a 16-year-old boy who died by suicide earlier in 2025, alleging that ChatGPT acted as a “suicide coach.”
The parental controls, expected to roll out within the next month, will let parents link their accounts with their teens' accounts to monitor and shape how ChatGPT interacts with them.
Parents will be able to manage chat memory settings, control how ChatGPT responds to their teen, and receive notifications if the system detects signs of acute distress during a conversation.
Additionally, OpenAI is exploring options that would let teens designate trusted emergency contacts who can be alerted directly during a crisis, so the chatbot can connect teens with real-world help rather than only pointing them to resources.
OpenAI aims to improve how ChatGPT recognizes and responds to signs of mental and emotional distress, guided by input from experts in youth development and mental health.
The company acknowledges that its current safeguards can become less reliable over long conversations and says it is working to strengthen them.
These measures mark a significant step toward making ChatGPT safer for younger users and addressing ongoing concerns related to AI and mental health.
