
OpenAI has rolled out a significant new safety feature for ChatGPT users: the Trusted Contact system. Announced on Thursday, May 7, 2026, the feature is designed to offer an additional layer of support for people who express thoughts of self-harm in conversations with the chatbot.
The development comes as OpenAI navigates a wave of legal challenges. Several families have filed lawsuits alleging that ChatGPT encouraged or assisted their loved ones in planning suicides, underscoring the need for stronger safety protocols in AI systems.
Introducing the Trusted Contact System
The feature lets adult ChatGPT users designate another person (a close friend, family member, or anyone else they trust) as a “trusted contact” in their account settings, so that someone is ready to be alerted in a critical situation.
If a conversation veers into topics suggesting self-harm, the system is designed to recognize those cues. It first encourages the user directly, with in-app guidance, to reach out to their nominated contact. At the same time, an automated alert is sent to the trusted contact, urging them to check in with the user and offer support.
The alerts, delivered by email, text message, or in-app notification, are deliberately brief and focus solely on prompting the contact to connect with the user. To protect user privacy, the notification includes no details about the conversation that triggered it.
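The privacy-preserving design described above, an alert that prompts the contact to check in without revealing any conversation content, can be sketched in a few lines. The names and structure below are illustrative assumptions, not OpenAI's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch of a privacy-preserving trusted-contact alert.
# These types and field names are assumptions for illustration only.

@dataclass
class RiskEvent:
    user_display_name: str
    conversation_excerpt: str  # sensitive; must never leave this module
    channel: str               # "email", "sms", or "in_app"

def build_alert(event: RiskEvent) -> dict:
    """Build a minimal alert payload that carries no conversation details."""
    message = (
        f"{event.user_display_name} may need support right now. "
        "Please consider checking in with them."
    )
    # The payload deliberately omits event.conversation_excerpt: the
    # contact learns only that they should reach out, not why.
    return {"channel": event.channel, "message": message}
```

The key design choice, per the article, is that the conversation excerpt never enters the outbound payload at all, so no downstream delivery channel can leak it.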
OpenAI’s Broader Commitment to User Safety
Trusted Contact builds on OpenAI’s existing approach to user safety, which pairs automated detection with human oversight. The hybrid model reflects the view that while AI can detect patterns, human judgment remains indispensable for sensitive safety decisions.
OpenAI says every such notification receives human review, and that the company aims to assess these safety risks in under an hour. Swift human intervention is a central part of its incident-management strategy and reflects how seriously the alerts are treated.
The safeguard follows other measures introduced by the company, including features launched last September that gave parents greater oversight of their teenagers’ ChatGPT accounts, notifying them if the system identified a “serious safety risk.” ChatGPT has also long included automated prompts encouraging users to seek professional help when conversations trend toward self-harm.
Navigating Ethical AI and User Autonomy
Like the parental controls before it, the Trusted Contact feature is entirely optional: users must actively enable and configure it in their account settings. That respects user autonomy, but it is also a limitation. A user could operate multiple ChatGPT accounts, or simply decline to activate the safeguard, diminishing its reach and effectiveness.
OpenAI frames Trusted Contact as one piece of a broader, ongoing effort to build AI that supports people in their most difficult moments. The company says it will continue collaborating with clinicians, mental health researchers, and policymakers to refine how its systems respond when users are in distress.
The step reflects an industry-wide recognition of the responsibility that comes with putting powerful AI tools in the public’s hands. By combining technological safeguards with human review, OpenAI aims to make its platform safer while navigating the ethical complexities of AI and mental health support.
Source: TechCrunch – AI