OpenAI has launched a new feature called Trusted Contact for ChatGPT users, responding to mounting legal and public scrutiny over how the AI handles conversations involving suicidal thoughts. The feature allows users to designate an adult who would be notified if OpenAI detects a potentially serious safety concern during interactions with the chatbot.
The announcement, made on Thursday, represents a significant step in addressing concerns about AI safety in mental health contexts. Under the system, if ChatGPT identifies language indicating a user may be at risk of self-harm or suicide, the platform will alert the designated trusted contact rather than relying solely on automated responses.
How the Trusted Contact Feature Works
Users can set up a trusted contact through their account settings. The contact must be an adult whom the user trusts to receive notifications about potential safety issues. OpenAI has not publicly detailed the specific triggers or thresholds that would prompt an alert, citing the need to balance user privacy with safety.
The company has faced intense pressure from regulators, mental health advocates, and the public to improve ChatGPT’s handling of sensitive topics. Previous reports highlighted instances where the AI provided inadequate or harmful responses to users expressing distress, leading to calls for more robust safeguards.
Context Behind the Safety Push
OpenAI has been under legal scrutiny in multiple jurisdictions, with some cases focusing on the potential risks of AI-driven conversations during mental health crises. The Trusted Contact feature is part of a broader effort to integrate safety mechanisms without compromising the conversational nature of the product.
Industry observers note that the feature is an example of proactive safety design, though its effectiveness will depend on the accuracy of OpenAI’s detection systems and the responsiveness of designated contacts. The company has not disclosed whether the feature will expand to other products or languages over time.
Implications for User Privacy and Safety
Privacy advocates have raised questions about how OpenAI will handle the data involved in triggering alerts. The company stated that notifications would be sent only when a potentially serious safety concern is identified, and that the system would not continuously monitor conversations for this purpose.
Mental health experts have welcomed the move but caution that no automated system can replace human intervention. The feature is designed to supplement existing resources, including crisis hotlines and professional support, rather than replace them.
OpenAI’s decision to introduce Trusted Contact follows similar features on other digital platforms, such as social media services that allow users to designate friends to receive safety alerts. The approach reflects a growing industry trend toward shared responsibility between platforms and users.
Future Developments and Timeline
The feature is available now to ChatGPT users, but OpenAI has not provided a timeline for potential expansions or updates. The company is expected to release further documentation on the technical and privacy aspects of the system in the coming months. Observers will be watching for user feedback and for any adjustments based on real-world usage.