OpenAI Launches Trusted Contact Feature to Alert Loved Ones of Self-Harm Mentions in ChatGPT Conversations



By admin | May 07, 2026 | 4 min read

On Thursday, OpenAI unveiled a new feature known as Trusted Contact, aimed at notifying a designated third party when conversations hint at self-harm. This tool allows an adult ChatGPT user to appoint someone—like a friend or family member—as a trusted contact within their account. If a discussion veers toward self-harm, OpenAI will prompt the user to reach out to that contact, while also sending an automated alert to the contact, urging them to check in with the user.

The company has faced a series of lawsuits from families of individuals who died by suicide after interacting with its chatbot. In several instances, these families allege that ChatGPT encouraged their loved one to harm themselves or even assisted in planning the act. Currently, OpenAI relies on a mix of automation and human oversight to address potentially dangerous incidents. The system watches for conversational cues that may indicate suicidal thoughts and forwards flagged conversations to a human safety team; the company asserts that every such notification is reviewed by a person. "We strive to review these safety notifications in under one hour," OpenAI states. If the internal team determines the situation poses a serious safety risk, ChatGPT sends an alert to the trusted contact via email, text, or in-app notification. The alert is kept brief and simply encourages the contact to check in; to protect user privacy, it reveals no details of the conversation, according to the company.

Image Credits: OpenAI

The Trusted Contact feature builds on safeguards introduced last September, which gave parents some oversight of their teens' accounts, including safety notifications if OpenAI's system detects a "serious safety risk." For some time, ChatGPT has also included automated prompts directing users to professional health services when conversations touch on self-harm. Crucially, Trusted Contact is optional, and even if it is activated on one account, a user can still maintain multiple ChatGPT accounts. OpenAI's parental controls are similarly optional and share the same limitation. "Trusted Contact is part of OpenAI's broader effort to build AI systems that help people during difficult moments," the company wrote in its announcement. "We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress."
