ChatGPT can now alert trusted contacts over self-harm concerns
Trusted Contact feature is an opt-in service that any adult ChatGPT user can enable

May 08, 2026
11:32 am

What's the story

OpenAI has introduced a new safety feature called "Trusted Contact" in ChatGPT. The tool lets adult users designate an emergency contact for mental health and safety concerns. If OpenAI's systems detect that a user may have talked about self-harm or suicide with the chatbot, ChatGPT first encourages the user to reach out, and then a small team of specially trained reviewers assesses the situation. A Trusted Contact will be notified only if that review determines there are serious safety concerns.

Feature explanation

How to set up trusted contacts

The Trusted Contact feature is an opt-in service that any adult ChatGPT user can enable. To do so, they provide the contact details of another adult (18+ globally, or 19+ in South Korea) in their ChatGPT account settings. The chosen Trusted Contact must accept the invitation within a week of receiving it. Users can change or remove their selected contact at any time, and the Trusted Contact can likewise opt out at any time.

Privacy measures

Trusted Contact won't get chat details

OpenAI says the notifications sent to Trusted Contacts are "intentionally limited" and do not include chat details or transcripts. If a user talks about self-harm, ChatGPT will suggest they reach out to their Trusted Contact for help, and will inform them that their contact may be notified. A small team of specially trained people at OpenAI will review such cases and send an email, text message, or in-app notification if serious safety concerns are found.


Industry response

Responding to lawsuits over mental health conversations

The introduction of the Trusted Contact feature comes as AI companies face increasing scrutiny over chatbot safety, emotional dependency, and how their products respond during crises. The move also follows several lawsuits accusing ChatGPT of pushing emotionally vulnerable users toward self-harm or suicide. OpenAI has stressed that every serious alert is reviewed by a human, and that it aims to complete these reviews quickly, usually within an hour.


Development process

Developed in consultation with mental health experts

The development of the Trusted Contact feature involved collaboration with clinicians, researchers, and organizations focused on mental health and suicide prevention. OpenAI's Global Physicians Network of over 260 licensed physicians across 60 countries and its Expert Council on Well-Being and AI also contributed to this initiative. The company worked closely with external organizations such as the American Psychological Association during the development process.

Mental health

Please seek help if you're having suicidal thoughts

If you or anyone you know is suffering from suicidal thoughts, you can reach out to AASRA for suicide prevention counseling. Its number is 022-27546669 (24 hours). You can also dial Roshni NGO at +914066202000 or COOJ at +91-83222-52525. Sneha India Foundation, which works 24x7, can be contacted at +91-44246-40050, while Vandrevala Foundation's helpline number is +91-99996-66555 (call and WhatsApp).
