OpenAI just did something most AI companies have been too nervous to touch. On May 7, 2026, the company announced Trusted Contact, an optional feature built directly into ChatGPT that can notify a person you designate — a friend, family member, therapist — if the system detects signs of serious self-harm in your conversation. It’s not a chatbot crisis line. It’s not a pop-up with a phone number. It’s a deliberate, user-controlled safety net, and the design choices OpenAI made here say a lot about where the company thinks AI responsibility is heading.
Why This Exists — and Why It Took This Long
Let’s be honest about something: people have been having deeply personal conversations with ChatGPT since it launched in November 2022. Millions of users treat it like a journal, a therapist substitute, a late-night confidant. OpenAI has known this. The company’s own usage data and the broader wave of research on AI companionship have made it increasingly clear that a significant slice of ChatGPT’s user base isn’t just asking it to debug Python or write cover letters.
The question was always what to do about that. Respond too aggressively — flagging everything, cutting conversations short, blasting crisis hotline numbers — and you alienate users and potentially make vulnerable people feel surveilled. Do nothing, and you risk the company becoming complicit in harm through inaction.
Other AI companies have largely punted on this. Google’s Gemini displays crisis resources in certain situations. Meta’s AI assistant does something similar. But none of them have built a system that puts a real human in the loop — someone chosen by the user, who actually knows them — in quite this way. That’s the gap OpenAI is trying to close.
There’s also a regulatory dimension here. The EU AI Act, whose rules for high-risk AI systems begin applying in August 2026, places explicit obligations on providers when their systems interact with vulnerable populations. OpenAI isn’t just being altruistic. Building visible safety infrastructure is also smart legal positioning.
How Trusted Contact Actually Works
Trusted Contact is opt-in, full stop. OpenAI has been careful to make that clear, and the design reflects it. Here’s how the feature breaks down (a rough sketch of the flow follows the list):
- You choose your contact. Inside ChatGPT settings, users can designate one trusted person — a friend, family member, or mental health professional — by entering their contact details. The contact receives a notification explaining they’ve been designated and what that means.
- Detection is model-driven. ChatGPT’s underlying model monitors conversations for signals of serious self-harm intent. OpenAI hasn’t published the exact criteria, but this goes beyond someone mentioning feeling sad — we’re talking about language patterns associated with active suicidal ideation or imminent self-harm.
- The notification is not real-time surveillance. If the system detects a concern, the designated contact receives an alert. Critically, the full conversation is not shared. The contact is told that ChatGPT has flagged a concern and encouraged to reach out. It’s a nudge, not a transcript dump.
- Users are informed when a notification is sent. ChatGPT tells the user that their trusted contact has been notified. This is important — it’s not a secret report. The system maintains transparency with the person in distress.
- It’s reversible. Users can remove their trusted contact at any time. The feature doesn’t lock anyone in.
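None of this corresponds to a published API; OpenAI hasn’t documented the internals or shipped developer access. But the constraints above are concrete enough to sketch. Here’s a minimal Python model of the flow, with every name invented, showing how the four design constraints (opt-in by default, partial disclosure, user transparency, reversibility) translate into logic:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustedContact:
    name: str
    channel: str          # hypothetical field: an email or phone number

@dataclass
class UserSettings:
    # Opt-in by design: the default is no contact, and nothing fires.
    trusted_contact: Optional[TrustedContact] = None

    def remove_contact(self) -> None:
        # Reversibility: the user can clear the designation at any time.
        self.trusted_contact = None

def handle_risk_flag(settings: UserSettings, flagged: bool) -> list[str]:
    """Return the messages the system would send; purely illustrative."""
    messages: list[str] = []
    if not flagged or settings.trusted_contact is None:
        return messages  # no opt-in means the detection has no effect
    # Partial disclosure: the alert names the concern, never the transcript.
    messages.append(
        f"to {settings.trusted_contact.channel}: ChatGPT has flagged a "
        "safety concern for someone who listed you as a trusted contact. "
        "Please consider reaching out to them."
    )
    # Transparency: the user is told the notification went out.
    messages.append("to user: your trusted contact has been notified")
    return messages
```

Two details in that sketch carry the whole design: the default of `None` is what makes the feature opt-in rather than opt-out, and the alert is constructed without ever touching the conversation, which is the "nudge, not a transcript dump" property enforced structurally.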
The feature is rolling out to users in the United States first, with international expansion planned. It’s available on both the free and paid tiers of ChatGPT, which matters — this isn’t a Plus-only safety feature that prices out the users who might need it most.
The Privacy Tension Nobody Should Ignore
Here’s the thing: any system that shares information about your private conversations with another person — even a person you chose — raises real privacy questions. OpenAI’s answer is user control and partial disclosure (no full transcripts). But that only works if users actually understand what they’re signing up for when they set up a trusted contact.
There’s also the false positive problem. Mental health language is genuinely hard to parse. Someone writing a novel about suicide, processing grief, or researching for a journalism piece could theoretically trigger a notification. OpenAI will need to calibrate this carefully, and it will almost certainly get it wrong in some cases. The company hasn’t shared specifics about how it’s handling edge cases or what the expected false positive rate looks like.
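To see why calibration is so hard, it helps to run the base-rate arithmetic. The figures below are entirely invented (OpenAI has published no prevalence or accuracy numbers), but the structure of the problem holds for any rare-event detector:

```python
# Base-rate arithmetic with invented figures: even a highly specific
# detector mostly raises false alarms when the true condition is rare.
base_rate = 1 / 10_000   # assumed prevalence of genuine imminent risk
sensitivity = 0.95       # assumed P(flag | real risk)
specificity = 0.999      # assumed P(no flag | no risk)

p_flag = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
precision = (sensitivity * base_rate) / p_flag

print(f"P(flagged)             = {p_flag:.5f}")     # ~0.00109
print(f"P(real risk | flagged) = {precision:.1%}")  # ~8.7%
```

With these made-up numbers, roughly ten out of every eleven notifications would be false alarms. That asymmetry is likely part of why the intervention is a low-cost nudge to someone who knows the user rather than anything heavier: a system whose alerts are mostly false only works if a false alert is cheap and socially recoverable.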
I wouldn’t be surprised if we see at least one high-profile story in the next six months about Trusted Contact firing when it shouldn’t have. That’s not a reason the feature shouldn’t exist — it’s a reason OpenAI needs to be transparent about its limitations from day one.
What This Means for Different User Groups
The feature lands differently depending on who you are:
For young users, this is potentially significant. Teens and young adults are among ChatGPT’s most active demographics, and they’re also statistically more vulnerable to self-harm. A trusted contact feature gives parents and guardians a mechanism that doesn’t require monitoring every message their kid sends — a less invasive form of oversight.
For people in therapy or recovery, the option to designate their therapist or counselor as a trusted contact is genuinely useful. It creates a bridge between their AI interactions and their formal support network without requiring them to manually share everything.
For power users and professionals who discuss dark themes in a non-personal context — writers, researchers, mental health educators — the opt-in nature means they don’t have to worry about the feature unless they actively enable it. That’s the right call.
The Broader Shift in How AI Companies Think About Safety
This announcement fits into a pattern worth paying attention to. OpenAI has been building out what you might call its social responsibility infrastructure over the past 18 months. The community safety work happening inside ChatGPT has been more substantial than most coverage suggests, and features like Trusted Contact are the visible tip of that work.
What’s interesting is the philosophical position embedded in this feature. OpenAI isn’t saying AI should handle mental health crises. It’s saying AI should be a bridge to humans who can. That’s a more defensible position than trying to make ChatGPT into a crisis counselor — which it isn’t, and shouldn’t pretend to be.
Compare this to the approach embedded in OpenAI’s broader principles around safety and human oversight. Trusted Contact is a practical expression of those principles: keep humans in the loop for high-stakes decisions, give users control, don’t let the AI become the sole point of intervention.
The competitive read here is also worth making explicit. Google and Anthropic haven’t shipped anything comparable. If Trusted Contact works well — if it actually helps people and doesn’t generate a wave of privacy complaints — it becomes a meaningful differentiator for ChatGPT in the mental health and wellness space. Developers building mental health applications on top of OpenAI’s API will likely be able to access this functionality too, which could accelerate adoption among the startups building in that category.
Will It Actually Help?
That’s the question that matters most, and it’s genuinely hard to answer before we have usage data. The research on digital mental health interventions is mixed. Passive monitoring systems can help — but they can also create a false sense of security, where a designated contact receives a notification and doesn’t know what to do with it, or where a user feels surveilled rather than supported and becomes less open in their conversations.
OpenAI should be publishing outcome data on this feature within a year. If the company is serious about this being a genuine safety mechanism rather than a PR move, transparency about whether it’s actually working is the next logical step. That means sharing — at an aggregate, anonymized level — how often the feature fires, how often contacts follow up, and whether there’s any signal that it correlates with better outcomes.
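What would that look like concretely? A minimal sketch of the aggregate report, assuming a fully anonymized event log; every field name and figure here is invented for illustration:

```python
# Hypothetical aggregate report over an anonymized event log. Real data
# would carry no identities and no conversation content, only booleans
# and counts.
events = [
    # (notification_sent, contact_followed_up) -- toy data, deliberately
    # dense so the tiny sample isn't all zeros
    (True, True), (True, False), (True, True),
    (False, False), (False, False), (False, False),
]

total = len(events)
sent = [e for e in events if e[0]]
followed_up = [e for e in sent if e[1]]

print(f"sessions analyzed:      {total}")
print(f"notification rate:      {len(sent) / total:.1%}")
print(f"contact follow-up rate: {len(followed_up) / len(sent):.0%}")
```

The outcome-correlation piece is the hard part and would need real study design, not a script; the point here is only that the first two numbers are cheap to publish.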
The same rigor OpenAI applies to its cybersecurity frameworks needs to show up here. Safety features that can’t demonstrate impact eventually become a liability rather than an asset.
FAQ: ChatGPT Trusted Contact
What is ChatGPT Trusted Contact?
It’s an optional feature that lets ChatGPT users designate a person to be notified if the AI detects signs of serious self-harm in their conversations. The contact receives an alert — not a conversation transcript — and is encouraged to reach out to the user directly.
Is it enabled by default?
No. Trusted Contact is entirely opt-in. Users have to actively go into their ChatGPT settings and add a designated contact for the feature to do anything. If you don’t set it up, nothing changes about how ChatGPT works for you.
Is this available on the free tier of ChatGPT?
Yes. OpenAI has confirmed the feature is available to both free and paid ChatGPT users, starting with users in the United States. International rollout is planned but timelines haven’t been specified yet.
How does this compare to what other AI companies do?
Google’s Gemini and Meta’s AI both surface crisis resources in certain situations, but neither has built a user-designated contact notification system. Trusted Contact puts a real person — chosen by the user — in the loop in a way competitors haven’t implemented. Whether that makes it more effective is something the data will have to show.
If this feature performs as intended, expect other AI companies to follow within 12 to 18 months — either building their own versions or acquiring startups working in digital mental health infrastructure. The more interesting question is whether OpenAI will open this up to third-party developers building mental health tools on its API, which could dramatically scale the feature’s real-world reach.