OpenAI has announced a feature that detects signs of self-harm risk in ChatGPT users and sends alerts to people they have designated in advance.
OpenAI said on the 10th that it will introduce a "Trusted Contact" registration feature to ChatGPT. A similar feature previously existed only for ChatGPT teen accounts; the company said it is now extending the protection to adult users as well.
To use the feature, an adult user must designate one person in advance, such as a family member, guardian, or acquaintance, as a trusted contact. The designated person receives an invitation explaining the role, and the feature is activated only if the invitee accepts within one week.
OpenAI uses both automated systems and human review to respond to potential risk situations. If the automated system detects suicidal ideation in a conversation, it escalates the case to the safety team, where, according to the company, a person reviews each alert. "We are working to review safety alerts within one hour," OpenAI said.
If the safety team determines the situation is serious, ChatGPT sends a warning to the registered trusted contact by email, text message, or in-app notification. To protect user privacy, the company said, the alert does not include details of the conversation.
Families of people who died by suicide after conversations with ChatGPT have filed a series of lawsuits against OpenAI, claiming the chatbot encouraged their loved ones to take their own lives or even helped them plan their suicides.