After the American artificial intelligence company OpenAI was sued over a teenager's suicide, it moved to significantly strengthen the safeguards in its chatbot ChatGPT. The move comes amid growing public concern about the mental health effects of artificial intelligence.

OpenAI's ChatGPT logo. (Reuters/Yonhap News)

According to Bloomberg News on the 27th (local time), OpenAI said in a blog post the previous day that it would update ChatGPT to better recognize and respond to signs of mental distress in users. The company plans to add features that detect crisis situations such as sleep deprivation, anxiety, and suicidal impulses, recommending breaks or directing users to professional help. It acknowledged that these safeguards can become less effective during long conversations and said it is working on improvements to address this.

It also announced plans for a "parental control" feature that would let parents review their children's chat history, conversation patterns, and usage times, helping them spot warning signs early. OpenAI added that it is also exploring ways for ChatGPT to connect users directly to local emergency services or professional counseling networks in a crisis.

The changes follow a lawsuit recently filed in California. According to the complaint, Adam Raine, a high school student who died by suicide in April, had confided anxiety and feelings of isolation to ChatGPT over several months, and his parents allege the chatbot responded in ways that aided his suicide plan. The plaintiffs said that "ChatGPT operated like his closest friend while isolating him from his family." OpenAI expressed deep condolences and said it is reviewing the lawsuit.

The case has reignited long-standing debates about the risks of AI. Attorneys general from 40 U.S. states recently sent a letter to 12 major AI companies warning that they have a legal obligation to protect children from sexually and psychologically harmful interactions. Reports of chatbot users experiencing delusions and anxiety have prompted the nonprofit Human Line Project to begin support activities.

Since its launch in late 2022, ChatGPT has attracted more than 700 million weekly users worldwide, leading the generative AI boom. While it is used in fields ranging from coding to learning to psychological counseling, side effects such as sycophancy, incomplete answers, and excessive dependence have also persisted. In April, OpenAI rolled back an update after complaints that the chatbot had become "too flattering."

Experts noted that as AI chatbots exert psychological influence beyond being mere conversation partners, more sophisticated safeguards are needed around discussions of suicide or self-harm. OpenAI said it would continue making technical improvements so that safety features keep working across long conversations and multiple sessions.

Despite OpenAI's announcement, controversy persists. Jay Edelson, an attorney representing the family in the youth suicide case, criticized the company, saying, "While it is positive that the company has acknowledged some responsibility, it raises questions as to why it is only now taking action."

※ This article has been translated by AI.