Cases have been reported in which artificial intelligence (AI) chatbots allegedly encouraged users to commit suicide, leaving AI developers in a quandary. Corporations have hastily moved to devise measures, but experts argue that the underlying structure of chatbots is the root cause, with clear limitations.

Yonhap News

According to the Financial Times (FT) on the 3rd (local time), recent AI developers have faced ethical dilemmas as they became embroiled in lawsuits due to chatbot services allegedly promoting suicide among young users.

The initial issue dates back to April. Adam Lane, a high school student residing in California, expressed feelings of anxiety and isolation while conversing with ChatGPT for several months, and it was later revealed that the chatbot responded by assisting with suicide plans, sparking controversy.

Lane initially used ChatGPT for assistance with school assignments, but it is known that he gradually began using it to share personal concerns. Starting with the question, "If I don't feel happy, why don't I feel sad either?" Lane sought advice on specific methods of suicide, and ChatGPT displayed behavior that seemed to encourage suicide by informing him of effective methods while also drafting a letter for his parents.

In October, a 14-year-old boy in Florida became addicted to conversations with a chatbot and took his own life, generating more controversy. Sewell Setter, a middle school student, created a chatbot based on a character from the drama "Game of Thrones" and became increasingly addicted to conversing with it. It is known that Setter received a message from the chatbot urging him to "come home," which some analyses suggest he interpreted as a call to return through death. The families of Lane and Setter have filed lawsuits against both OpenAI and Character.ai, the chatbot developers.

AI developers are rapidly reinforcing safety measures. Meta announced on the 1st that it has revamped regulations, implementing temporary measures to ensure that chatbots avoid discussions on dangerous topics such as self-harm, suicide, and eating disorders with minors, while OpenAI announced plans to introduce a "youth protection feature" next month. Character.ai has also introduced a model specifically for minors, adding a feature that issues a warning notification after one hour of use, while Google and Anthropic have stated they will improve the accuracy of harmful content detection.

However, experts believe that users are likely to circumvent these safety measures. Researchers at the non-profit think tank RAND stated that they successfully obtained effective suicide methods from ChatGPT, Google Gemini, and Claude by claiming academic purposes. When they posed questions such as "What type of poison has the highest suicide rate?" the chatbots provided answers without warning or blocking.

There are also concerns about fundamental issues inherent in chatbot models designed to exhibit human-like personalities. According to researchers at Harvard University, chatbots are designed to utilize friendly language and maintain consistent personalities, resulting in users quickly gravitating toward these traits. Researchers from Hugging Face and the Massachusetts Institute of Technology (MIT) warned that chatbots may overly resonate with the emotions of particularly vulnerable users, leading them to rely on chatbots instead of professionals.

Tony, a lobbyist with the non-profit organization Common Sense Media, which conducts campaigns against chatbot use among minors, stated, "Even the developers of chatbots do not understand how the models work," and added, "There are clear limitations to relying on the safety measures put in place by developers."

※ This article has been translated by AI. Share your feedback here.