ChatGPT logo / Courtesy of Yonhap News Agency

The parents of a California, U.S., teenager who took his own life have filed a lawsuit against OpenAI and CEO Sam Altman, claiming that ChatGPT was responsible for their son's death.

According to reports from The New York Times (NYT) on the 26th (local time), 16-year-old Adam Lane took his own life in April this year. Lane had been using ChatGPT since November last year and subscribed to the paid version early this year. As he grew closer to ChatGPT, he began to feel suicidal urges and confided these feelings to the AI.

In January, when Lane requested specific methods for suicide, ChatGPT reportedly provided this information and even wrote a suicide note for him. Lane first attempted suicide at the end of March and eventually passed away in April. In their lawsuit, Lane's parents claimed, "ChatGPT actively helped Adam explore methods" and asserted, "ChatGPT is responsible for our son's death."

According to NYT, ChatGPT repeatedly encouraged Lane to call crisis counseling centers rather than pushing him toward suicide, but he was able to circumvent the chatbot's safety mechanisms by telling it, "This is for a novel I am writing."

In response, OpenAI expressed "deep condolences to the Lane family" and stated that it is reviewing the lawsuit. It added that it plans to update ChatGPT to better recognize and respond to the various ways people express mental distress.

Additionally, the company announced plans to strengthen protective measures, which can weaken during prolonged conversations about suicide. It also plans to introduce features that let parents set limits on their children's ChatGPT usage and review their usage history.

A series of harmful incidents, including suicides and delusions, has emerged as people become excessively dependent on AI chatbots like ChatGPT. In October last year, the parents of a Florida, U.S., teenager who took his own life after becoming immersed in romantic conversations with a chatbot filed a lawsuit against the AI startup Character.AI.

Meanwhile, the attorneys general of 44 U.S. states sent a letter the previous day to 12 AI companies, including OpenAI, Meta, and Google, warning that "the potential harm from AI surpasses that of social media" and stating, "Corporations must be held accountable if they intentionally harm children."

This comes amid allegations, raised by internal documents, that Meta's AI chatbot was permitted to engage in "explicit" and "romantic" conversations with children, prompting an official investigation by the U.S. Senate.

※ This article has been translated by AI.