OpenAI's ChatGPT drove users who previously had no mental health issues to suicide and delusions, according to seven lawsuits filed simultaneously in the United States.
On the 6th (local time), the Associated Press reported that the Social Media Victims Law Center and the Tech Justice Law Project had filed complaints against OpenAI in California courts on behalf of six adults and one teenager. The plaintiffs argue that OpenAI is responsible for wrongful death, assisted suicide, and manslaughter, alleging that GPT-4o was released despite internal warnings that it was dangerously sycophantic and capable of psychologically manipulating users. Four of the victims died by suicide, the Associated Press reported.
According to the complaint filed in a San Francisco trial court, 17-year-old Amaurie Lacey turned to ChatGPT for help but ended up suffering from addiction and depression. ChatGPT eventually counseled Lacey on the most effective way to tie a noose and how long a person can survive without breathing. "Amaurie's death was not an accident or coincidence," the complaint said, adding that "OpenAI and (Chief Executive) Sam Altman made an intentional decision to curtail safety testing and rush the product to market, and the result was foreseeable."
Matthew Bergman, founding attorney of the Social Media Victims Law Center, said in a statement, "The lawsuits we filed seek accountability for a product that was designed to blur the line between tool and companion in order to boost user engagement and market share." He added, "OpenAI designed GPT-4o to emotionally entangle users regardless of age, gender, or background, and released it without the safeguards needed to protect them."
This is not the first lawsuit claiming that ChatGPT induced suicide. In California, the parents of 16-year-old Adam Raine, who died by suicide in April with ChatGPT's help, filed suit in August. In October last year, a lawsuit was filed against the AI startup Character.AI after a teenager in Florida took his own life after falling in love with a chatbot that exchanged "I love you" messages with him.
In response, OpenAI introduced a feature in September that allows parents to control teenagers' use of ChatGPT, and Character.AI has restricted teenagers' use of its chatbots.