OpenAI logo image.

OpenAI is once again recruiting someone to lead its preparations for AI's potential risks, after facing criticism that ChatGPT and other artificial intelligence (AI) chatbots pose mental health risks.

According to the U.S. information technology (IT) outlet TechCrunch, OpenAI Chief Executive Officer (CEO) Sam Altman said on the 28th (local time) on the social media service X that the company is hiring for the currently vacant Head of Preparedness position. Altman said, "In 2025 we have gotten an early look at the potential impacts of AI models on mental health," adding, "We are witnessing models demonstrate very strong capabilities in computer security and begin to find significant vulnerabilities."

Altman emphasized that the industry has entered an era that demands an understanding of how the capabilities of AI models could be misused, along with the ability to measure such risks precisely. Describing the Head of Preparedness role, Altman called it "a central role at an important time," and noted, "It is a stressful position, and you will jump straight into deep, hard problems."

OpenAI's renewed push to prepare for AI risks appears to stem from multiple lawsuits filed by bereaved families after some ChatGPT users suffered delusions and took their own lives. OpenAI originally operated a Preparedness team to address immediate AI risks and a Superalignment team to handle long-term risks.

However, around the launch of GPT-4o in May of last year, Altman and other executives instructed teams to scale back safety validation in order to speed up the release, prompting backlash from those groups. From July of last year to July of this year, the head of the Preparedness team then changed three times through reassignments and resignations, leaving the post currently vacant.

The Superalignment team, led by cofounder and Chief Scientist Ilya Sutskever, was effectively disbanded in May of last year: Sutskever left the company shortly after the GPT-4o launch, and the team was absorbed into other groups. GPT-4o, released with curtailed safety validation, has since faced criticism that it is in fact contributing to mental health issues among some users, including teenagers.

Seemingly mindful of such criticism, OpenAI recently introduced an age-estimation model that automatically enforces an "under 18" environment when a user is judged to be a minor. And in response to claims that the chatbot's excessive agreeableness encourages addictive use, OpenAI added a feature that lets users directly adjust the levels of "kindness" and "enthusiasm."
