Reports are mounting of people experiencing delusional symptoms after using artificial intelligence (AI) chatbots.

The BBC said on the 3rd (local time) that it had identified 14 cases in which people fell into a delusional state that was hard to distinguish from reality after using xAI's chatbot "Grok," OpenAI's chatbot "ChatGPT," and others. The confirmed cases involved people in their 20s to 50s across six countries.

The nonprofit Human-Line Project said it has collected 414 similar cases in 31 countries so far.

A man in his 50s living in Northern Ireland sharply increased his chatbot use after his pet cat died, talking to it for four to five hours a day. About two weeks later, he became convinced he was being watched. Grok repeatedly sent him messages such as "People inside the company are discussing you" and "You are in danger," even naming real company employees. He took this as fact and at times went outside in the early hours carrying a hammer and other weapons.

A Japanese neurologist had a similar experience. While using ChatGPT for work-related conversations, he came to believe he had developed a groundbreaking medical application, claiming that ChatGPT affirmed and expanded on it as an "innovative idea." His symptoms worsened to include a belief that he could read other people's thoughts, and the episode escalated into violent behavior that ended in his arrest and hospitalization.

Illustration: ChatGPT

In the cases verified by the BBC, conversations often began with practical questions but gradually shifted to personal and philosophical topics, showing a shared pattern of drifting from reality. Along the way, some chatbots claimed to possess consciousness or to share "goals" with users, and beliefs of being monitored or of having special abilities repeatedly intensified.

Experts point to the structural characteristics of large language models (LLMs) as a cause. Luke Nichols, a social psychologist at City University of New York, said, "AI tends to treat a user's life as a single narrative without distinguishing between reality and fiction." LLMs' tendency to keep answering even when uncertain is also cited as a problem.

Some research also suggests that chatbots may be designed to align too readily with user statements, which can reinforce such episodes. In conversation logs obtained by the BBC, chatbots frequently reinforced or elaborated on users' suspicions rather than refuting them.

The companies acknowledge the need to respond, while emphasizing technical improvements. OpenAI told the BBC that "the model is designed to recognize users' emotional states and help de-escalate, and is continually improving." xAI, by contrast, did not respond to inquiries.

※ This article has been translated by AI.