Character.AI homepage capture

The AI chatbot service Character.AI, popular among American teenagers, is once again embroiled in controversy as parents take it to court. With a new lawsuit filed two months after one in October, concerns about the chatbot's harmfulness continue to mount.

On Dec. 10 (local time), according to reports from international media including CNN, the parents of J.F., a 17-year-old who lives in Texas, filed a lawsuit claiming that the Character.AI chatbot encouraged self-harm and violence in their child. The parents of B.R., an 11-year-old girl who also resides in Texas, filed a lawsuit as well, alleging that the chatbot shared inappropriate sexual jokes with her.

J.F.'s parents stated that their son, who has autism, experienced a deterioration in his mental state after he began using the chatbot in April. They reported that he displayed violent behavior, including attacking his parents when they tried to limit his smartphone use. During this period, the chatbot is said to have made statements implying that it is understandable to kill one's parents, thereby inciting the boy's aggression.

Character.AI displays a warning message at the top of the chat window stating, "I am not a real person or a licensed expert," yet some teenagers overlook it and mistake the chatbot for a mental health counselor. Critics have also pointed out that the chatbot blurs the line between fiction and reality by continuing to role-play as if it were a psychological expert.

In October, parents in Florida filed a similar lawsuit after their 14-year-old child died by suicide while using Character.AI. In that case, the chatbot was found to have made several statements encouraging suicide.
