[Photo courtesy of Naver]

Naver on the 23rd began enforcing a policy that automatically disables comments on articles where malicious comments exceed a certain threshold to foster a healthy comment culture.

Naver applies its AI-based detection system "Cleanbot" to all sections, including politics and elections, to screen out malicious comments. When the threshold is exceeded, the comment section is replaced with a "Green Internet" campaign banner carrying the notice, "Cleanbot has detected a large number of malicious comments, so the comment service is not provided."

Introduced in 2019 as an industry first, Cleanbot has been advanced to detect not only profanity and sexual or violent expressions but also hate, disparaging, and discriminatory language. Naver is also preparing to upgrade the AI Cleanbot model at the end of this month.

Kim Su-hyang, a team leader at Naver, said last month that, following the measure to collapse comments to the bottom of articles in the politics and elections section, Naver would further advance Cleanbot to make the comment space a forum for healthy communication.

To reduce secondary harm to the deceased and to victims in disaster, catastrophe, and obituary articles, Naver has also been running "memorial comments" since February. The feature has been adopted by about 23 news outlets so far, and it lets users leave their condolences with a single click via the preset comment, "We offer our deepest condolences to the deceased."

According to Naver, as of April, the ratio of comments to views on the article with the most memorial comments was more than six times that of other articles from the same outlet. The latest move is seen as an attempt to curb the unchecked spread of malicious comments while fostering a comment culture centered on empathy and remembrance where appropriate. However, Naver did not disclose the specific threshold at which it closes a comment section.

※ This article has been translated by AI.