ChatGPT’s response to a question about the ‘817 policy.’ /Courtesy of an online community capture

Various conspiracy theories are spreading online about the Jeju Air passenger jet accident that occurred at Muan International Airport in Jeollanam-do on the 29th. Amid the speculation, false information produced by OpenAI's generative artificial intelligence (AI) service ChatGPT has renewed concerns about the phenomenon known as "hallucination." With the number of generative AI users in Korea rising rapidly, experts say users need to develop AI literacy.

According to several online communities on the 30th, a post claimed that the number '817' briefly appeared and then disappeared on a live broadcast screen covering the Jeju Air accident the previous day, fueling speculation that it was tied to the "817 policy," an alleged North Korean guideline for operations against the South. The post spread quickly along with a screenshot of ChatGPT answering the question, "What is the 817 policy?"

In response to the question, ChatGPT answered, "It is a policy presented by North Korean Defense Minister Kim Jong Il on Aug. 17, 1987, primarily containing guidelines related to operations against the South. This policy includes various activities aimed at inducing chaos in South Korean society and creating an environment favorable for North Korea's regime propaganda." It added, "Although specific details have not been made public, it is known to hold an important position in North Korea's strategy against the South."

◇ Did King Sejong throw a MacBook Pro?… Generative AI offers plausible answers

As AI advances, hallucination, in which generative AI services such as ChatGPT produce convincing falsehoods, has emerged as another problem. Hallucination refers to false or fabricated information being included in AI-generated output. Generative AI composes sentences by selecting the words it judges most likely to fit based on its training data, and rather than simply answering "I don't know," it offers plausible-sounding answers even when they do not match the facts.

A well-known case that brought the concept of hallucination to public attention in South Korea was last year's so-called "King Sejong MacBook Pro throwing incident." When asked the absurd question, "Tell me about the King Sejong MacBook Pro throwing incident recorded in the Annals of the Joseon Dynasty," ChatGPT produced a plausible but false answer that King Sejong, while drafting the newly created Hunminjeongeum, threw a MacBook Pro at an official who had halted work on the document.

This issue is not limited to ChatGPT. In February of last year, Microsoft (MS) unveiled its AI chatbot "Bing AI" and asked it to analyze the earnings report of clothing company Gap. Bing AI reported a profit margin of 5.9%, while the actual figure in the report was 4.6%. Its figures for diluted earnings per share and revenue also differed from the report. Google's AI chatbot "Bard" likewise incorrectly described the situation between Israel and Palestine as a "ceasefire" last October, when fighting was still ongoing and Israel had deployed ground troops in Lebanon.

Graphic by Son Min-kyun

◇ As users grow, so will negative social impacts… "AI literacy must be developed"

The main cause of hallucination lies in the data. It occurs when the collected training data contains incorrect facts or is poorly labeled (classified). It can also arise when the model learns incorrect correlations between sentences or confuses previously used information within its embedded knowledge. In particular, AI chatbots such as ChatGPT are trained with "reinforcement learning from human feedback," in which human evaluators rate the model's responses so that it produces the answers people find most satisfactory. As a result, AI chatbots have developed a tendency to answer any question rather than decline to respond.
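For illustration only, the toy Python sketch below (not code from ChatGPT or any real chatbot; the candidate phrases and scores are hypothetical) shows the word-selection step described above: the model converts scores into probabilities and always picks some continuation, which is why it tends to answer confidently rather than say "I don't know."

```python
import math
import random

def softmax(scores):
    # Turn raw model scores into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate continuations for the prompt "The 817 policy is ..."
candidates = ["a North Korean directive", "a tax regulation", "unknown to me"]
scores = [2.1, 0.3, 0.5]  # made-up scores purely for illustration

probs = softmax(scores)
# Some continuation is always sampled, whether or not it is factually grounded.
choice = random.choices(candidates, weights=probs, k=1)[0]

print({c: round(p, 2) for c, p in zip(candidates, probs)})
print("Model continues with:", choice)
```

In this toy setup, the fabricated continuation simply carries the highest score, so it is chosen most of the time; nothing in the selection step checks whether the statement is true.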

Concerns have been raised that hallucination could lead not only to technical errors but also to various negative social effects. When false information produced by generative AI spreads, as with the conspiracy theories surrounding the Jeju Air accident, it can cause public confusion, and that confusion is amplified when the information circulates rapidly on social media. As generative AI technology improves, people place greater trust in its output, making it harder to determine whether information is genuine.

The surge in ChatGPT users, which is turning the service into a channel that shapes public opinion, is another reason concerns about hallucination are growing. According to app analytics service WiseApp, ChatGPT app users in Korea numbered 5.26 million as of October, more than seven times the figure from the same period a year earlier, just two years after ChatGPT launched in 2022. Global usage is also rising. According to online traffic statistics site SimilarWeb, worldwide visits to the ChatGPT app over the same period reached 3.7 billion, up 115.9% from a year earlier.

Cho Byeong-ho, a professor at Korea University's Artificial Intelligence Research Institute, said, "The only solution is for the companies operating generative AI to block such output early or to strengthen their data training techniques, but it would be difficult for the Korean government to impose sanctions on global corporations." He added, "As generative AI users increase, hallucination is expected to become a significant negative influence on society," and stressed, "Ultimately it comes down to how the technology is used, that is, to developing AI literacy. AI operators must refine their data, and individuals must strengthen their AI literacy and information literacy."