Kim Myung-joo, head of the AI Safety Institute./Courtesy of Ahn Sang-hee

"If the United States scores 100 and China 85 in current artificial intelligence (AI) technology and use, the rest of the countries are following in the 30s. Korea's goal is to overtake the United Kingdom, France, Singapore, Canada, and Japan and take a clear third place with a gap over fourth. For Korea to achieve AI G3 (AI global top 3), securing AI safety is essential."

Kim Myung-joo, 62, the inaugural head of the AI Safety Institute (AISI), said in an interview with ChosunBiz at the institute in Sampyeong-dong, Bundang-gu, Seongnam, Gyeonggi, on the 28th, "Securing AI safety under government leadership is not regulation but a promotion policy." Kim added, "Achieving AI G3 is not easy, but Korea must be No. 1 in the world in AI safety to secure AI competitiveness."

The AI Safety Institute was established last November under the Electronics and Telecommunications Research Institute (ETRI). It is the world's sixth government-led AI safety institute, after those of the United Kingdom, the United States, Japan, Singapore, and Canada.

Kim is an expert on AI ethics, security, technology, and copyright. After earning a Ph.D. in computer engineering from Seoul National University, Kim has been a professor in the Department of Information Security at Seoul Women's University since 1995. Kim has served as head of the Right AI Research Center, chairperson of the International AI Ethics Association, chairperson of the AI Ethics Policy Forum, and vice chairperson of the Korea Copyright Commission (KCC). Since last year, Kim has been an expert member of the OECD Global Partnership on AI (GPAI), working to promote the development and use of safe and responsible AI. The following is a Q&A with Kim.


─It has been a year since the AI Safety Institute was founded. What are the achievements and plans so far?

"The past year was a time to find Korea's place in the international community. When Vice Prime Minister Bae Kyung-hoon recently visited the institute, we were asked to draw up a 'comprehensive plan (tentative name) to build a national AI safety ecosystem,' which we plan to complete within the year. We will set out and present a plan for how ministries will cooperate when AI risks arise. We are also preparing for the AI Basic Act, which is scheduled to take effect in January. Going forward, the institute will focus on recognizing AI risks, presenting related policies, researching evaluation criteria and methods, technology and standardization, international exchanges and cooperation, and ensuring safety."

─Many view the AI Safety Institute as a regulatory body.

"It is absolutely not a regulatory body. Certification of safety by the AI Safety Institute should be seen as part of building competitiveness. Domestic AI models have to compete with Google Gemini and OpenAI ChatGPT, which is not easy. In a situation where we are not technologically ahead, we need to put forward safety as our competitive edge. The AI Safety Institute helps domestic corporations enhance their AI competitiveness by leveraging safety. For foreign AI used in Korea, we will assess the impact on the domestic sphere. This can serve as basic data for future administration. Beyond developing AI safety evaluation technologies and helping formulate policies, we plan to provide AI safety consulting when resources allow."

─How should we prepare for the AI Basic Act taking effect in January?

"What sets the AI Basic Act apart from the European Union (EU) AI Act is its focus on promotion rather than regulation. The EU imposes fines of about 5% to 7% of total revenue if corporations violate AI safety, but the largest fines under the AI Basic Act are around 30 million won. It is a symbolic amount. The point is not imposing fines. The AI Basic Act also includes incentives, such as granting priority in public-sector procurement if an AI safety evaluation is conducted according to the guidelines."

─What are the AI safety evaluation criteria?

"AI safety is defined as protecting people's health, lives, and property from AI and enhancing social trust. We evaluate on 'fairness' regarding whether AI discriminates against people, 'trustworthiness' regarding whether it states facts, 'value alignment' regarding whether its values align with common sense when asked, for example, whose territory Dokdo belongs to, 'safety and security' against hacking attacks, 'dual use' regarding whether it answers about bomb-making or military weapons, and 'efficiency' such as agentic AI."

─What is the most dangerous aspect of AI?

"I see 'dual use,' such as adult content and weapons manufacturing, as the most dangerous. People say through ChatGPT that 'Generative AI has democratized knowledge,' but if it is used for adult content or weapons manufacturing, safety and even security can be threatened. Worrying that jobs will disappear due to AI is unfounded, and if we do not establish AI safety measures, we can expect organization- and national-level damage from dual use. That is why the United Kingdom recently changed the name of its AI Safety Institute to the 'AI Security Institute.'"

※ This article has been translated by AI.