An AI Basic Act implementation briefing takes place on the 24th at the National Information Society Agency (NIA) Seoul Office in Jung-gu, Seoul, hosted by the Ministry of Science and ICT./Courtesy of Shim Min-gwan

The enforcement decree of the Basic AI Act defines high-impact artificial intelligence (AI), which is subject to regulation, across 10 areas including energy and health care. The government plans to interpret the legal provisions on high-impact AI strictly and to impose only minimal regulation.

The Ministry of Science and ICT announced this on Dec. 24 at a briefing held at the National Information Society Agency (NIA) Seoul office in Jung-gu, Seoul, to prepare for the enforcement decree of the Basic AI Act. Korea plans to bring the Basic AI Act into force on Jan. 22 next year, becoming the first country in the world to implement a comprehensive AI law.

Lee Jin-su, director general for artificial intelligence policy at the Ministry of Science and ICT, said, "We have repeatedly emphasized that we will grant at least a one-year grace period to ensure minimal regulation," adding, "At least one year takes into account the European Union (EU) and overseas trends and the pace of global technological development. We have also left open the option of flexibly extending the grace period."

Those classified as high-impact AI operators face stringent regulations. Under Article 34 of the Basic AI Act, high-impact AI operators are subject to obligations including establishing risk management plans, explaining criteria for deriving results, human oversight and supervision, and documenting and retaining related records. For small and midsize enterprises or startups, being classified as high-impact AI operators can inevitably increase the burden. This is why the government has signaled minimal regulation by strictly interpreting who qualifies as a high-impact AI operator.

Kim Guk-hyeon, head of the Artificial Intelligence Safety and Trust Policy Division at the Ministry of Science and ICT, said, "The criteria for high-impact AI are not fixed, and we believe they should be reviewed later depending on technological progress and social trends and flows," adding, "We will make them more specific by reflecting the views of relevant ministries and industry, and we will collect opinions and share related information through the tentatively named Artificial Intelligence Safety and Trust Support Desk."

The Artificial Intelligence Safety and Trust Support Desk is set to be established as part of support measures by the Ministry of Science and ICT to reduce confusion after the Basic AI Act takes effect. Director Kim said, "Institutions that participated in drafting the legislation, such as NIA and the Telecommunications Technology Association (TTA), and legal experts will operate it in the form of a desk," adding, "We will respond to corporations' inquiries through a website and frequently asked questions (FAQ)."

Industry has long taken issue with the vague definition of high-impact AI ahead of the Basic AI Act's enforcement. Article 2 of the act defines high-impact AI as an AI system that is likely to have a significant impact on, or pose risks to, a person's life, physical safety, and fundamental rights. Through the enforcement decree, the government set criteria for cases in which such a significant or dangerous impact may arise in 10 specific areas, including energy, health care, transportation, and education. However, there is considerable criticism that it remains unclear how far a "significant impact" extends and what constitutes a "risk." This contrasts with the European Union's AI Act, which classifies AI in detail into four risk categories: minimal, limited, high, and unacceptable.

The Ministry of Science and ICT said it would reply within 30 days to requests to confirm whether an entity is a high-impact AI operator. Deputy Director Sim Ji-seop of the Artificial Intelligence Safety and Trust Policy Division at the ministry said, "If corporations ask the government to confirm whether their service is high-impact AI, the ministry will respond within 30 days at the latest," adding, "If the service or product is too complex to judge, the period can be extended by 30 days, but since extending to 60 days could burden industry, we will provide a detailed written explanation of the reasons for any extension." He also added, "We will guide them on what obligations they bear if they are classified as high-impact AI operators."

There was also mention that even if one is classified as a high-impact AI operator, the final authority for judgment lies with the court. Attorney Yeo Hyeon-dong of Yoon & Yang LLC said, "The Ministry of Science and ICT is obligated to answer whether an entity is a high-impact AI operator, but in the event of a dispute, the final judgment must be made by the court."

In addition, under Article 32 of the Basic AI Act, the Ministry of Science and ICT narrowed the scope of AI subject to safety assurance obligations to AI trained with a cumulative compute of at least 10^26 FLOPs. No AI foundation model in Korea is known to have been trained at this scale of compute, so domestic corporations will in effect fall outside the scope of the safety assurance obligations. However, the industry does not rule out the possibility that domestic corporations could later be brought within scope through revisions to the enforcement decree, because, as the DeepSeek case showed, technological advances can improve the performance of small and midsize AI models trained with far less compute.
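To put the 10^26 FLOPs threshold in perspective, the sketch below compares it against a rough training-compute estimate. The "6 × parameters × training tokens" heuristic is a widely used approximation for dense transformer training compute, not a method prescribed by the act or the enforcement decree, and the model sizes used here are hypothetical examples.

```python
# Back-of-the-envelope check against the enforcement decree's cumulative
# compute threshold. The 6*N*D heuristic and the example model sizes are
# illustrative assumptions, not part of the law.

THRESHOLD_FLOPS = 1e26  # cumulative compute threshold under Article 32


def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training compute for a dense transformer: ~6 FLOPs per
    parameter per training token."""
    return 6 * params * tokens


def subject_to_safety_obligations(params: float, tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    return estimated_training_flops(params, tokens) >= THRESHOLD_FLOPS


# Hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.1e}")  # about 6.3e24, well below 1e26
print(subject_to_safety_obligations(70e9, 15e12))  # False
```

Under this heuristic, even a fairly large model falls one to two orders of magnitude short of the threshold, which is consistent with the article's point that no Korean foundation model currently reaches this scale.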

Deputy Director Sim Ji-seop said, "We are not considering relaxing the cumulative compute threshold or expanding the scope at this time," adding, "However, if another reasonable assessment method besides cumulative compute is established as a global standard, we will consider reflecting it in our law."
