Image = ChatGPT DALL·E 3

With presidential primary candidates pledging large-scale investments in artificial intelligence (AI), the "AI basic law," which aims to foster the domestic AI industry while minimizing the side effects of AI's spread, is drawing attention. The industry has warned that the law's vague regulatory standards could hold back the growth of the AI industry. Rep. Hwang Jeong-a of the Democratic Party of Korea recently proposed a bill to defer the regulations newly established by the AI basic law for three years, and tensions are expected ahead of the law's implementation next year.

Industry sources said on the 18th that the AI basic law (the Framework Act on the Development of Artificial Intelligence and the Establishment of a Trust-based Environment) passed the National Assembly plenary session in December last year and will take full effect on Jan. 22 next year. It is the world's second AI-related statute, enacted after the European Union's AI Act. Although the EU enacted its law first, Korea is expected to be the first in the world to bring such a law fully into force: the EU's AI law will not take full effect until August next year.

The AI basic law rests on two pillars: fostering the industry and building a trust framework (risk management). It divides AI into "high-impact AI," which has a major effect on life, safety and fundamental rights, and "general AI," and obligates high-impact AI operators to give prior notice and undergo inspection and certification. The law also requires operators to label content created with generative AI, such as deepfakes, by indicating its origin (e.g., a watermark).

The amendment proposed by Rep. Hwang would defer these regulations until January 2029, three years after the law takes effect. Hwang said, "As global competition for AI hegemony intensifies, there are concerns that immature regulatory policies could cause us to miss the golden window to become an AI powerhouse."

So far, the AI industry has urged that ambiguous or ill-fitting provisions of the AI basic law be supplemented to reflect the realities of the domestic AI ecosystem. The Ministry of Science and ICT (MSIT) formed a task force earlier this year to work on the subordinate statutes of the AI basic law and is currently drafting the enforcement decrees. With little time left before the law takes effect, concerns have been raised about whether detailed, workable enforcement decrees can be ready in time.

Graphic = Son Min-gyun

The key issues the industry is focusing on for the AI basic law's enforcement decree are 1) the definition of high-impact AI, 2) mandatory watermarking and 3) the government's investigative powers.

The most common criticism is that the criteria for high-impact AI are overly abstract and broad. The law classifies AI systems that "may have a serious impact on bodily safety and fundamental rights or pose a risk" as high-impact, with a focus on sectors such as energy, health care, transportation and lending, but it is unclear exactly which AI systems fall into the category.

The Business Software Alliance (BSA), whose members include major IT companies such as Microsoft, OpenAI and Amazon Web Services (AWS), argued in an opinion submitted to the MSIT last month that high-impact AI should be classified by how a system is used rather than by system type or industry sector. For example, if AI is used only to calculate credit scores, it would be hard to deem it "high-impact" even though the field is lending.

The regulation requiring disclosure of AI-generated content is also controversial. AI is currently used mostly as a simple assistive tool in the production of movies, webtoons and animation, for example to generate background images. The industry says that requiring a watermark in every such case could lower content quality and hamper creative work.

Others argue that excessive fact-finding investigations, inspections and certifications should be avoided, since government scrutiny of operators' high-impact AI businesses and products could bring side effects such as leaks of personal and sensitive information and new cybersecurity threats.

A representative of the startup group Startup Alliance said, "The startup industry is worried that the task force working on the subordinate statutes of the AI basic law includes neither corporate personnel who actually operate AI models and services nor technical experts from the field, so voices from the ground may not be sufficiently reflected."

Choi Byung-ho, a professor at Korea University's AI Research Institute, said, "The AI industry is in a phase of rapidly accelerating expansion, and there is a great risk that the law will fall behind the pace of technological development." He added, "Subordinate statutes should be prepared so that the AI basic law can be updated in a timely manner, in keeping with its purpose of fostering the industry."

The MSIT said it will implement the AI basic law in January next year as planned after collecting industry opinions. It intends to finalize draft enforcement decrees as early as June and promulgate the decrees, containing the detailed rules, in July or August. An MSIT official said, "We are gathering industry opinions as widely as possible and preparing enforcement decrees that minimize regulation and emphasize promotion."

※ This article has been translated by AI.