The National Assembly passes a partial amendment to the Framework Act on the Promotion of Artificial Intelligence and the Establishment of a Trust-Based Environment at a plenary session last month. /Courtesy of Yonhap News

The AI framework act, which aims both to regulate and to promote the artificial intelligence (AI) industry, took full effect on the 22nd. From that day, domestic AI operators must label content created with generative AI services such as ChatGPT and Gemini with a watermark indicating that AI was used. The government decided to defer penalty enforcement for one year to minimize confusion in the field, but corporations worry that the world's first fully implemented AI framework act could hinder the development of the domestic AI industry. They are now working to gauge the act's impact and to craft countermeasures against potential side effects.

The AI framework act was enacted on Jan. 21 last year to foster the domestic AI industry and build a foundation for its safe use. It is the world's second comprehensive AI law, covering both support for and sanctions on the AI industry, after the European Union (EU)'s AI Act, but Korea is the first country to implement such a law in full. The Ministry of Science and ICT emphasized that "about 80% of the provisions focus on industrial promotion," but corporations are paying attention to the remaining 20%, which impose regulations.

The regulations set out in the AI framework act focus on preventing dangerous uses of AI. The most prominent is the transparency obligation to mark AI-generated outputs with watermarks. Deepfake-style outputs that are hard to distinguish from reality must carry a clearly visible watermark notifying viewers that AI was used. Content that is easy to identify as artificial, such as animation or webtoons, may instead carry invisible digital watermarks. Creative works such as films, dramas, fine art, and literature may disclose the use of AI in a way that does not interfere with immersion in the work.

The watermark labeling requirement applies to operators, not individuals, so users who incorporate AI-generated outputs into their own content or post them on social media are not subject to regulation. Authors who draw webtoons with AI and influencers who produce and upload YouTube Shorts or Instagram Reels with AI assistance are classified as users rather than operators, so the obligation does not apply to them.

The AI framework act also designates "high-impact AI" as subject to regulation. The government defines high-impact AI as "AI that has a significant effect on the protection of human life, safety, and fundamental rights" and has presented 10 domains, including energy, healthcare, hiring, nuclear power, criminal investigation, transportation, and education. Vehicles at Level 4 or above, the fully autonomous driving stage, are cited as a representative example. Whether a person is involved in the final decision is also a criterion: even if AI recommends candidates during hiring, if the HR team makes the final call, the system is deemed controllable and is excluded from high-impact AI.

An AI operator that violates obligations such as giving advance notice of AI use faces fines of up to 30 million won. However, under the policy of running a guidance period of more than one year, fact-finding investigations and the imposition of fines will also be deferred.

Domestic corporations are moving quickly to respond to the enforcement of the AI framework act. Although a grace period is in place, they say it is better to prepare preemptively to minimize confusion.

Kakao, which distributes AI-produced content, will revise its service terms and apply them starting Feb. 5. The new terms add a clause stating, "Services provided by the company may include services operated based on AI, and when providing outputs generated by AI, notice and labeling will be made in accordance with relevant laws." Kakao is inserting a visible watermark reading "Kanana" in all videos generated with the AI agent Kanana or with "AI templates."

Domestic game companies plan to respond during the one-year grace period in line with the Ministry of Science and ICT's detailed guidelines. Some, including Krafton, already disclose the use of AI in games offered in the global market as part of their response to the EU's phased rollout of its AI Act. A company official said, "Titles such as 'Battlegrounds' and 'inZOI' indicate whether AI is used through the Steam platform."

However, startup officials and some IT corporations still say the AI framework act is "riddled with contradictions and confusing." A representative concern is that the criteria for high-impact AI are ambiguous, making it difficult for corporations to judge on their own whether their systems qualify. They say the risk is high because, if a legal dispute arises later, whether a system constitutes high-impact AI could hinge on a court's judgment.

In a survey of 101 domestic AI startups conducted late last year by Startup Alliance, 98% said they had "not established a response system for the enforcement of the AI framework act." Choi Ji-young, executive director of the Korea Startup Forum, said, "Startups often lack the personnel and resources to respond to regulatory changes," adding, "For the AI framework act to work properly in the field, we need a reasonable system, clear standards and interpretations, and support infrastructure that lowers compliance costs."

There is also criticism that the AI framework act offers limited protection against deepfake crimes. Deepfake generation and distribution mostly take place on overseas platforms, as with xAI's "Grok," which recently stirred controversy for generating deepfakes of children, making it difficult to sanction overseas AI corporations. The government requires overseas operators that meet any one of three thresholds (100 billion won in domestic revenue, 1 trillion won in global revenue, or more than 1 million average daily users) to designate a domestic agent, but only a handful of corporations, including Google and OpenAI, clear those thresholds.

Experts advised the government to communicate actively with the industry during the regulatory grace period to establish reasonable guidelines. Choi Byung-ho, a professor at the Human Inspired AI Research Institute at Korea University, said, "The government should gather as much input from industry as possible over the next year and set detailed guidelines," adding, "Because the law cannot keep up with the pace of change in the AI industry, even after the guidance period ends, the government and stakeholders will need to meet periodically to revise and implement it."
