Startup Alliance said on the 7th that it held the "AI Basic Act transparency and accountability roundtable" at the National Assembly Members' Office Building on the 6th with Rep. Hwang Jeong-a of the Democratic Party of Korea. Startup Alliance co-hosted the event with the Korea Startup Forum and Kodit Global Policy Empirical Research Institute.

The roundtable stemmed from concerns that obligations the industry will actually have to bear under the AI Basic Act, set to take effect on the 22nd, are unclear. Participants focused their discussion on a direction for institutional design that is predictable and works effectively.

The "AI Basic Act transparency and accountability roundtable" held at the National Assembly Members' Office Building on the 6th. /Courtesy of Startup Alliance

Rep. Hwang Jeong-a said, "The government is striving to include minimal regulations, but concerns are still being raised on the ground," and added, "As the law will be implemented for the first time in the world, we will continue to supplement the system after implementation by reflecting voices from the field."

Lim Jung-wook, head of Startup Alliance, said, "Global competition in the AI industry is fierce, and the pace of technological change is fast, so when designing a new regulatory framework, we need to consider not only speed but also effectiveness, predictability, and international consistency," and added, "I hope today's on-the-ground views will be reflected in the process of refining the enforcement decree and guidelines so that the AI Basic Act can take root not as yet another burden on corporations but as a foundation for trust and innovation."

Choi Seong-jin, head of the Startup Growth Research Institute, who delivered the keynote presentation, agreed with the AI Basic Act's goal of ensuring transparency and accountability, but said the draft enforcement decree lacks the specificity and predictability needed to implement those principles in industry. Choi said, "Standards and procedures for applying key provisions such as the designation of high-impact AI, labeling obligations for generative AI outputs, and the establishment of risk management systems are not clear," and expressed concern that "this could spread unnecessary regulatory risk across the industry."

Choi, regarding the designation of high-impact AI in particular, said, "Because judgments can vary depending on the context of use and scope of impact rather than simply the type of technology, standards that businesses can anticipate on their own must be prepared before imposing legal obligations." He added, "If the system operates with ambiguity about what is covered and relies on ex post investigations or enforcement measures, startups may end up abandoning related services as a risk-avoidance tactic."

Choi also said, "The obligation to label generative AI outputs lacks specific standards on when and how to indicate to users whether an output is generative," adding, "For unstructured content such as voice, images, and video, labeling may be technically infeasible or could harm the user experience. Rather than uniform and rigid labeling obligations, the system should be designed flexibly according to risk and intended use."

In the ensuing general discussion, Jeong Joo-yeon, senior expert at Startup Alliance, said, "The current draft enforcement decree, which assesses safety based on the cumulative compute used by an AI system, does not reflect actual service architectures or technological realities." Jeong added, "Startups that use external APIs or open-source models could be forced to shoulder responsibility even when they cannot measure or control that compute," and said, "It is more in line with technological reality to set safety standards at the model level, not the AI system level."

Lee Ho-young, head of Toonsquare, an AI authoring-tool corporation, said, "Given the on-the-ground reality where the boundaries among users, service providers, and developers are not clear, flexible application of the AI Basic Act is needed." Lee then expressed concern that "in situations where standards and criteria are unclear, corporations may hold back through self-censorship or move operations overseas."

Choi Woo-seok, Director of the Artificial Intelligence Safety and Trust Support Division at the Ministry of Science and ICT, said, "Because AI is inherently a highly uncertain industry, we will operate the system with grace periods and guidance to minimize excessive burdens on early-stage corporations and will pair that with ample communication and support." Choi added, "For matters that are difficult to address by revising laws and decrees, we will work to provide more specific interpretive standards through the enforcement decree and guidelines."
