Chairperson Song Kyung-hee of the Personal Information Protection Commission delivers remarks at an on-site roundtable to improve privacy policies in the generative AI sector, held at the Korea Press Center in Jung-gu, Seoul, on the afternoon of Mar. 4. /Courtesy of Personal Information Protection Commission

Calls have emerged for major Generative AI corporations to improve their privacy policies, which have been deemed inadequate.

The Personal Information Protection Commission said at the "roundtable for improving privacy policies in the Generative AI sector," held at the Korea Press Center in Jung District, Seoul, on the 4th, "According to last year's privacy policy evaluation results, many cases were found in which the adequacy, readability, and accessibility of privacy policies in the Generative AI sector fell short compared with other sectors."

Eleven major domestic and overseas Generative AI corporations, along with AI experts, attended the roundtable, including Google, Meta, Naver, Kakao, Microsoft, OpenAI, SK Telecom, LG Uplus, NC AI, Scatter Lab, and Wrtn Technologies.

Since 2024, the Personal Information Protection Commission has been conducting the "privacy policy evaluation," a system that evaluates policies established and disclosed by personal information handlers. Aimed at enhancing transparency and accountability in personal information processing, it targets flagship services that use new technologies such as AI and autonomous driving or that process large volumes of sensitive and personal information.

Chairperson Song Kyung-hee said, "As the information handled by AI expands widely beyond text to location and movement paths, voice, and video, the scope and methods of data processing have become more complex," adding, "In such a complex data environment, transparency that helps users anticipate how their personal information is processed becomes the core foundation for forming public trust, and the representative means of implementing that transparency is the privacy policy."

According to the Personal Information Protection Commission's evaluation last year of seven sectors—connected cars, Edtech, smart homes, Generative AI, telecommunications, reservation and customer management, and health management apps—the overall average score was 71 points, a significant rise from 57.9 points the previous year.

Relatively many cases of inadequate privacy policies were found in the Generative AI sector, the Personal Information Protection Commission said. Chairperson Song Kyung-hee said, "Representative examples include failing to state the legal basis for processing information entered into prompts (commands) and providing guidance on how to exercise data subject rights only in English."

According to the Personal Information Protection Commission, some services broadly listed the "personal information items processed," did not specifically disclose the "legal basis for processing," or expressed the "retention and use period of personal information" ambiguously. Cases were also found where, while providing personal information to third parties, abstract terms such as "partners" and "service providers" were used without clearly identifying the recipients, or where handling of complaints related to personal information was delayed. Some mobile apps required logging in or going through multiple steps to check the policy, leading to an assessment that improvements were needed in terms of accessibility.

In response, the Personal Information Protection Commission said it would help the rapidly spreading and advancing Generative AI services make their privacy policies more concrete and more tailored to users' perspectives.

Hwang Ji-eun, head of the Autonomous Protection Policy Division at the Personal Information Policy Bureau of the Personal Information Protection Commission, who released the policy evaluation results that day, said, "To resolve on-the-ground uncertainty driven by the spread of AI agents, we will prepare an AI guide that includes safe data processing standards and protective measures, and operate an 'AX innovation support' help desk that supports safe AI use by checking privacy risks from the service planning stage."

The attending corporations voiced their difficulties, saying that the technical characteristics of Generative AI make processing structures complex and that coordinating with global headquarters' policies is challenging. They also agreed on the need for clearer and easier-to-understand explanations to build user trust.

In particular, some suggested making it intuitively clear to users whether information entered into prompts is used for training, how long it is retained, and what opt-out procedures allow users to refuse such use.

Based on the discussions at this roundtable, the Personal Information Protection Commission plans to supplement the guidelines for drafting privacy policies and publish a revised version of the guidelines next month. It also plans to hold briefings so that corporations and institutions can fully understand the revised standards and apply them in the field.

Chairperson Song said, "When users can easily see how their information is used, trust in AI can also rise," adding, "We will continue to communicate with corporations and establish reasonable standards that can be applied in the field."

※ This article has been translated by AI.