Leading artificial intelligence (AI) corporations, including Google, Meta, Microsoft, and OpenAI, gathered in Seoul to discuss how to use open-source AI safely.
The Personal Information Protection Commission held "Open-Source Day" on the 15th as a side event ahead of the opening of the Global Privacy Assembly (GPA). More than 120 participants joined, including global open-source model and solution corporations such as Google, Meta, Microsoft, OpenAI, SelectStar, and Aim Intelligence; domestic AI corporations and researchers; and overseas supervisory bodies.
According to a brief survey conducted by the Personal Information Protection Commission before the event, about 62% of the 70 developers, researchers, and corporate officials who responded said they had experience adopting and using open source, and 77% said they considered safety when fine-tuning open-source models.
Global open-source AI corporations participating in the event introduced their open-source ecosystems and real-world application experience.
Google introduced its platform (Vertex AI) for cost-efficient operation of open-source models and shared ways to use tools for ensuring reliability and safety, such as LLM quality evaluation tools, prompt optimization features, and safety enhancement tools.
Aim Intelligence shared real-world experiences of the diverse safety and information security challenges that corporations face on the ground when operating AI services for customers and using in-house AI models. The company adapted Meta's open-source AI filtering model Llama Guard to Korean conditions and won the "Llama Impact Innovation Awards."
Microsoft presented a customer case in which agentic AI was built on its Azure AI Foundry platform, and highlighted the potential of open-source models and tools for building agentic AI, which is drawing attention as a next-generation paradigm.
Naver introduced tools for safe open-source use—including public datasets, benchmarks, and an AI safety framework—along with its open-source model HyperCLOVA X, and shared its efforts to date that have contributed to the spread of the domestic open-source ecosystem.
OpenAI presented the economic and social value of its newly released open-source model, and raised challenges facing the spread of open source, such as concerns over safety and accountability and the need for discussion at a global level.
In the live Q&A session that followed, attendees shared on-the-ground experiences and solutions, including difficulties encountered when adopting open source and privacy-related concerns.