Select Star research team that contributes to the CAGE paper. From left, Researcher Lim Yong-taek and Kim Min-woo, AI Safety Head of Team./Courtesy of Select Star

Selectstar, a corporations specializing in AI data and reliability evaluation, said on the 19th that it was recognized for its technology at the world's most prestigious conference with its self-developed AI safety verification technology.

The Selectstar AI Safe team's research paper, "CAGE: A Framework for Culturally Adaptive Red-Teaming Benchmark Generation," was accepted to the main conference of ICLR 2026, to be held in Brazil in Apr.

ICLR is one of the most influential international conferences in AI. By Google Scholar standards, it is considered a top-tier conference in AI and Machine Learning. This year, only about 28% of roughly 19,000 submissions were accepted. In particular, the company said Selectstar's paper was selected for the main track, which is treated as the most important, earning global recognition for its originality and technical completeness. From planning to implementation, verification, and publication, the study was carried out solely by Selectstar's in-house staff, without assistance from external faculty or research institutions.

The core technology of this paper is a framework that automatically generates red team data to verify AI safety by reflecting each country's cultural and legal context. It is a technology that automatically generates test questions by language and cultural sphere to check whether AI responds safely when asked dangerous questions.

Conventional AI safety verification mainly used datasets developed in the English-speaking world, translated literally or freely. In such cases, it may fail to sufficiently reflect dangerous situations likely to occur in each country and miss AI's weaknesses. To address this, the Selectstar research team proposed the concept of a "Semantic Mold" and generated localized attack prompts that reflect a country's cultural characteristics. As a result, attack scenarios generated through CAGE showed a level of naturalness similar to those crafted by humans and delivered outstanding performance in terms of "attack success rate," penetrating AI models' defenses to uncover latent risks. In practice, CAGE showed excellent performance even in languages with limited data, such as Cambodian (Khmer).

The paper also unveiled KoRSET, a Korea-style safety benchmark created by applying CAGE to Korean circumstances. KoRSET is far more effective at identifying AI model vulnerabilities than datasets built through simple translation and proved to be optimized for safety verification based on Korean culture.

Kim Min-woo, Selectstar AI Safety Head of Team and corresponding author of the paper, said, "This ICLR acceptance is evidence that Selectstar has gone beyond being a simple data-building corporations to an AI technology corporations with unrivaled original technology," adding, "CAGE technology is already being applied to large corporations' AI projects to check model vulnerabilities and improve operational efficiency." Co-author researcher Lim Yong-taek said, "Beyond performance competition, 'safety' is now the core competitive edge," adding, "We will help set global standards with safety evaluation technology of a quality that can be used on the spot."

Meanwhile, based on this research achievement, Selectstar plans to expand its reliability evaluation solutions into various industries that require high levels of safety, including finance and the public sector. The paper is scheduled to be released in Mar. through the open-source platform Arxiv.

※ This article has been translated by AI. Share your feedback here.