LG AI Research has unveiled K-EXAONE, an artificial intelligence (AI) model that ranks among the world's best. With the global top 10 AI models dominated by China (six) and the United States (three), K-EXAONE was the only entry from a Korean company to make the list.
On the 11th, LG AI Research said K-EXAONE ranked first in 10 of the 13 benchmark tests that make up the first-round evaluation criteria for the government's independent AI foundation model project. It also posted an average score of 72, showing the best performance among models developed by the five elite teams.
In the Intelligence Index assessment by Artificial Analysis, a global AI performance evaluation agency, K-EXAONE scored 32 points, ranking seventh worldwide and first in Korea among open-weight models, whose weights are publicly disclosed.
K-EXAONE also climbed to No. 2 in the model trend rankings on Hugging Face, a global open-source AI platform, immediately after its open-weight release.
It was then listed among the "Notable AI Models" by Epoch AI, a U.S. nonprofit AI research institute. Starting with "EXAONE 3.5" in 2024, LG AI Research has had five models included through last year, the most among Korean companies, including "EXAONE Deep," "EXAONE Path 2.0," and "EXAONE 4.0."
LG AI Research refined "Hybrid Attention," a core technology validated in EXAONE 4.0, and applied it to "K-EXAONE." Attention functions like a brain that determines which information to focus on when an AI model processes massive amounts of data.
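LG has not published the internals of its refined Hybrid Attention, but as a general point of reference, the attention mechanism itself can be sketched as scaled dot-product attention, in which each token's query is compared against all keys to decide which values to focus on; hybrid designs typically mix such full attention layers with cheaper local or linear variants. The sketch below is a minimal, generic illustration, not LG's implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic attention: weigh value vectors by how well each query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)   # similarity between queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: "where to focus"
    return weights @ V                                  # blend of values, weighted by focus

# Toy example: a batch of 1 sequence, 4 tokens, 8-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(1, 4, 8))
K = rng.normal(size=(1, 4, 8))
V = rng.normal(size=(1, 4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (1, 4, 8)
```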
LG AI Research expanded the tokenizer vocabulary to 150,000 tokens and, among other tokenizer upgrades, applied a method that merges frequently used word combinations into single tokens, enabling the model to hold and process documents 1.3 times longer than previous models. A tokenizer is a technology that splits sentences into tokens, the units an AI model understands.
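Merging frequent word combinations into single tokens is the core idea behind byte-pair-encoding (BPE) style tokenizers: fewer tokens per document means more text fits into the same context window. The snippet below is a minimal, generic sketch of one such merge step and assumes nothing about LG's actual tokenizer.

```python
from collections import Counter

def merge_most_frequent_pair(corpus_tokens):
    """One BPE-style merge step: fuse the most frequent adjacent token pair into a single token."""
    pair_counts = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    if not pair_counts:
        return corpus_tokens, None
    (a, b), _ = pair_counts.most_common(1)[0]
    merged, i = [], 0
    while i < len(corpus_tokens):
        if i + 1 < len(corpus_tokens) and (corpus_tokens[i], corpus_tokens[i + 1]) == (a, b):
            merged.append(a + b)              # the frequent pair becomes one vocabulary entry
            i += 2
        else:
            merged.append(corpus_tokens[i])
            i += 1
    return merged, (a, b)

tokens = ["art", "ificial", " intelligence", " ", "art", "ificial", " life"]
merged, pair = merge_most_frequent_pair(tokens)
print(pair)    # ('art', 'ificial') -- the most frequent adjacent pair
print(merged)  # 5 tokens now cover text that took 7, so longer documents fit the same context
```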
An LG AI Research official said, "K-EXAONE can run even in A100-class GPU environments rather than on high-priced infrastructure, thanks to a model design that boosts efficiency while lowering cost," adding, "By enabling companies with limited infrastructure resources to adopt and use frontier-grade AI models, we aim to broaden the base of the domestic AI ecosystem."
LG AI Research designed the training process so the AI model does not stop at simply memorizing data but learns the logical steps to solve problems.
In the pretraining phase, LG AI Research used Thinking Trajectory data, which teaches the model which reasoning process to follow to solve a problem rather than handing it only the final answer.
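The exact format of LG's Thinking Trajectory data has not been disclosed; the example below is a purely hypothetical illustration of the general idea, with field names that are assumptions rather than LG's schema.

```python
# Hypothetical illustration of a "thinking trajectory" training example; the exact
# format LG AI Research uses is not public, so the field names here are assumptions.
thinking_trajectory_sample = {
    "problem": "A train travels 180 km in 2 hours. How far does it go in 5 hours at the same speed?",
    "trajectory": [
        "Compute the speed: 180 km / 2 h = 90 km/h.",
        "Multiply the speed by the new duration: 90 km/h * 5 h = 450 km.",
    ],
    "answer": "450 km",
}

# During pretraining, the model would see the reasoning steps as part of the text it
# learns to predict, not just the (problem, answer) pair.
training_text = (
    thinking_trajectory_sample["problem"]
    + "\n"
    + "\n".join(thinking_trajectory_sample["trajectory"])
    + "\nAnswer: "
    + thinking_trajectory_sample["answer"]
)
print(training_text)
```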
In the post-training process, LG AI Research applied its own technologies, including AGAPO, a reinforcement learning algorithm that extracts lessons even from wrong answers instead of discarding them as conventional methods do, and GrouPER, a preference learning algorithm that compares multiple responses to teach the more natural tone humans favor.
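AGAPO and GrouPER are LG's proprietary algorithms, and their details are not described in the article. As a rough public analogue, group-based reinforcement learning methods such as GRPO score several sampled responses per prompt and learn from their relative rewards, so a wrong answer below the group average still supplies a (negative) training signal rather than being thrown away. The sketch below shows only that generic group-relative idea, not AGAPO or GrouPER.

```python
import numpy as np

def group_relative_advantages(rewards):
    """Normalize each sampled response's reward against its group's mean and std.
    Responses below the mean get negative advantages, so wrong answers still push
    the policy away from bad reasoning instead of being discarded outright."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# Four responses sampled for one prompt: two wrong (0.0), one partial (0.5), one correct (1.0)
print(group_relative_advantages([0.0, 0.0, 0.5, 1.0]))
# Approximately [-0.90, -0.90, 0.30, 1.51]: every response contributes a gradient signal
```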
LG AI Research is conducting data compliance evaluations for all training data, including identifying and excluding data with copyright issues in advance.
Through its own AI ethics committee, LG AI Research established an AI risk classification system covering universal human values, social safety, Korea's particularities, and response to future risks, and also tested the safety of the AI model.
K-EXAONE scored an average of 97.83 across four institutional sectors on "KGC-SAFETY," a metric LG AI Research developed to assess Korea's particularities. That is higher than the GPT-OSS 120B model (92.48) by U.S.-based OpenAI and the Qwen3 235B model (66.15) by China's Alibaba.
Choi Jung-gyu, head of the Agentic AI Group at LG AI Research, said, "K-EXAONE shows that we can compete on an equal footing with global large-scale models through independent technical design, even within resource constraints," adding, "With the confidence that we are developing Korea's representative AI, we will focus on research and development to build a model that contributes to the advancement of the global AI ecosystem, not just Korea's."