LG AI Research announced on the 30th that it had unveiled the performance of "K-EXAONE" at the first presentation for the independent AI foundation model project, hosted by the Ministry of Science and ICT at Coex in Gangnam District, Seoul.
LG AI Research said it applied LG's differentiated technology to K-EXAONE to achieve both efficiency and performance: compared with EXAONE 4.0, the model improves inference efficiency while cutting memory requirements and computation by 70%.
LG AI Research said it designed the model to run on A100-class GPU environments rather than expensive infrastructure, significantly lowering the cost burden of building and operating AI systems so that startups and small and midsize enterprises can more easily adopt frontier-grade AI models.
LG AI Research said at the presentation that it had set Chinese firm Alibaba's "Qwen3 235B" as the first-stage performance target model. K-EXAONE achieved an average score of 72.03 across the 13 first-stage evaluation benchmarks, reaching 104% of the performance of Qwen3 235B (69.37). It also reached 103% of the performance of GPT-OSS 120B (69.79), the latest open-weights model from U.S.-based OpenAI.
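The relative-performance figures cited above follow from simple ratios of the reported benchmark averages. A minimal check, using only the scores stated in the article:

```python
# Average scores over the 13 first-stage evaluation benchmarks, as reported.
scores = {
    "K-EXAONE": 72.03,
    "Qwen3 235B": 69.37,    # first-stage target model (Alibaba)
    "GPT-OSS 120B": 69.79,  # OpenAI open-weights model
}

# K-EXAONE's performance relative to each comparison model,
# expressed as a rounded percentage.
for name in ("Qwen3 235B", "GPT-OSS 120B"):
    pct = round(scores["K-EXAONE"] / scores[name] * 100)
    print(f"K-EXAONE vs {name}: {pct}%")
# → K-EXAONE vs Qwen3 235B: 104%
# → K-EXAONE vs GPT-OSS 120B: 103%
```

This confirms the article's 104% and 103% figures are the rounded score ratios.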
An LG AI Research official said, "K-EXAONE has achieved the ambitious goal of delivering performance at 100% or more of the latest global AI models," adding, "Based on LG's differentiated technology, we will continue to advance K-EXAONE's performance and help strengthen national competitiveness."