SK Telecom logo. /Courtesy of SKT

SK Telecom said on the 7th that its elite team has released a technical report for "A.X K1 (adot X K1)," an ultra-large AI model with 519 billion (519B) parameters, on the open-source platform Hugging Face. The company said it completed Korea's first model above 500B in roughly four months of development and with limited GPU resources.

The elite team estimated the total feasible training volume with about 1,000 GPUs and, based on scaling laws, set the target model size at 519B. To maximize efficiency relative to the resources invested, the team mathematically derived and operated the optimal training compute, and used about 10 trillion (10T) tokens of data for training.
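The article does not disclose SKT's actual formulas, but the reasoning above can be illustrated with a common scaling-law heuristic: training compute in FLOPs is roughly 6 × N × D, where N is the number of parameters active per token and D the number of training tokens. The sketch below plugs in the article's figures (33B active parameters, about 10T tokens); it is a back-of-the-envelope approximation, not SKT's method.

```python
# Rough scaling-law sketch (Chinchilla-style heuristic, NOT SKT's actual design).
# Training compute C (FLOPs) ~ 6 * N * D for a transformer:
#   N = parameters active per token, D = training tokens.
def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs."""
    return 6.0 * n_params * n_tokens

# Figures from the article: the MoE model activates ~33B of its 519B
# parameters per token, trained on about 10 trillion (10T) tokens.
active_params = 33e9
tokens = 10e12
print(f"{train_flops(active_params, tokens):.2e} FLOPs")  # ~2e24 FLOPs
```

Such an estimate is how a team can work backward from a fixed GPU budget (here, about 1,000 GPUs) to a target model size and token count.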

The dataset spanned web, code, STEM, and reasoning content; the team parsed Korean-language PDF documents to create synthetic data and applied curriculum learning ordered by difficulty. SKT stressed that development proceeded without government support, relying solely on GPUs the company procured itself, and said it secured competitive performance even though the model is at least twice the size of other elite teams' models.

The company said performance is similar to or higher than that of overseas ultra-large open-source models of comparable scale. It scored 89.8 on the AIME25 math benchmark, surpassing the 685B model "DeepSeek-V3.1" (88.4). On LiveCodeBench, a real-time coding evaluation, it scored 75.8 in English and 73.1 in Korean, reaching 109% and 110% of DeepSeek-V3.1's 69.5 in English and 66.2 in Korean, respectively. The 357B model "GLM-4.6" was also included as a comparison point.
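The relative percentages quoted above are simply each score divided by the DeepSeek-V3.1 baseline, rounded to the nearest whole percent. A quick check of the article's arithmetic:

```python
# Verify the article's relative-score arithmetic for LiveCodeBench.
def relative_pct(score: float, baseline: float) -> int:
    """Score as a whole-number percentage of the baseline."""
    return round(score / baseline * 100)

print(relative_pct(75.8, 69.5))  # English: 109
print(relative_pct(73.1, 66.2))  # Korean: 110
```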

A.X K1 adopts a mixture-of-experts (MoE) architecture that selectively activates only 33B of its 519B parameters per token, securing training stability and efficiency. It also handles a long context of 128K tokens, which SK Telecom said lets it process about 100,000 words of Korean at once. SK Telecom plans to add multimodal capabilities within the year and scale up to the trillion-parameter level.
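The selective activation described above is the core of MoE: a small router scores every expert for each token and only the top-k experts actually run, so per-token compute tracks the active parameters (33B) rather than the total (519B). The sketch below shows generic top-k routing; the function names and k=2 are illustrative assumptions, not A.X K1's actual router.

```python
import math

# Illustrative MoE top-k routing (generic technique, not A.X K1's router).
def top_k_route(router_logits: list[float], k: int = 2):
    """Return (expert_index, weight) for the k highest-scoring experts,
    with weights renormalised to sum to 1 over the chosen experts."""
    m = max(router_logits)
    exp = [math.exp(x - m) for x in router_logits]       # stable softmax
    probs = [e / sum(exp) for e in exp]
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# One token, four experts: only experts 1 and 3 would execute.
print(top_k_route([0.1, 2.0, -1.0, 1.5], k=2))
```

Because each token touches only k experts, a 519B-parameter model can train and serve at roughly the cost of a 33B dense model per token, which is how the team kept the project feasible on limited GPUs.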

※ This article has been translated by AI.