Kakao has upgraded its in-house next-generation language model "Kanana-2" and released four additional models as open source on Hugging Face.
The update focuses on practicality: its high-performance, high-efficiency design lets small and midsize businesses and academic researchers use capable AI without heavy cost, and the models run smoothly on GPUs in the class of the Nvidia A100.
"Kanana-2" significantly improved compute efficiency through a Mixture of Experts (MoE) architecture, and by adding "mid-training" during the training phase, it secured new reasoning abilities without losing prior knowledge. Kakao plans to use this model to increase its contribution to the AI research ecosystem and promote AI adoption among domestic corporations.
In particular, the model is optimized for agentic AI: it accurately interprets complex user instructions and can select tools on its own. Kakao is also developing a 155B-parameter large model and plans to unveil a more advanced AI targeting top-tier global performance.