Kakao logo. /Courtesy of Kakao

Kakao announced on the 24th that it will open-source its lightweight multimodal model 'Kanana-1.5-v-3b' and its MoE (Mixture of Experts) model 'Kanana-1.5-15.7b-a3b', both built on its own technology. 'Kanana-1.5-v-3b' is a multimodal language model that processes images and text simultaneously, and it performs strongly at accurately understanding and answering questions posed through images and text.

'Kanana-1.5-v-3b' is a lightweight multimodal model that handles image information as well as text, and it has been recognized for its performance and cost efficiency compared with existing global multimodal language models. With 3 billion parameters, it offers strong image understanding and instruction-following capabilities in both Korean and English, delivering competitive performance at a relatively low cost. Kakao says the model contributes to both cost reduction and performance improvement in AI.

Kakao also unveiled the MoE model 'Kanana-1.5-15.7b-a3b'. The model is designed to activate only 3 billion of its 15.7 billion parameters during inference, using compute far more efficiently than dense models of similar size and enabling high-quality service at low cost. In particular, the MoE architecture is expected to significantly reduce AI infrastructure costs, offering practical support for AI research and development.
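The idea behind activating only a fraction of the parameters can be illustrated with a toy sketch of top-k expert routing. All sizes, weights, and function names below are illustrative assumptions for exposition, not Kanana's actual architecture or configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer: many experts exist, but each token is
# routed to only a few of them, so most parameters stay inactive per step.
d_model = 16      # hidden size (illustrative)
n_experts = 8     # total experts (illustrative)
top_k = 2         # experts activated per token

# One small feed-forward matrix per expert.
experts = rng.standard_normal((n_experts, d_model, d_model)) * 0.02
# Router: scores every expert for a given token vector.
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route a single token vector through its top-k experts only."""
    scores = x @ router                      # (n_experts,)
    chosen = np.argsort(scores)[-top_k:]     # indices of the selected experts
    # Softmax over the selected experts' scores only.
    w = np.exp(scores[chosen] - scores[chosen].max())
    w /= w.sum()
    # Weighted sum of the chosen experts' outputs; the other experts
    # are never evaluated, which is where the compute savings come from.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen)), chosen

x = rng.standard_normal(d_model)
y, used = moe_forward(x)
print(f"experts evaluated per token: {top_k} of {n_experts}")
```

In the same spirit, a 15.7B-parameter MoE model that activates 3B parameters per token pays roughly the inference cost of a 3B dense model while retaining a much larger total capacity.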

Kakao plans to let researchers and developers freely use AI technology through these models and to help strengthen the self-sufficiency and competitiveness of the domestic AI ecosystem. By releasing them under the Apache 2.0 license, which permits commercial use, it also enables startups and researchers to experiment with a wide range of services built on the models.

Kakao intends to continue advancing its AI models based on its own technology and take on challenges for the development of globally competitive super-large models.

※ This article has been translated by AI.