Naver Cloud announced on the 22nd that it has released 'HyperCLOVA X SEED 14B Think,' a lightweight reasoning model developed with its own technology, as open source that is free for commercial use. The model was built on original technology rather than adapted from overseas open source models, and it is expected to further strengthen Korea's AI capabilities.
'HyperCLOVA X SEED 14B Think' is a lightweight version of the recently announced reasoning model 'HyperCLOVA X THINK,' designed so that it can be deployed in services stably and cost-efficiently. Its key technique is pruning: parameters of low importance are removed while preserving as much of the original model's knowledge as possible, and the knowledge lost during pruning is transferred to the smaller model, which sharply reduces training costs.
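For readers unfamiliar with the compression approach described above, the sketch below illustrates the two general techniques the article refers to, magnitude-based pruning and knowledge distillation, using toy PyTorch models. All model sizes, layer shapes, and hyperparameters here are illustrative assumptions and are not details of how HyperCLOVA X SEED 14B Think was actually built.

```python
# Minimal sketch, assuming a generic prune-then-distill pipeline:
# (1) zero out low-magnitude weights in a large "teacher" network,
# (2) train a smaller "student" to match the teacher's outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy teacher/student MLPs standing in for a large LLM and its compressed version.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 32))
student = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 32))

def prune_by_magnitude(model: nn.Module, sparsity: float = 0.3) -> None:
    """Zero out the smallest-magnitude weights (a simple proxy for 'low importance')."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear):
                threshold = torch.quantile(module.weight.abs(), sparsity)
                mask = module.weight.abs() >= threshold
                module.weight.mul_(mask)

prune_by_magnitude(teacher, sparsity=0.3)

# Knowledge distillation: the student learns to match the teacher's softened
# output distribution, one standard way to transfer knowledge lost to pruning.
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-3)
temperature = 2.0

for step in range(100):              # stand-in for a real training loop
    x = torch.randn(64, 128)         # synthetic inputs, for illustration only
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, distillation of a large language model would match token-level logits over real text rather than random vectors, but the loss structure is the same as in this sketch.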
The model has 14 billion parameters and was trained with fewer GPU hours than a global open source model with 500 million parameters. Its training cost came to only about 1% of that of existing models, and in evaluations of Korean language and culture, coding, and mathematics it outperformed models of the same size or larger.
Naver Cloud has released the model as open source so that it can be used not only for research but also in business, and it is expected to serve as a foundational technology for AI agents across various industries. Naver Cloud's head of Hyperscale AI, Sang-nak Ho, said, "We will continue to improve generative AI models with our own technology and lead the growth of the Korean AI ecosystem based on high performance and efficient training strategies."
Additionally, the three lightweight HyperCLOVA X models Naver Cloud announced in April surpassed one million cumulative downloads by July, demonstrating their usability and popularity. Building on them, more than 50 derivative models have been created and shared, and Korean-language on-device AI services are being launched, rapidly expanding the HyperCLOVA X open source ecosystem.