Baek Joon-ho, CEO of FuriosaAI./Courtesy of FuriosaAI

Artificial intelligence (AI) semiconductor company FuriosaAI announced on the 22nd that LG's large language model (LLM) 'EXAONE' now runs on its second-generation AI inference accelerator 'Renegade (RNGD).'

FuriosaAI plans to launch an enterprise EXAONE solution based on Renegade.

Previously, FuriosaAI and the LG AI Research Institute deployed Renegade in a pilot environment for the EXAONE 3.5 model and ran tests for about eight months. The results showed that Renegade met LG's requirements while delivering 2.25 times better performance per watt than existing graphics processing units (GPUs).

In other words, Renegade achieved the specifications required for large-scale generative AI services while addressing the excessive power consumption of GPUs.

Jeon Gi-jeong, product unit head at the LG AI Research Institute, said, "After reviewing various GPUs and neural processing units (NPUs), we determined that Renegade was the most suitable and carried out this verification. We rate Renegade highly for its excellent absolute performance, its drastic reduction in total cost of ownership (TCO), and how easy it makes the model support process."

FuriosaAI is also preparing support for the newly released EXAONE 4.0, the globally competitive successor to EXAONE 3.5. The company plans to continuously strengthen its inference optimization technology and software, gradually replacing the existing GPU-based enterprise AI ecosystem with its own NPUs. The two companies are also expected to collaborate on an on-premises turnkey 'EXAONE AI Solution' for enterprises based on Renegade.

Baek Joon-ho, CEO of FuriosaAI, said, "EXAONE is emerging as Korea's national foundation model. Through continued cooperation, we aim to contribute to building high-performance national AI infrastructure. This collaboration will serve as an important model for global companies that want to actively design and operate their AI infrastructure rather than merely deploy it."
