Trillion Labs' Trida-7B model. /Courtesy of Trillion Labs

AI startup Trillion Labs announced on the 29th that it has developed "Trida-7B," a large language model (LLM) built on a diffusion-based transformer, with support from the National IT Industry Promotion Agency (NIPA).

Trillion Labs said it plans to compete in the government's additional selection round for the independent AI foundation model project. The model is based on a "diffusion" architecture independently implemented by the domestic startup: instead of producing text one token at a time, as in the conventional autoregressive method, it generates and refines entire sentences in parallel, improving inference speed and computational efficiency.
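Trillion Labs has not published implementation details, so the following is only a toy sketch of the contrast the article describes: autoregressive decoding spends one model call per token, while diffusion-style decoding starts from a fully masked sentence and unmasks many positions per step. All names, the vocabulary, and the random fill-in (standing in for real model predictions) are invented for illustration.

```python
import random

random.seed(0)

VOCAB = ["the", "model", "generates", "text", "fast"]  # toy vocabulary
MASK = "<mask>"

def autoregressive(length):
    # Conventional decoding: one token per step, left to right,
    # so a sentence of N tokens needs N sequential model calls.
    out = []
    for _ in range(length):
        out.append(random.choice(VOCAB))  # stand-in for a model forward pass
    return out

def diffusion(length, steps=3):
    # Diffusion-style decoding: begin with every position masked and
    # unmask a batch of positions on each step, refining the whole
    # sequence in parallel; N tokens need only `steps` model calls.
    seq = [MASK] * length
    for _ in range(steps):
        masked = [i for i, tok in enumerate(seq) if tok == MASK]
        # unmask roughly half the remaining positions at once
        for i in masked[: max(1, len(masked) // 2 + 1)]:
            seq[i] = random.choice(VOCAB)  # stand-in for a denoising prediction
    # safety pass: fill any positions still masked
    return [random.choice(VOCAB) if tok == MASK else tok for tok in seq]

print(autoregressive(8))  # 8 sequential "calls"
print(diffusion(8))       # 3 parallel "calls"
```

In a real diffusion language model the fill-in step is a learned denoiser that predicts all masked tokens jointly, which is where the speed advantage over token-by-token generation comes from.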

This next-generation approach, also adopted by global Big Tech companies such as Google with Gemini, posted high scores on Korean-language performance benchmarks: top scores of 61.26 in "math," 53.42 in "Korean instruction following," and 46.35 in "Korean commonsense."
