Park Sang-hyeon, researcher at Theori Inc. /Courtesy of Theori Inc.

Theori Inc. said on the 16th that a research paper on "AI model merging," with one of its researchers as first author, has been accepted for presentation at ACM/SIGAPP SAC 2026, a prestigious international academic conference.

The conference evaluates both academic achievements and industrial contributions in computer science and AI. It is also one of the international conferences officially recognized by the Ministry of Education's BK21 (Brain Korea 21) program.

The paper, titled "FRAIN to Train," presents a high-speed, high-reliability methodology for distributed asynchronous federated learning; its first author is Park Sang-hyeon, a researcher on Theori Inc.'s large language model (LLM) security solutions team. The study empirically demonstrated, and mathematically proved, that AI model merging can operate stably even in environments where untrusted participants, extreme data imbalance, and severe network delays coexist.

The study carries important implications for improving the reliability of AI models in distributed, asynchronous environments, and its findings were applied to advance aprism, Theori Inc.'s LLM security solution. Specifically, the checkpoint merging method used in aprism's Identifier model was improved to deliver more consistent performance and more stable decisions, even in incomplete and adversarial environments.
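For readers unfamiliar with checkpoint merging, the basic idea can be sketched as a (possibly weighted) average of model parameters from several training runs or participants. The sketch below is a generic FedAvg-style illustration only; it is not the FRAIN method from the paper, and the parameter names and weights are hypothetical.

```python
# Illustrative sketch of generic checkpoint merging (FedAvg-style averaging).
# NOT the FRAIN algorithm described in the paper; names and weights are
# hypothetical, chosen only to show the general idea.

def merge_checkpoints(checkpoints, weights=None):
    """Average parameter dicts, optionally weighted (e.g. by data size)."""
    if weights is None:
        # Default: uniform average over all checkpoints.
        weights = [1.0 / len(checkpoints)] * len(checkpoints)
    merged = {}
    for name in checkpoints[0]:
        merged[name] = sum(w * ckpt[name] for w, ckpt in zip(weights, checkpoints))
    return merged

# Example: two "participants" with scalar parameters for simplicity.
a = {"layer.w": 1.0, "layer.b": 0.0}
b = {"layer.w": 3.0, "layer.b": 2.0}
print(merge_checkpoints([a, b]))  # {'layer.w': 2.0, 'layer.b': 1.0}
```

Methods like the one studied in the paper go beyond this naive average, which is fragile when some participants are untrusted or their updates arrive late or are based on highly imbalanced data.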

The study is scheduled to be presented at SAC 2026, to be held in Thessaloniki, Greece, from Mar. 23 to 27, 2026.

Park Sang-hyeon, a researcher at Theori Inc., said, "This study addresses issues that must be considered when applying AI models to real-world environments," and added, "We hope the results will serve as a reference for future domestic LLM security research and practical discussions."

※ This article has been translated by AI.