Han Bo-hyung, a professor in the Department of Electrical and Computer Engineering at Seoul National University, has been selected as the October recipient of the Scientist and Engineer of the Month award. /Courtesy of Ministry of Science and ICT

The Ministry of Science and ICT and the National Research Foundation of Korea (NRF) announced on the 1st that they had selected Han Bo-hyung, a professor in the Department of Electrical and Computer Engineering at Seoul National University, as the October recipient of the Scientist and Engineer of the Month award.

The award is given each month to one researcher who has contributed to the advancement of science and technology with original research achievements over the past three years; recipients receive a commendation from the Minister of Science and ICT and 10 million won in prize money.

To mark "2025 Artificial Intelligence (AI) Week" (Sep. 30–Oct. 2), the ministry and the NRF selected Han, an AI expert in computer vision, as this month's recipient. He was credited with raising the global standing of Korea's AI technology by developing a new AI inference algorithm that can generate videos of unlimited length without additional training.

Video generation is considered technically far more difficult than text or image generation. Conventional diffusion models produce a video by starting from random noise and gradually removing it to recover the frames, but because all frames must be denoised together, memory usage soars as the video grows longer.

To solve this problem, Han developed the "FIFO-Diffusion" algorithm. Its key idea is "diagonal denoising," which arranges frames in a queue like a conveyor belt: frames near the front are nearly finished while those at the back are still noisy, and the video is generated in order from the front. Because the queue stays the same size, memory usage remains constant no matter how long the video becomes. Two further refinements, "latent partitioning," which divides the video into smaller segments to improve stability, and "lookahead denoising," which uses the cleaner frames at the front to improve quality, preserve image quality and temporal consistency even in long videos.
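The queue-based idea can be illustrated with a minimal sketch. This is not the research team's released code; `denoise_step`, `NUM_STEPS`, `LATENT_SHAPE`, and `fifo_generate` are hypothetical placeholders, and the latent-partitioning and lookahead-denoising refinements are omitted. It only shows why memory stays constant: a fixed-length queue of frame latents at staggered noise levels is advanced one step at a time, the fully denoised front frame is emitted, and fresh noise is enqueued at the back.

```python
# Conceptual sketch of diagonal (queue-based) denoising, under the assumptions above.
from collections import deque

import torch

NUM_STEPS = 16               # diffusion steps == queue length (illustrative)
LATENT_SHAPE = (4, 32, 32)   # per-frame latent size (illustrative)


def denoise_step(latent: torch.Tensor, t: int) -> torch.Tensor:
    """Placeholder for one reverse-diffusion step of a pretrained video model."""
    return 0.95 * latent  # stand-in; a real model would predict and remove noise


def fifo_generate(num_frames: int):
    # The queue holds NUM_STEPS frame latents at staggered noise levels:
    # the front is nearly clean, the back is pure noise.
    queue = deque(torch.randn(LATENT_SHAPE) for _ in range(NUM_STEPS))
    for _ in range(num_frames):
        # Advance every queued frame by one denoising step (the "diagonal").
        for i in range(len(queue)):
            queue[i] = denoise_step(queue[i], t=i)
        # The front frame is now fully denoised: emit it and enqueue fresh noise.
        yield queue.popleft()
        queue.append(torch.randn(LATENT_SHAPE))
        # Memory stays at NUM_STEPS latents no matter how long the video runs.


if __name__ == "__main__":
    frames = list(fifo_generate(num_frames=8))
    print(len(frames), frames[0].shape)
```

Because only `NUM_STEPS` latents are ever held at once, the loop can in principle run indefinitely, which is the property the article attributes to the method.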

The results were presented last December at the Conference on Neural Information Processing Systems (NeurIPS), one of the world's leading AI conferences. The source code released by the research team has received more than 450 stars on GitHub and is being used by researchers and developers around the world.

Han said, "This study is significant in that it addressed the fixed-length and memory limitations of existing video generation models with a new inference algorithm," adding, "In the future, it will greatly reduce costs and time in content production settings such as film, games, and advertising."
