Domestic researchers have developed artificial intelligence (AI) video restoration technology that vividly restores scenes that appear blurry, such as a fogged window or a foggy road, a world first. The photo shows fog on the Central Highway./Courtesy of News1

A domestic research team developed artificial intelligence (AI) video restoration technology that vividly revives scenes that appear blurry, such as fogged windows or misty roads, marking a world first.

The Korea Advanced Institute of Science and Technology (KAIST) announced on the 31st that a joint research team led by Professor Jang Mu-seok of the Department of Bio and Brain Engineering and Professor Ye Jong-cheol of the Graduate School of AI developed "video diffusion-based image restoration technology." Diffusion models are a generative AI technique that starts from noise and gradually refines it into a clear image. The findings were published on the 13th in IEEE Transactions on Pattern Analysis and Machine Intelligence, a journal of the Institute of Electrical and Electronics Engineers (IEEE).
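
To make the "gradually clarifies" idea concrete, the toy sketch below walks a fully noised frame back to a clean one step by step. It is only an illustration of the generic reverse-diffusion update, not the team's model: the noise estimate here is an oracle computed from a known clean frame, whereas a real system learns it with a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.clip(rng.normal(0.5, 0.15, (32, 32)), 0, 1)   # stand-in "clean" frame
T = 100
betas = np.linspace(1e-4, 0.05, T)                        # assumed noise schedule
alpha_bars = np.cumprod(1.0 - betas)

# Forward process: jump straight to the fully noised sample x_T.
eps = rng.normal(size=clean.shape)
x = np.sqrt(alpha_bars[-1]) * clean + np.sqrt(1 - alpha_bars[-1]) * eps

# Reverse process: estimate the noise at each step and peel a little of it away.
for t in range(T - 1, -1, -1):
    # Oracle noise estimate from the known clean frame (a trained network does this in practice).
    eps_hat = (x - np.sqrt(alpha_bars[t]) * clean) / np.sqrt(1 - alpha_bars[t])
    x0_hat = (x - np.sqrt(1 - alpha_bars[t]) * eps_hat) / np.sqrt(alpha_bars[t])
    if t > 0:
        x = np.sqrt(alpha_bars[t - 1]) * x0_hat + np.sqrt(1 - alpha_bars[t - 1]) * eps_hat
    else:
        x = x0_hat

print("reconstruction error:", float(np.abs(x - clean).mean()))
```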

Until now, restoration technology has primarily targeted still images, so its performance drops sharply in environments that change over time, such as fog or smoke. It also worked reliably only under the conditions it was trained on, making it difficult to apply in varied real-world settings.

The new technology does not simply restore a blurry image as a single still frame. By analyzing how scenes continue over time, it can stably recover the original appearance even in moving environments, producing clear images when the view keeps changing, such as through smoke or beyond a swaying curtain.

Diagram of the optical measurement and restoration results. A moving target sits behind a dynamically changing scattering medium, and the imaging system measures only the brightness of the scattered light. The researchers restored clear images from these measurements alone./Courtesy of KAIST

The research team addressed the issue by finding common patterns across consecutive scenes, exploiting the fact that video frames follow a temporal sequence. The approach outperformed existing methods under a range of distance, thickness, and noise conditions.
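
As a rough intuition for why temporal continuity helps (this is a simple multi-frame baseline with made-up numbers, not the team's method), the sketch below corrupts the same static scene with a different realization of scattering noise in every frame; aggregating the frames recovers the scene far better than any single frame can.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0                                 # static target behind the medium

# Each frame sees the same scene through a different realization of the scattering noise.
frames = [scene + rng.normal(0, 0.8, scene.shape) for _ in range(30)]

single_err = np.abs(frames[0] - scene).mean()             # one frame alone
avg_err = np.abs(np.mean(frames, axis=0) - scene).mean()  # 30 frames aggregated
print(f"single-frame error: {single_err:.3f}, 30-frame average error: {avg_err:.3f}")
```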

In this study, the researchers became the first in the world to observe sperm movement behind a moving scattering medium, using restoration that accounts for the passage of time. They also confirmed that the method can be applied immediately, without additional training, to tasks such as fog removal, image-quality enhancement, and sharpening blurry footage.

Dynamic scattering medium used in the research. The scattering medium is mounted on a rotation stage to create a moving environment, and the measured point spread function (PSF) is confirmed to match the Gaussian PSF reproduced by the optical model./Courtesy of KAIST
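
The caption above refers to describing the scattering with a Gaussian PSF. The sketch below is a minimal, illustrative version of such an optical forward model, with an assumed PSF width and noise level: the recorded image is the target convolved with a Gaussian PSF plus detector noise.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
target = np.zeros((64, 64))
target[16:48, 30:34] = 1.0                                # simple bar-shaped target

psf_sigma = 4.0                                           # assumed Gaussian PSF width (pixels)
measured = gaussian_filter(target, psf_sigma)             # scattering modeled as Gaussian blur
measured += rng.normal(0, 0.005, measured.shape)          # detector noise

print("target contrast:", target.max() - target.min())
print("measured contrast:", round(float(measured.max() - measured.min()), 3))
```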

The potential applications are broad. The research team expects the technology to be useful in everyday life and across industries: medical diagnostics that look beneath the skin or through blood, rescue operations at fire scenes, safe driving on foggy roads, inspection of opaque materials, and securing visibility in murky water.

The research team overcame these limitations by proposing a new restoration method that combines optical models with video diffusion models. Researcher Kwon Tae-seong noted, "We confirmed that the diffusion model, which learns changes over time, is effective at restoring unseen data," and added, "We will extend our research to various optical problems that require tracking how light changes over time."
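
As a rough illustration of what combining an optical model with a learned prior can mean, the sketch below alternates a data-consistency step (agreement with the Gaussian-PSF forward model) with a prior step. It is a generic plug-and-play-style loop under assumed parameters, not the published algorithm; `prior_step` is a hypothetical placeholder standing in for the trained video diffusion model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_model(x, sigma=3.0):
    # Scattering approximated as convolution with a Gaussian PSF (assumed width).
    return gaussian_filter(x, sigma)

def prior_step(x):
    # Placeholder prior: mild smoothing; a trained diffusion denoiser would go here.
    return gaussian_filter(x, 0.5)

rng = np.random.default_rng(2)
target = np.zeros((64, 64))
target[20:44, 28:36] = 1.0
measurement = forward_model(target) + rng.normal(0, 0.01, target.shape)

x = np.zeros_like(measurement)
for _ in range(200):
    residual = forward_model(x) - measurement    # data-fidelity residual
    x = x - 0.5 * forward_model(residual)        # gradient step (Gaussian blur is its own adjoint)
    x = prior_step(x)                            # prior / regularization step

print("reconstruction error:", float(np.abs(x - target).mean()))
```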

(From left) Professor Ye Jong-cheol, doctoral student Kwon Tae-seong, doctoral student Song Guk-ho, Professor Jang Mu-seok./Courtesy of KAIST

References

IEEE Transactions on Pattern Analysis and Machine Intelligence (2025), DOI: www.doi.org/10.1109/TPAMI.2025.3598457
