As artificial intelligence (AI) technology advances, controversies over AI-manipulated content targeting public figures in politics and entertainment keep recurring. AI-synthesized images and videos, not actual remarks or actions, spread quickly online and damage the image and credibility of the people depicted. In response, the industry is stepping up development of AI technology that catches "fake AI."
According to industry sources on the 26th, AI-manipulated content exploiting public figures has been spreading recently, fueling social controversy. In the entertainment industry, a dispute over AI manipulation arose around allegations that actor Kim Soo-hyun dated the late Kim Sae-ron while she was a minor, and public opinion turned against actor Lee Yi-kyung as fabricated evidence about his private life circulated. In politics, deepfake videos distorting certain politicians' remarks or actions were shared and exploited during election periods. Cases were also uncovered in which manipulated video and audio made sports stars such as soccer players Son Heung-min and Cristiano Ronaldo appear to endorse online gambling sites.
In response, the industry has begun in earnest to develop AI that identifies "fake AI," an approach in which AI in turn detects and filters false information and manipulated content generated by AI. As generative AI technology advances, the limits of relying on human sight and hearing alone have become clear, increasing the need for technology-based detection systems.
Hancomwith, an affiliate of Hancom Group, is developing a system that uses deep learning-based video analysis to determine whether footage is a deepfake. The company is currently taking part in an international joint study led by the Korean National Police Agency to build a "system for determining the authenticity of false and manipulated content," conducting the research with the University of Wuppertal in Germany. Running through 2027, the study responds to the rise in false and manipulated content driven by the spread of generative AI, and aims to build a highly reliable dataset and develop an integrated detection system.
DeepBrain AI last month expanded its deepfake detection coverage to generative AI-based content through a culture technology research and development project. The system can detect images and videos produced on the latest global video generation platforms such as Google Veo and OpenAI Sora. The function is offered as an API (application programming interface), allowing external corporations and institutions to plug verification into their services without building separate systems. The company also supplies its deepfake detection solution "AI Detector" to numerous public institutions and to the finance and education sectors. The technology determines whether content is a deepfake by analyzing pixel-level differences.
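For a sense of what such an API integration could look like in practice, the minimal sketch below uploads a video file to a detection endpoint and reads back a verdict. The URL, authentication scheme, and response fields here are illustrative assumptions, not DeepBrain AI's published interface.

```python
import requests

# Hypothetical endpoint and credentials; the vendor's actual interface
# is not described in detail in public materials, so these are placeholders.
API_URL = "https://api.example.com/v1/deepfake/detect"
API_KEY = "your-api-key"

def check_video(path: str) -> dict:
    """Upload a video file and return the detection verdict as a dict."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": f},
        )
    resp.raise_for_status()
    # Assumed response shape, e.g. {"is_deepfake": true, "confidence": 0.97}
    return resp.json()

if __name__ == "__main__":
    result = check_video("sample_clip.mp4")
    print(f"deepfake: {result['is_deepfake']} (confidence {result['confidence']:.2f})")
```

A setup along these lines is why an API model appeals to institutions: verification becomes a single network call rather than an in-house detection pipeline.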
NuriLab registered a domestic patent last month for deepfake detection technology that combines AI algorithms with metadata analysis. The patent, titled "method for detecting deepfake outputs and device performing the method," improves detection accuracy by layering metadata analysis on top of existing AI algorithms. According to NuriLab, conventional AI-based detection relies mainly on pixel analysis of images or videos, whereas metadata analysis can surface abnormal patterns in deepfake outputs, such as records of generation and modification, storage formats, and compression information.
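The article does not reproduce the patent's details, but the general idea of checking file metadata for generator fingerprints can be sketched as follows. The suspicious-keyword list and the anomaly rules are illustrative assumptions for this sketch, not NuriLab's actual method.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Illustrative strings that, when found in the "Software" tag, hint at
# generative tooling; a real detector would use a curated fingerprint database.
SUSPICIOUS_SOFTWARE = ("stable diffusion", "midjourney", "dall-e", "deepfake")

def metadata_flags(path: str) -> list[str]:
    """Return a list of metadata anomalies found in an image file."""
    flags = []
    exif = Image.open(path).getexif()
    if not exif:
        # Camera photos almost always carry EXIF; its total absence is itself a signal.
        flags.append("no EXIF metadata")
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, str(tag_id))
        if tag == "Software" and any(k in str(value).lower() for k in SUSPICIOUS_SOFTWARE):
            flags.append(f"generator fingerprint in Software tag: {value}")
    return flags

print(metadata_flags("suspect.jpg"))
```

Checks like these are cheap and complement pixel-level models: even when synthetic pixels fool a classifier, generation records, storage formats, or compression traces in the metadata may still give the output away.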
Development of such detection solutions is expected to accelerate. With the spread of generative AI, awareness of the social harm caused by false and manipulated information is growing. A security industry official said, "As generative AI technology spreads rapidly, verifying the authenticity of content is emerging as an important task across society," adding, "Because content abused through AI is difficult for people to distinguish with the naked eye, AI technology to judge it is likely to be introduced as related data accumulates."