"Is this photo real, or was it made by artificial intelligence (AI)?"
On the 8th, at the National Forensic Service headquarters in Wonju, researcher Im Seong-ho held out a photo of a black-haired woman wearing heavy makeup and asked. At a glance, it looked like a real person's photo retouched with Photoshop or another editing program; it was not easy to conclude that it was an AI-generated image.
Researcher Im uploaded the photo file to the National Forensic Service's in-house "AI deepfake analysis model." About three seconds later, the analysis result appeared on screen: the image was judged to be AI-generated. "Deepfake" is a portmanteau of "deep learning" and "fake," referring to photos and videos in which human faces are synthesized by artificial intelligence.
Im said, "Not only photos but also videos and audio files can be checked quickly to determine whether they are AI-made deepfakes," adding, "Accuracy is around 98%."
The National Forensic Service's "AI deepfake analysis model" learned features by analyzing more than 1 million AI-generated videos and images. Through this, it can identify pixel-level characteristics of AI-made images, the agency said.
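The article does not disclose how the model's internals work, so as a purely hypothetical illustration of what "pixel-level characteristics" can mean, the sketch below scores an image by its high-frequency residual. The function names, the Laplacian statistic, and the threshold are all assumptions for illustration; the real system is a trained classifier, not a hand-set rule.

```python
# Hypothetical sketch: pixel-level artifact scoring for deepfake detection.
# The NFS model's internals are not public; this only illustrates the general
# idea of measuring local pixel statistics that generators can leave behind.

def highfreq_residual_score(pixels):
    """Mean absolute Laplacian response over a grayscale image (list of rows).

    Camera sensor noise yields a fairly uniform high-frequency residual;
    upsampling artifacts from generative models often shift this statistic.
    """
    h, w = len(pixels), len(pixels[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (pixels[y - 1][x] + pixels[y + 1][x] +
                   pixels[y][x - 1] + pixels[y][x + 1] - 4 * pixels[y][x])
            total += abs(lap)
            count += 1
    return total / count

def looks_generated(pixels, threshold=2.0):
    # Threshold is illustrative only; a real detector learns its decision
    # boundary from training data rather than using a fixed cutoff.
    return highfreq_residual_score(pixels) < threshold
```

An unnaturally smooth patch (near-zero residual) would be flagged, while a patch with normal sensor-noise texture would not; a production system would compute many such statistics over millions of labeled examples, as the article describes.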
Researcher Im Seong-ho received a prime minister's commendation at the "11th Korea Public Officials Award" on the 8th in recognition of contributions to developing the AI deepfake analysis model.
Im said, "By analyzing faces in images through deep learning, it detects minute features that are hard for people to perceive," but added, "It is difficult to explain logically exactly which algorithm determines whether AI made an image." He added, "Currently we focus on faces, but going forward we plan to train on backgrounds and surrounding objects to advance the analysis capability further."
AI video generation technology is advancing so fast that the general public can hardly tell with the naked eye whether a video is real. Im said, "It will reach a level where distinguishing by eye is almost impossible," adding, "Given the high likelihood of abuse for fraud or spreading false information, the importance of analysis tools that can detect it is also growing. As AI technology advances, the analysis model must keep learning and being upgraded accordingly."
The analysis model was also used during last year's presidential election, when false videos disparaging candidates spread through online platforms such as YouTube. They included videos in which statements never actually made were spoken in a candidate's voice.
Im said, "At the request of investigative agencies, we conducted 13 appraisals of deepfake election videos," adding, "By sharing the analysis model with the National Election Commission, we helped detect and delete more than 10,000 illegal deepfake election materials distributed online." Investigative agencies expect the analysis model to play an important role in this year's local elections as well.
More recently, the model's ability to determine whether an audio recording has been re-edited is also drawing attention. Im said, "Lately there has been a rise in replay attacks," and showed an example. A replay attack is a technique in which a recording is made on a mobile phone according to a prearranged script and then played back and re-recorded through a microphone.
Im said, "A voice waveform changes continuously from frame to frame, but a recording produced by a replay attack shows inconsistencies in its waveform," adding, "By training on vast numbers of audio files, the model has learned the irregular waveforms of AI-generated audio and of human-re-edited recordings."
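The frame-by-frame idea Im describes can be sketched in miniature. The snippet below is a toy illustration, not the NFS method: it computes short-time RMS energy per frame and reports the largest jump between adjacent frames, on the assumption (stated here, not in the article) that a splice point between replayed clips tends to produce an abrupt energy discontinuity. All names and the frame length are hypothetical.

```python
# Hypothetical sketch of frame-wise waveform consistency checking, in the
# spirit of the replay-attack detection described above. The actual features
# used by the NFS model are not public.

import math

def frame_energies(samples, frame_len=160):
    """Short-time RMS energy for consecutive frames of a mono signal."""
    energies = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energies.append(math.sqrt(sum(s * s for s in frame) / frame_len))
    return energies

def max_boundary_jump(samples, frame_len=160):
    """Largest energy jump between adjacent frames.

    Natural speech energy varies relatively smoothly; a splice where one
    replayed clip ends and another begins tends to produce an abrupt jump.
    """
    e = frame_energies(samples, frame_len)
    return max(abs(b - a) for a, b in zip(e, e[1:]))
```

A continuous tone yields a near-zero maximum jump, while a signal stitched from a quiet clip and a loud clip shows a large one; a real detector would learn such irregularities from "countless audio files," as the article puts it, rather than thresholding a single statistic.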
We asked, "Could the deepfake analysis model be opened to the private sector? If the tool were used through collective intelligence, could it be trained on big data and stop the spread of deepfake videos already in circulation?"
Im said, "There have in fact been many requests to open the analysis model, but opening it raises many concerns." He explained, "The moment the analysis model is opened, those seeking to abuse it will use it as a verification tool for their AI fakes," adding, "They could work out the model's decision algorithm and produce deepfake videos the analysis model fails to catch."