"Is this photo real, or was it made by artificial intelligence (AI)?"

On the 8th, at the National Forensic Service headquarters in Wonju, Researcher Im Seong-ho held out a photo of a black-haired woman wearing heavy makeup and posed that question. At a glance, it looked like a real person's photo that had been retouched in Photoshop or another editing program. It was not easy to conclude that the image had been made by AI.

Researcher Lim Jeong-ho of the National Forensic Service demonstrates an AI deepfake analysis model at the Wonju headquarters on the 7th. /Courtesy of Yoon Hee-hoon

Researcher Im uploaded the photo file to the National Forensic Service's self-developed "AI deepfake analysis model." About three seconds later, the result appeared on the screen: the image was judged to have been generated by AI. "Deepfake" is a portmanteau of "deep learning" and "fake," and refers to AI-generated photos and videos in which people's faces are synthesized.

Researcher Im said, "Not only photos but also videos and audio files can be quickly checked to determine whether they are AI-made deepfakes," adding, "The accuracy is about 98%."

The National Forensic Service's "AI deepfake analysis model" was trained on the features of more than 1 million AI-generated videos and images. This allows it to identify the pixel-level characteristics of AI-made images, the agency said.

Researcher Im said, "Through Deep Learning, we analyze the face in the image and find subtle features that are hard for people to perceive," but added, "It is difficult to logically explain exactly what algorithm is used to judge whether AI created it." He added, "We currently analyze mainly faces, but we plan to further advance our analysis by learning backgrounds and surrounding objects as well."

AI video generation technology is advancing so fast that ordinary people find it hard to tell with the naked eye what is real. Researcher Im said, "It will reach a level where it is almost impossible to distinguish by eye," adding, "Because it can be abused for fraud or for spreading false information, the importance of analysis tools that can screen such content is also growing." He said, "As AI technology advances, the analysis model must keep learning and being upgraded to match."

This analysis model was also used during last year's presidential election, because false videos defaming candidates were spreading through online platforms such as YouTube. These included videos in which statements the candidates never actually made were spoken in their voices.

Researcher Im said, "At the request of investigative agencies, we conducted 13 appraisals of deepfake election videos," adding, "By sharing the analysis model with the National Election Commission, we helped detect and delete more than 10,000 illegal deepfake election items distributed online." Investigative agencies expect the analysis model to play an important role in this year's local elections as well.

On July 30 last year, an official from the Ministry of the Interior and Safety introduces the AI deepfake analysis model developed by the National Forensic Service. /Courtesy of Ministry of the Interior and Safety

More recently, the model's ability to determine whether an audio recording has been re-edited has also drawn attention. Researcher Im said, "Lately there have been many 'replay attack' attempts," and showed an example. A replay attack refers to a method in which a script is pre-recorded on a mobile phone and then played back sequentially and re-recorded through a microphone.

Researcher Im said, "Voices also have waveforms that change over time frame by frame, but recordings subjected to replay attacks do not have consistent waveforms," adding, "By learning from countless audio files, we have identified the irregular waveforms of AI-generated audio files and human-reedited audio recordings."

We asked, "Couldn't the deepfake analysis model be opened to the private sector? If collective intelligence used the analysis tool to train big data, couldn't we stop the spread of already distributed deepfake videos?"

Researcher Im said, "There have actually been many requests to open the analysis model," adding, "If we open it, there are many concerns." Im said, "The moment we open the model, people who want to abuse it will try to use it as an AI verification tool," adding, "They could figure out the model's decision algorithm and produce deepfake videos in ways the analysis model cannot filter."
