A sample fake military ID produced with ChatGPT by Kimsuky, a hacker group affiliated with North Korea's Reconnaissance General Bureau. /Courtesy of Genians

North Korean hackers are using generative artificial intelligence (AI), including OpenAI's ChatGPT and Anthropic's Claude, to advance their hacking techniques. Recently, signs of cyberattacks using AI-generated deepfake photos were detected in Korea for the first time.

Kimsuky, known as a hacker group under North Korea's Reconnaissance General Bureau, attempted spear phishing—an attack targeting specific victims—against military-related agencies in July using a fake military employee ID created with ChatGPT, security company Genians said on the 15th. Kimsuky sent emails disguised as requests to review an ID card draft, and the photo in the draft was a deepfake image synthesized by AI. A compressed file titled "ID card draft" attached to the email contained malware capable of stealing data from the recipient's device.

Because a military employee ID is a public identification card strictly protected by law, producing a copy in the same or similar form as the real one is illegal. Accordingly, when asked to create an ID copy, ChatGPT refuses, answering that it is "not possible." However, if the prompt or "AI persona" role settings are adjusted to coax a response out of the model, generating a forged ID becomes possible. Genians estimated that the attackers likely created the deepfake military ID by requesting "a virtual design for a lawful draft or sample purpose" rather than asking outright to duplicate a military employee ID.

Genians said, "Particular caution is needed because producing counterfeit IDs with generative AI is not technically difficult," and warned that as deepfake image creation becomes this easy, "more sophisticated attacks become possible through topics or decoys related to the target's duties." It emphasized that "organizations need advance preparation and continuous security checks across hiring, work, and operations, taking into account the potential abuse of AI."

Anthropic, an AI startup regarded as a challenger to OpenAI, claimed in a "threat intelligence report" published last month that North Korean hackers used its AI model Claude to disguise themselves as remote workers and get hired by U.S. Fortune 500 technology corporations, pocketing high salaries and funneling the money into weapons development. They created sophisticated fake identities with AI, completed technical and coding assessments in the hiring process, and, once hired, relied entirely on AI for their actual work. Anthropic said, "North Korean hackers have carried out such employment scams in the past, but until now they had to undergo years of specialized training, so the regime's capacity to train personnel acted as a major bottleneck; AI has removed these constraints," adding, "Now even those who cannot write basic code or communicate in English can use AI to pass technical interviews and continue their work."

Employment-scam cyberattacks by North Korea-linked hacker groups are becoming increasingly rampant. According to an analysis by cloud security company CrowdStrike, the North Korean hacker group "Famous Chollima" infiltrated more than 320 corporations last year alone by posing as software developers and getting hired by large corporations in North America, Western Europe, and East Asia—a figure up 220% from the previous year. The report said Chollima continued to expand insider threats by distributing malware within the corporations where its operatives had gained employment. In addition, U.S. security outlet The Record raised suspicions that a China-backed hacker group or Kimsuky hacked domestic mobile carriers including KT and LG Uplus, prompting the Personal Information Protection Commission to launch an investigation.

Beyond corporations, attacks targeting developer platforms are also continuing. According to Talon, the threat intelligence center of security company S2W, Kimsuky has recently been abusing the developer platform GitHub, distributing malware through repositories hosted there.

Major North Korea-backed hacker groups such as Kimsuky, Famous Chollima, and Lazarus have carried out cyberattacks against South Korean government ministries including the Ministry of Unification and the Ministry of Foreign Affairs, corporations, state-run energy corporations, and media outlets, and in recent years have expanded their operations to global markets such as the United States and Europe. Their primary objectives are known to be information theft and earning foreign currency. They have also targeted major cryptocurrency exchanges to raise funds needed for North Korea's nuclear weapons development.

Security companies and major foreign media have noted that with the advent of AI, the number of attacks by North Korean hacker groups is increasing and their hacking methods are becoming more sophisticated and intelligent. In response, Korea, the United States, and Japan issued a "trilateral statement on North Korean IT personnel" late last month. In the statement, the three countries expressed concern over the malicious activities of North Korean hacker groups, saying, "North Korea, in violation of Security Council resolutions, dispatches IT personnel around the world to generate revenue and uses it to fund the development of illegal weapons of mass destruction (WMD) and ballistic missiles."

They added, "North Korean IT personnel use AI technology to disguise their identities and locations with fake profiles and employ various methods such as collaborating with overseas facilitators," and "they are securing freelance employment contracts from an increasing number of clients in North America, Europe, and East Asia by capitalizing on demand for skilled IT capabilities, particularly in the blockchain industry."

※ This article has been translated by AI.