Google logo /Courtesy of Yonhap News

Google said it preemptively blocked attempts by hackers who were preparing large-scale cyberattacks using artificial intelligence (AI).

Google Threat Intelligence Group (GTIG) said in its AI risk tracking report published on the 11th (local time), "We discovered zero-day exploit code that appears to have been developed with the help of AI by a hacker group," adding, "This is the first known case of a threat actor using AI to successfully develop a zero-day." A zero-day attack is a cyberattack that exploits an unknown vulnerability and occurs before a security patch is applied.

GTIG said, "The organization planned to use it in a large-scale attack, but our preemptive response may have prevented its actual use." The hackers behind the attempt were found to have tried to bypass two-factor authentication by exploiting software vulnerabilities. However, Google believes its own AI model, Gemini, was not used in this attack.

As AI-powered vulnerability detection has advanced in recent years, concerns have been raised that it could be misused for zero-day attacks, and such an attempt has now materialized.

Google also assessed that hacking groups backed by North Korea and China are already actively using AI. GTIG explained, "North Korean threat group APT45 verified thousands of exploit codes using AI and built up its arsenal of attack assets on a large scale."

Attackers were also found to be using AI agent tools such as OpenClo. For example, a China-linked threat actor used an agent tool to conduct autonomous, persistent reconnaissance against a Japanese technology corporation in search of vulnerabilities.

Several threat groups, including the China-linked cyberespionage outfit UNC5673, were found to be attempting to gain privileged access to the latest large language models (LLMs) using sophisticated technical methods. GTIG said, "They use specialized middleware (identity-laundering tools) and automated account registration programs to access high-performance AI model services anonymously," adding, "By doing so, they bypass model usage limits, cover operating expenses, and misuse large-scale AI services for offensive activity."

John Hultquist, GTIG's principal analyst, stressed, "The vulnerability war driven by AI has already begun," adding, "Because threat actors are using AI in many ways to increase the speed, scale, and sophistication of attacks, we must never underestimate the AI threat, not only from state-backed actors but also from cybercrime groups."

※ This article has been translated by AI.