An artificial intelligence (AI) agent succeeded in hacking into Stanford University's network in 16 hours. The agent, named "Artemis," is an AI hacking agent built in-house by Stanford researchers. With skills rivaling, and in most cases surpassing, those of human white-hat hackers, Artemis found a vulnerability in the university's systems and penetrated them, shocking the security industry.

According to the security industry on the 17th, Artemis competed against 10 professional penetration testers in a red-team exercise to find security vulnerabilities in Stanford's network and outperformed nine of them, finishing second overall. The hacking took a total of 16 hours, and running Artemis cost $18 per hour (about 26,500 won).

Given that professional penetration testers earn an average annual salary of about $125,000 (about 180 million won) and bill about $60 per hour (about 88,000 won), the researchers emphasized that Artemis has clear advantages in both cost and performance. Even the advanced version of Artemis costs only $59 per hour to run, still cheaper than hiring a top human expert.

The researchers said Artemis could probe multiple vulnerabilities at once, making it easier to find weaknesses that humans had missed. For example, one legacy server blocked access from the latest browsers, and the human participants failed to find its vulnerability; Artemis bypassed the block, broke in, and uncovered it. However, Artemis was weak at tasks requiring clicks in a graphical interface and was more prone to mistaking harmless network messages for signs of a successful intrusion.
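The paper does not describe exactly how Artemis got past the browser block, but a common way for an automated agent to reach a server that rejects modern browsers is to send plain HTTP requests while presenting an older User-Agent string. The sketch below is a minimal, hypothetical illustration of that general technique; the URL and header value are assumptions for illustration, not details from the study.

```python
# Hypothetical sketch (not Artemis's actual code): a server that rejects
# modern browsers can often still be reached by a script that sends a
# plain HTTP request while identifying itself as an older browser.
import requests

LEGACY_SERVER = "http://legacy.example.internal/login"  # hypothetical target URL

# Present a User-Agent string the legacy server still accepts.
headers = {"User-Agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"}

response = requests.get(LEGACY_SERVER, headers=headers, timeout=10)
print(response.status_code)
print(response.text[:200])  # inspect the page that modern browsers never see
```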

The researchers recorded the results of the experiment in a paper released on the 10th. The study suggests that AI has emerged not merely as an assistant supporting security experts but as a rival with equal or superior skills, and it also implies that malicious hackers can now obtain powerful hacking tools easily and at low cost.

The rapid development and spread of AI technology are shaking up the cybersecurity industry. As AI evolves to the point of judging and directing attacks on its own, an era has opened in which AI carries out both hacking and defense automatically, without human intervention. Non-experts can now simply pay for hacking capability, lowering the barrier to entry for cybercrime, and AI hacking agents can run dozens of attacks simultaneously around the clock, increasing the frequency, speed, and intensity of hacks targeting corporations and governments.

In fact, a hacker group backed by the Chinese government abused Anthropic's AI tool Claude for cyberattacks in September, attempting intrusions into more than 30 corporations and institutions. In an analysis report on the case, Anthropic said, "In this case, AI automatically carried out 80%–90% of the attack, with little human involvement," and noted, "At its peak, the AI generated thousands of requests; it is virtually impossible for a human hacker to match such attack speed." Anthropic said it blocked the hackers' accounts and halted the assault as soon as it identified the attack pattern, but the hackers had already succeeded in four intrusions.

Kimsuky, a hacker group believed to be backed by North Korea, was detected the same month attempting spear phishing against military-related agencies using a fake military ID created with ChatGPT. The group also used generative AI to craft fake resumes and technical test answers for disguised job applications to overseas IT corporations. Google said last month that hackers linked to the Russian government used Gemini to generate customized malware in real time for attacks on Ukraine.

As cases of popular generative AI models being abused for cyberattacks mount, major AI corporations have said they will expand security investment, implementing safeguards and detection systems to prevent misuse. OpenAI released a statement this month saying, "We will design our cutting-edge AI models to be used for defense, not hacking, and focus on limiting how much they can amplify malicious capabilities."

Security experts say that with AI-driven cyberattacks expected to intensify, domestic corporations and institutions urgently need to shift to a more proactive, preemptive "offensive defense." U.S. security company Fortinet said, "Corporations must establish machine-speed defense systems that reduce threat detection and response times from 'hours' to 'minutes.'" In other words, as AI-based hacking speeds up, defense systems must keep pace.

Lee Dae-hyo, an executive at security company Genians, said, "Defense systems must evolve as quickly as threats do to counter the even more intense cyberattacks ahead," adding, "Corporations and institutions should build proactive defense strategies based on network access control and endpoint detection and response."

Another security company, SECUI, said, "The spread of generative AI has rapidly elevated existing threats such as deepfakes, customized malware, and sophisticated phishing," and predicted, "In 2026, as AI spreads across both offense and defense, cybersecurity will enter a full-fledged 'AI versus AI' competitive landscape."

※ This article has been translated by AI.