Derek Manky, vice president of threat intelligence at Fortinet, delivers a keynote at Fortinet Accelerate 2026 at the Grand Intercontinental Seoul Parnas on the 28th. /Courtesy of Fortinet Korea
"With the emergence of AI-based vulnerability detection and attack tools like Anthropic's Mythos, the number of security vulnerabilities worldwide is expected to exceed 100,000 next year."

Derek Manky, vice president of threat intelligence at Fortinet, made the remarks on the 28th at Fortinet Accelerate 2026, held at the Grand Intercontinental Seoul Parnas in Samseong-dong, Seoul. He said, "Last year, 40,000 CVEs (Common Vulnerabilities and Exposures) were reported worldwide, the highest level on record," adding, "With the emergence of automated tools like Mythos for discovering vulnerabilities and designing attacks, the number of CVEs is expected to exceed 100,000 within a year."

Claude Mythos Preview, unveiled by AI company Anthropic earlier this month, is an AI model that excels at finding and exploiting software security vulnerabilities. Access is currently limited to a small number of enterprises, but Manky explained that if Mythos becomes widely available, the number of CVEs is expected to surge.

Manky assessed that the cybersecurity landscape is changing at an unprecedented pace as hackers weaponize AI. He said, "The global cybercrime economy is worth $11 trillion (about 16,200 trillion won), which would make it the third-largest economy by gross domestic product (GDP) after the United States and China," adding, "The scope of attackers' activities and the attack surface are literally exploding."

He cited the industrialization of cybercrime and the spread of attacks on AI systems as notable phenomena in the current cyber threat environment. Manky said, "On the dark web, various CaaS (cybercrime-as-a-service) offerings are available at relatively low prices, highlighting a trend of so-called 'commoditization of cybercrime.'"

Examples include WormGPT and FraudGPT, malicious services built on GPT-family language models. Manky said, "These services can be used for a subscription fee of $1,000 (about 1.47 million won) or less, lowering the barrier to entry for cybercrime."

Chatbot-style cybercrime consulting tools are also proliferating. Attackers can type questions and requests into a chat window, much as they would with ChatGPT or Gemini, to obtain the information they need. Manky said, "These large language models (LLMs) are trained on data that includes conversations among cybercriminals on the dark web," adding, "If an attacker asks, 'I'm trying to carry out a BEC (business email compromise) attack against the chief financial officer (CFO) of corporation A; what's the best way?' the model presents concrete methods, such as available crime services or attack tools, within one minute."

In particular, as AI technology is integrated into cyberattack services, costs are falling and attack cycles are getting shorter. Manky said, "Time to exploit (TTE), the window from vulnerability discovery to actual damage from an attack, has shrunk from five days two years ago to 24–48 hours today," adding, "This speed is continuing to accelerate; a year from now, it is expected to fall to within 24 hours, and eventually to minutes."

This means an environment is taking shape in which AI-based attack automation leads to attacks and damage as soon as vulnerabilities are found, creating the need for a security framework that can respond. He said, "In the past, a ransomware attacker would pursue a single target, but now they can execute 10 attacks simultaneously," adding, "That's because many tools are available at low cost, and alliances among major cyberattack groups have become more active."

Beyond existing software and IT infrastructure, AI systems themselves are emerging as a new attack surface. Manky said, "On top of the existing attack surface, the new attack surface of AI systems has appeared, making AI security a necessity, not a choice," adding, "The average damage from data leaks caused by AI-based attacks amounts to $4.9 million (about 7.2 billion won) per corporation." This reflects how cyberattacks are growing more sophisticated and increasingly tailored to specific targets.

Manky predicted that these attack techniques will become more advanced with the introduction of AI agents. He said, "Methods attackers use to target AI models are increasing, including model poisoning, model theft, and model inversion," adding, "In a year, agent-based worms—self-propagating AI-powered malware—will appear, and in about two years, more advanced forms called agentic swarms will emerge."

※ This article has been translated by AI.