Amid growing concern over cyberattacks abusing artificial intelligence (AI) in the wake of Anthropic's release of its AI model "Claude Mythos," a projection has emerged that such AI-driven hacking will become routine within months.
Lee Klarich, chief technology officer (CTO) of the cybersecurity company Palo Alto Networks, said in a report published on the 13th (local time) that "we estimate only three to five months remain before AI-driven vulnerability exploitation becomes the new normal."
According to the report, tests of the latest AI models—including Anthropic's "Claude Mythos" and "Claude Opus 4.7," and OpenAI's "GPT-5.5-Cyber"—showed that their ability to identify security vulnerabilities and turn them into exploits is far more powerful than expected.
Klarich said, "Even just a few weeks ago, I wondered whether we were overestimating the models' capabilities," adding, "After extensive testing, we can say clearly that we are not."
Palo Alto Networks said that when it tested the AI models' vulnerability detection performance across about 130 of its own products, it found 26 vulnerabilities this month alone, more than five times its usual monthly average of about five. It added that none of the newly identified vulnerabilities has yet been found to be exploited in real-world attacks.
Palo Alto urged corporations to act before attackers widely adopt these techniques, recommending proactive identification of AI-enabled vulnerabilities, reduction of attack paths, establishment of defense frameworks, and real-time security operations.
Anthropic unveiled Claude Mythos last month with expert-level software (SW) vulnerability detection capabilities but decided, citing concerns over potential abuse, to provide it first only to major corporations and institutions. Since the model's release, governments including the United States, financial institutions, and IT corporations have been holding emergency meetings and weighing responses as security concerns intensify.