Dario Amodei, CEO of Anthropic. / Courtesy of Yonhap News
"Mythos will change the landscape of cybersecurity."

Artificial intelligence (AI) company Anthropic has released its cutting-edge AI model "Mythos" only to a limited number of companies. Anthropic called Mythos "the most powerful AI model we have developed so far," and said it cannot release the model to the public because its outstanding ability to detect security vulnerabilities in software could be misused for hacking and similar attacks.

Anthropic has previously made its AI models, including "Claude," and a range of AI agent-based coding tools publicly available, so its unusually closed strategy has drawn a variety of interpretations in the market. These range from projections that Mythos will be a "watershed moment" that reshapes the cybersecurity industry's landscape, as Anthropic claims, to analysis that the move is meant to boost the company's valuation ahead of this year's initial public offering (IPO).

Anthropic announced on the 7th (local time) the launch of "Project Glasswing," under which it is piloting its top-tier AI model "Claude Mythos Preview" with 12 big tech companies and about 40 selected institutions. Participants in Project Glasswing, including Google, Apple, Amazon, Broadcom, Cisco, Microsoft, Nvidia, JPMorgan Chase and Palo Alto Networks, plan to use Mythos to identify security vulnerabilities in their software and fend off cyberattacks. Anthropic said it will share the results across the industry.

Mythos is a general-purpose model that surpasses the performance of Anthropic's previous top-tier model, "Claude Opus," and is considered especially strong at detecting software defects (bugs). Anthropic explained that "Mythos has already reached a level where its ability to find and exploit security vulnerabilities exceeds that of most people except top experts," and said it decided to restrict access to select companies so that hackers and criminal groups cannot misuse it.

In the AI era, attackers and defenders wage an arms race with the same AI tools, and Anthropic says it will provide Mythos to big tech and cybersecurity companies first so that defenders can gain the upper hand. Even now, hackers are weaponizing AI to automate cyberattacks and scale them to unprecedented levels. Hacking that once took months has recently been compressed into minutes or even seconds. Anthropic argued that cybersecurity infrastructure cannot keep up with the pace of AI development, and that projects like Project Glasswing should give major companies time to craft defense strategies.

According to Anthropic, Mythos used advanced reasoning to find thousands of vulnerabilities in recent weeks. A representative case was a bug in the OpenBSD operating system that had gone undetected for 27 years. It also found a vulnerability that had lurked for 16 years in widely used video software, a flaw missed despite more than 5 million runs of automated testing tools.

Security industry sources said Mythos's differentiator is not vulnerability detection itself but its ability to chain vulnerabilities together. According to materials released by Anthropic, Mythos did more than simply find bugs: it independently created powerful exploit code by chaining four vulnerabilities to penetrate complex systems. Tasks that once required advanced hacking skills and specialized personnel can now be carried out easily by non-experts with AI.

Anthropic said its previous top-tier model, "Opus 4.6," excels at identifying and fixing vulnerabilities but is weak at exploiting them. In internal tests, Opus 4.6 succeeded in only two of hundreds of autonomous attempts to develop exploits, a success rate close to 0%. Mythos, by contrast, generated 181 exploits in the same evaluation, 29 of them powerful enough to seize control of the target system.

As concerns spread that AI models like Mythos could replace existing cybersecurity solutions, the shares of major security companies fell across the board last week. In U.S. trading on the 9th (local time), Fortinet fell 3.4% and Zscaler plunged 11%. Project Glasswing participants Palo Alto Networks and CrowdStrike also fell 3.9% and 7.5%, respectively.

However, some experts say that because the information Anthropic released is limited, the company's claim that "its performance is so strong that releasing it to the public would be dangerous" should not be taken at face value. Heidy Khlaaf, senior AI scientist at the AI Now Institute at New York University, warned on the social media platform X that "the materials Anthropic released did not provide sufficient key verification information, such as false positive rates," adding, "Without additional information, we should not simply take Anthropic's claims on trust."

Some observers say Anthropic, whose business focuses on corporate clients, chose a closed strategy to strengthen contracts with large companies while preventing competitors from using the "distillation" technique to build similar models in a short period.

Distillation is a method of building a model with similar performance by using another AI model's answers as training data. Anthropic has argued that it needs to block competitors' distillation attempts, saying Chinese AI companies have been illicitly extracting its models' outputs through distillation.
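The mechanics can be sketched in a few lines of Python. The toy "teacher" and lookup-table "student" below are purely illustrative stand-ins: in real distillation, the teacher is a proprietary model queried through its API, and the student is a smaller neural network fine-tuned on millions of harvested question-answer pairs.

```python
# Toy sketch of model distillation: the "student" is built entirely from
# the teacher's outputs, without access to the teacher's weights or
# original training data. All names here are illustrative.

def teacher(prompt: str) -> str:
    # Stand-in for a proprietary model being queried via its API.
    answers = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
        "largest planet?": "Jupiter",
    }
    return answers.get(prompt, "unknown")

# Step 1: harvest (prompt, answer) pairs by querying the teacher.
prompts = ["capital of France?", "2 + 2?", "largest planet?"]
distilled_data = [(p, teacher(p)) for p in prompts]

# Step 2: "train" a student on the harvested pairs. Here the student is
# just a lookup table; a real student would be a smaller model trained
# to reproduce the teacher's behavior at scale.
student = dict(distilled_data)

print(student["capital of France?"])  # mimics the teacher: Paris
```

This is why restricting API access matters to model developers: if a competitor cannot query the teacher at scale, step 1 of this pipeline is cut off.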

An AI industry source said, "This is an attempt to widen the gap with competitors and smaller labs by allowing access to the top-tier model only through corporate contracts," adding, "By the time Mythos is released to the public, the company will already have rolled out a more advanced, higher-end model exclusively for corporate customers."

Anthropic has grown rapidly by targeting the enterprise AI market. Its recent annual recurring revenue (ARR) surpassed $30 billion (about 44.4 trillion won), up roughly threefold from $9 billion at the end of last year. With corporate clients accounting for about 80% of total revenue, observers say the company will continue to strengthen its business for corporations to maintain a stable income stream.

Some also say Anthropic lacks the compute capacity to release a top-tier model like Mythos to the public. In a recent memo to investors, OpenAI said, "OpenAI has an edge over Anthropic in securing compute," and pointed out that Anthropic's recent decision not to release Mythos to the public and to grant access only to some large corporations was due to a shortage of compute resources.

※ This article has been translated by AI.