Advanced artificial intelligence systems are rapidly reshaping the cybersecurity industry, but experts remain sharply divided over whether the technology represents a manageable evolution in security research or the beginning of a large-scale vulnerability crisis.
The debate escalated after Anthropic introduced Claude Mythos Preview, an experimental version of its language model that the company says demonstrates unusually strong performance in identifying software vulnerabilities and handling advanced cybersecurity tasks. Concerned about the risks of releasing such capabilities broadly, Anthropic restricted access through a limited initiative known as Glasswing, allowing only a select group of organizations to test the system while the broader security community prepares for the implications.
Since the announcement, discussions across the cybersecurity sector have centered not only on the model’s technical abilities, but also on whether restricting access to it is realistic at all. Reports surfaced this week suggesting unauthorized individuals may already have accessed the Mythos preview, raising concerns that attempts to tightly control the technology may prove ineffective once similar capabilities become reproducible elsewhere.
The industry’s reaction has largely fallen into three competing schools of thought.
One group believes AI-driven vulnerability discovery could overwhelm existing security infrastructure. Supporters of this view warn that highly capable models may dramatically increase the speed at which attackers uncover exploitable weaknesses, potentially leading to widespread cyber incidents before defenders can respond effectively. Analysts aligned with this perspective argue that the cybersecurity ecosystem is already struggling to keep pace with current levels of vulnerability reporting.
A second group has taken a more operational approach, focusing on how organizations can defend themselves if AI-assisted exploit discovery becomes commonplace. This position has been reflected in work published through the Cloud Security Alliance, where hundreds of chief information security officers collaborated on guidance discussing defensive strategies. However, even within this camp, some security professionals have criticized Anthropic’s rollout process, arguing that patch management and vulnerability remediation are far more complex than the company appears to acknowledge.
A third camp remains skeptical of the broader panic surrounding Mythos. Researchers associated with AISLE argued that the model's capabilities are not unique, because similar vulnerability discovery results can already be reproduced with publicly accessible open-weight AI models. In one cited example, researchers reportedly recreated a FreeBSD exploit demonstrated during the Mythos announcement using multiple open models, including systems that can be run at minimal cost. The finding suggests that moderately skilled attackers may already have access to comparable capabilities, independent of Anthropic's platform.
This debate arrives as the cybersecurity industry is already experiencing a dramatic increase in vulnerability disclosures. The National Institute of Standards and Technology recently adjusted how it processes entries for the National Vulnerability Database after reporting a 263 percent increase in submissions between 2020 and 2025, including a sharp rise within the past year alone. The agency stated that it would prioritize only the most critical Common Vulnerabilities and Exposures entries for enrichment, highlighting how existing human review systems are struggling to scale alongside the growing volume of reported flaws.
Some experts believe artificial intelligence is already contributing to that acceleration, even before systems such as Mythos become widely available.
At the same time, defenders argue that existing security architectures still provide meaningful protection. Anthropic’s own findings reportedly acknowledged that while Mythos could identify vulnerabilities, it was unable to remotely exploit many of them because layered security controls prevented deeper compromise. This concept, commonly referred to as “defense in depth,” relies on multiple overlapping safeguards designed to stop attackers even if one weakness is discovered.
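The idea is easy to see in miniature. The Python sketch below is a toy illustration of defense in depth under assumed, hypothetical names and thresholds, not any vendor's implementation: a request must independently clear a network check, an authentication check, and an input-size check, so a flaw in any single layer does not by itself yield compromise.

```python
from dataclasses import dataclass

@dataclass
class Request:
    source_ip: str
    token: str
    payload: str

# Each constant is a hypothetical stand-in for a real control.
ALLOWED_PREFIXES = ("10.0.",)       # layer 1: network segmentation
VALID_TOKENS = {"example-token"}    # layer 2: authentication
MAX_PAYLOAD_BYTES = 4096            # layer 3: input validation

def network_check(req: Request) -> bool:
    return req.source_ip.startswith(ALLOWED_PREFIXES)

def auth_check(req: Request) -> bool:
    return req.token in VALID_TOKENS

def input_check(req: Request) -> bool:
    return len(req.payload.encode("utf-8")) <= MAX_PAYLOAD_BYTES

LAYERS = (network_check, auth_check, input_check)

def handle(req: Request) -> str:
    # Every layer is evaluated independently: an attacker who defeats
    # one check still has to get past all of the remaining ones.
    for layer in LAYERS:
        if not layer(req):
            return f"blocked by {layer.__name__}"
    return "accepted"

print(handle(Request("10.0.0.5", "example-token", "hi")))     # accepted
print(handle(Request("203.0.113.9", "example-token", "hi")))  # blocked by network_check
```

In this framing, an AI system that discovers a flaw in one check has found a vulnerability but not necessarily a path to compromise, which is consistent with what Anthropic's findings reportedly described.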
Despite disagreements over the severity of the threat, there is broad consensus that AI-assisted vulnerability discovery will continue advancing. The larger disagreement centers on how the software industry should adapt.
Some researchers argue that attempting to restrict access to advanced models through programs like Glasswing may ultimately fail because comparable capabilities are increasingly emerging in open-source ecosystems. Others believe the long-term answer may resemble principles already established in modern cryptography.
The discussion frequently references the 19th-century cryptographer Auguste Kerckhoffs, who argued that a system should remain secure even if everything about its design is known to an attacker, with only the keys or credentials kept secret. Over time, researchers have adopted a similar philosophy in software security, where openly scrutinized systems often become more resilient because flaws are exposed and corrected in public.
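A minimal sketch, assuming the widely used open-source `cryptography` package is installed, shows the principle in code: the AES-GCM algorithm below is fully public and openly scrutinized, yet the message stays protected because security depends only on the randomly generated key.

```python
# Illustrative sketch of Kerckhoffs's principle using AES-GCM from the
# open-source `cryptography` package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the only secret in the system
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # unique per message; need not be secret

ciphertext = aesgcm.encrypt(nonce, b"patch notes", associated_data=None)

# Anyone may inspect the algorithm, the nonce, and the ciphertext;
# without the key, decryption (and undetected tampering) fails.
assert aesgcm.decrypt(nonce, ciphertext, None) == b"patch notes"
```

The same logic underpins the argument for openly audited software: public scrutiny of the mechanism strengthens it, so long as secrets are confined to keys.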
Supporters of this approach believe AI could eventually force the software industry toward more rigorously tested open-source infrastructure. Under such a future, software components would face continuous AI-driven scrutiny before gaining widespread trust. However, experts also caution that this transition would be difficult because many companies still depend on proprietary code to protect intellectual property and maintain competitive advantages.
Another concern is economic. Much of the modern internet runs on open-source software, yet relatively few organizations contribute financially to securing and auditing the projects they depend on. And although AI models may simplify vulnerability discovery, the computational resources required to run them remain expensive. Analysts warn that access to large-scale vulnerability analysis may increasingly depend on who can afford the computing power needed to operate advanced models.
Some researchers fear this imbalance could create repeating cycles of major cyberattacks followed by emergency patching efforts before the industry temporarily stabilizes again. Recent supply chain attacks affecting widely used software tools have reinforced concerns that large-scale exploitation campaigns may become more frequent as AI-assisted discovery improves.
The shift could also redefine the cybersecurity market itself. Companies specializing in vulnerability discovery may face mounting pressure as AI automates portions of their work. By contrast, vendors focused on remediation and layered defensive protections may see increased demand as organizations work to strengthen prevention and respond faster to emerging threats.
For users and organizations heavily dependent on open-source software, the transition period may prove particularly difficult. However, some analysts remain cautiously optimistic that continuous scrutiny from increasingly advanced AI systems could eventually produce stronger and more resilient software ecosystems over the long term.