Every time you log into a banking app, send an encrypted message, or complete a credit card transaction online, you are trusting cryptographic libraries to keep that data locked. In April 2026, Anthropic announced that its newest AI model, Claude Mythos, had found what the company described as thousands of previously unknown security vulnerabilities in production software, including flaws in the very cryptographic code that underpins those everyday protections.
The claim has divided the cybersecurity world. Some researchers see a generational leap in defensive capability. Others see a tool that could hand attackers a roadmap to the most sensitive systems on the internet. What follows is what we know, what we do not, and what organizations should be doing right now.
What has been confirmed
Three core developments are supported by Anthropic’s public statements and corroborating coverage from multiple outlets.
First, Mythos does not just flag potential weaknesses. According to The Hacker News, the model generates working exploit code that demonstrates exactly how an attacker could take advantage of each flaw. That puts it in a fundamentally different category from conventional static-analysis scanners, which typically identify suspicious code patterns without proving they are exploitable. Anthropic has repeated the “thousands of zero-days” figure, though no independent security firm has published a verified count as of May 2026.
Second, Anthropic created a program called Project Glasswing to channel Mythos toward defensive work. Rather than releasing the model as a standalone product, the company says it is partnering with major cloud providers and security firms to audit and harden software that runs critical infrastructure. No named partner has independently confirmed its participation on the record; the partnership claims originate from Anthropic’s own announcements.
Third, the vulnerabilities Mythos flagged include flaws in cryptographic protocols and libraries. These are the building blocks of encryption: the code behind HTTPS connections, encrypted databases, digital signatures, and the secure channels that protect everything from hospital patient records to classified government communications. Anthropic has confirmed that Mythos can both find and exploit bugs in these systems. A single flaw in a widely adopted crypto library could cascade across millions of systems at once, which makes the window between discovery and patch deployment extraordinarily high-stakes.
What remains uncertain
The sharpest unresolved question is who actually gets to use Claude Mythos and under what rules. Forbes reported that Anthropic will not let anyone use the model, framing it as too dangerous for open deployment. Days later, Fortune described a different picture: the company was granting select firms early access specifically to strengthen cybersecurity defenses. Neither report has been retracted.
The most plausible reading is that Anthropic is blocking general public access while allowing controlled, supervised use by vetted partners through Project Glasswing. But the company has not published a detailed access framework, licensing structure, or list of approved organizations. Basic questions remain open: does “early access” mean full model weights, API-only interaction, or a managed service where Anthropic’s own engineers run scans and share results?
A second gap involves the specific cryptographic flaws. No Common Vulnerabilities and Exposures (CVE) listings have been published to confirm which libraries or protocols are affected, how severe the flaws are, or which software packages need patching. The cybersecurity community depends on coordinated disclosure, where the discoverer shares technical details with the affected vendor before going public, to prevent exploitation during the patch window. Without published CVEs, outside researchers cannot verify Anthropic’s claims or begin independent remediation. References to financial transactions, government communications, and personal data reflect the potential impact of flaws in widely used crypto libraries rather than confirmed exposure.
A third open question is whether Project Glasswing’s defensive framing will hold over time. The same AI that writes exploit code for defenders could, in principle, be replicated by state-sponsored hacking groups or criminal organizations training their own vulnerability-hunting models. Anthropic has acknowledged this tension publicly, but no binding policy document or technical safeguard has been described that would prevent misuse if the model’s architecture or training approach were reverse-engineered.
Also notably absent: any public statement from CISA, CERT/CC, or equivalent international agencies about whether they are coordinating with Anthropic on disclosure. For a discovery of this reported scale, the silence from government cybersecurity bodies is itself worth watching. Typically, when a vulnerability of critical severity is identified in widely deployed software, agencies like CISA issue advisories and coordinate patch timelines with vendors. That process either has not started, is happening behind closed doors, or does not apply because the claims have not been independently validated.
How to weigh the evidence
The strongest evidence here comes from Anthropic’s own actions rather than from independent validation. The company built the model, named it, launched a branded initiative around it, and began distributing access to partners. Those are corporate decisions that can be confirmed through product pages and press coverage. But the headline claim, that Mythos found thousands of zero-day vulnerabilities, traces back to Anthropic’s own statements. No third-party security firm has published an independent count or confirmed the figure through its own testing.
That distinction matters. When a company announces that its own product discovered a large number of flaws, the incentive to overstate results is real. Anthropic has a commercial interest in positioning Mythos as uniquely capable, and “thousands of zero-days” is precisely the kind of headline-ready statistic that attracts enterprise customers to a paid security initiative. The number may well be accurate, but it sits in a different evidentiary category than a peer-reviewed audit conducted by an independent lab.
Coverage from outlets including NDTV confirms the broad contours: Mythos exists, it targets software security flaws, and Anthropic treats it as both a breakthrough and a risk. But none of the available reporting includes direct quotes from the hyperscalers or security firms allegedly participating in Project Glasswing.
Context helps calibrate expectations. AI-assisted vulnerability discovery is not brand new. In November 2024, Google’s Project Zero and DeepMind announced that an AI agent called Big Sleep had found a previously unknown, exploitable vulnerability in SQLite, a database engine embedded in billions of devices. DARPA’s AI Cyber Challenge has been pushing teams to build automated vulnerability discovery systems since 2023. What Mythos reportedly adds is scale: not one vulnerability but thousands, and not just detection but working exploit generation. If that claim holds up to independent scrutiny, it represents a significant escalation in what AI can do to software security, for better and worse.
The concern several outlets have labeled a “red flag” boils down to a straightforward problem. Traditional vulnerability research requires deep expertise and significant time. If an AI can compress that process from weeks to hours, it lowers the barrier for defenders and attackers alike. The defensive benefit is real, but the offensive risk scales in the same direction.
What security teams should do now
Organizations that rely on standard cryptographic libraries, whether OpenSSL, libsodium, BoringSSL, or platform-specific implementations in AWS and Azure, should not wait for published CVEs before they start preparing.
The practical first step is straightforward: audit which cryptographic libraries are in active use across your stack, including transitive dependencies that may pull in vulnerable code without your team realizing it. Confirm that patch management pipelines can respond quickly once CVEs are published. Teams that have not tested their ability to deploy emergency patches across production systems within 24 to 48 hours should treat this announcement as the trigger to run that exercise.
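For teams working in Python, the audit step above can be started with nothing more than the standard library. The sketch below enumerates installed packages and flags names that hint at cryptographic functionality; the hint list and the name-matching heuristic are illustrative assumptions, and a real audit should use a software bill of materials (SBOM) tool that also walks transitive dependencies across every ecosystem in the stack.

```python
# Sketch: flag installed Python packages whose names suggest crypto
# functionality. The CRYPTO_HINTS tuple is an illustrative assumption,
# not an exhaustive list of cryptographic packages.
from importlib import metadata

CRYPTO_HINTS = ("crypto", "ssl", "tls", "nacl", "sodium")

def looks_cryptographic(package_name: str) -> bool:
    """Heuristic: does the package name hint at crypto functionality?"""
    name = package_name.lower()
    return any(hint in name for hint in CRYPTO_HINTS)

def find_crypto_packages() -> list:
    """Return sorted (name, version) pairs for matching installed packages."""
    hits = []
    for dist in metadata.distributions():
        name = dist.metadata["Name"] or ""
        if looks_cryptographic(name):
            hits.append((name, dist.version))
    return sorted(hits)

if __name__ == "__main__":
    for name, version in find_crypto_packages():
        print(f"{name}=={version}")
```

The output is a starting inventory, not a verdict: a package can depend on vulnerable crypto code without carrying a telltale name, which is exactly why transitive dependencies deserve the same scrutiny.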
Beyond patching, this is a good moment to review cryptographic agility, the ability to swap out one encryption algorithm or library for another without rebuilding entire systems. Organizations that hardcoded a single library deep into their infrastructure years ago will face the longest remediation timelines if that library turns out to be affected.
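The agility idea can be made concrete with a small sketch. Below, all hashing is routed through one registry keyed by algorithm name, so retiring an algorithm means flipping a single setting rather than hunting down hardcoded calls. It uses only the standard library's hashlib; the names HASH_REGISTRY, ACTIVE_HASH, and digest() are illustrative, not a standard API.

```python
# Sketch of cryptographic agility: call sites never name an algorithm
# directly, so one configuration change swaps the hash across the system.
import hashlib

HASH_REGISTRY = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
}

ACTIVE_HASH = "sha256"  # the single switch to flip during an emergency swap

def digest(data: bytes, algorithm: str = None) -> str:
    """Hash data with the active (or explicitly requested) algorithm."""
    name = algorithm or ACTIVE_HASH
    return HASH_REGISTRY[name](data).hexdigest()
```

The same pattern applies to ciphers and signature schemes, where it matters more: the hard part is usually not the registry but making sure stored data records which algorithm produced it, so old records stay readable after a swap.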
The larger takeaway extends beyond any single model. AI-driven vulnerability discovery has moved from research demonstrations to claimed production-scale results. Whether Mythos lives up to every claim or not, the competitive pressure it puts on other frontier labs to build similar tools changes the calculus for every organization that stores sensitive data behind encryption. The question is no longer whether AI will find flaws in cryptographic systems. It is how fast defenders can close the gaps before the next model, from Anthropic or anyone else, finds them first.