In a development that sent shockwaves from Silicon Valley to Wall Street, Anthropic's latest frontier model — Claude Mythos Preview — has done something no security researcher, penetration tester, or bug-bounty hunter managed to do in nearly three decades: autonomously discover thousands of critical zero-day vulnerabilities buried inside the software that runs the modern internet.

The findings, disclosed by Anthropic on April 7, 2026 alongside the launch of Project Glasswing, are simultaneously a landmark achievement in AI capability and one of the most sobering threat assessments ever delivered to the technology industry.

The AI That Hunts Bugs Like a Nation-State

Claude Mythos Preview did not just scan for known issues — it found new ones. According to Anthropic's red-team disclosure, the model autonomously identified critical zero-days in every major operating system (Linux, Windows, macOS, FreeBSD, OpenBSD) and every major web browser, then proceeded to build working exploits for them.

Among the most striking discoveries: a denial-of-service vulnerability in OpenBSD's TCP SACK implementation that had existed undetected for 27 years — an integer overflow condition allowing a remote attacker to crash any OpenBSD host over a standard TCP connection. The flaw had survived countless audits, code reviews, and academic analyses since its introduction in 1999.

The exploit demonstrations were even more alarming. Mythos Preview authored:

  • A browser exploit chaining four separate vulnerabilities — including a JIT heap spray — to escape both the renderer sandbox and the OS sandbox
  • A Linux privilege escalation leveraging race conditions and KASLR bypasses to achieve full root access
  • A FreeBSD NFS remote code execution exploit granting unauthenticated users complete root control of affected servers

Anthropic's internal red team report noted that over 99% of the vulnerabilities discovered have not yet been patched. The sheer scale of what Mythos found — across commercial and open-source software alike — has forced a fundamental re-evaluation of what AI-assisted cyberattack capability looks like in 2026.

Washington Summons Wall Street to an Emergency Meeting

The implications reached well beyond the tech industry. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell jointly convened an emergency session at Treasury headquarters in Washington, summoning the CEOs of Bank of America, Citigroup, Goldman Sachs, Morgan Stanley, and Wells Fargo to personally assess the systemic cyber exposure that a model like Mythos represents.

Financial infrastructure — payments networks, clearing houses, trading systems — runs on many of the same operating systems and network stacks that Mythos had just silently dissected. The meeting, described by Fortune as unprecedented in its urgency, signals that AI-powered vulnerability discovery is now a macroeconomic concern, not merely a software engineering one.

Project Glasswing: $100 Million to Heal What AI Exposed

Anthropic was quick to pair the disclosure with a constructive response. Project Glasswing, launched simultaneously, is a controlled-access defense coalition built around Mythos Preview — turning the same capability that found these vulnerabilities toward fixing them.

The initiative launched with 12 founding partners, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, with more than 40 additional critical infrastructure organizations participating in a broader ring of access.

Anthropic is committing up to $100 million in Mythos Preview usage credits to Project Glasswing participants, along with $4 million in direct donations to open-source security foundations. The goal: use the AI defensively, at scale, to surface and patch vulnerabilities before malicious actors — state-sponsored or otherwise — can exploit them.

The Dual-Use Dilemma at Its Sharpest

Anthropic has made Mythos Preview available only to a tightly controlled group of defenders, specifically to prevent the exploit capabilities from being weaponized. The company is restricting access under a framework it calls a "jagged frontier" — acknowledging that the model sits at the precise edge where AI transitions from security research tool to potential cyberweapon.

Simon Willison, writing about Project Glasswing, noted that the controlled-access model "sounds necessary" given what the model can do: it can take any piece of software, reason about its internals at a depth no human analyst could sustain across thousands of codebases simultaneously, and produce a working, weaponizable exploit. The question of who holds that capability matters enormously.

What This Means for the Industry

For software developers, open-source maintainers, and security teams, the message is clear: the era in which bugs could hide in aging codebases indefinitely — surviving because human reviewers have finite time and attention — is ending. An AI that can audit millions of lines of code with the rigor of a nation-state hacking unit, at a fraction of the cost, changes the economics of both offense and defense permanently.

The 27-year-old OpenBSD bug is not just a technical anecdote. It is a symbol of everything that has been quietly wrong in software security for decades — and a signal that AI has the potential to finally, systematically, fix it.

Whether the industry can move fast enough to patch what Mythos has already found — before others find it independently — may be one of the defining security challenges of 2026.


Sources: Anthropic Project Glasswing | Anthropic Red Team Disclosure | The Hacker News | Fortune | PC Gamer