April 20, 2026 · Samarth at CLSkills · claude mythos · project glasswing · anthropic

Claude Mythos and Project Glasswing: Anthropic Just Released an AI That Found Zero-Days in Every Major OS

On April 7, 2026, Anthropic previewed Claude Mythos — a model that autonomously found and exploited a 17-year-old FreeBSD RCE and zero-days across every major OS and browser. It's not going GA. Here's what Project Glasswing is, who's in, and what it means for software security.

Claude Mythos: The Model Anthropic Isn't Releasing

On April 7, 2026, Anthropic quietly did something unusual for a frontier AI lab: they announced a new model — Claude Mythos Preview — and simultaneously said they would not make it generally available.

The reason is in the demo. Mythos autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD (CVE-2026-4747) that gives attackers full control of affected servers. During testing, the same model found and exploited zero-day vulnerabilities in every major operating system and every major web browser.

Anthropic's response was Project Glasswing — restricted distribution to a small set of trusted partners (AWS, Apple, Google, JPMorgan Chase, Microsoft, Nvidia) to find and fix vulnerabilities in critical infrastructure BEFORE Mythos-class capabilities leak into the hands of attackers.

This is probably the most important AI-safety decision Anthropic has made to date. Here's what it actually means.

TL;DR

  1. Claude Mythos Preview is a new general-purpose model that's strikingly capable at computer security tasks.
  2. It found zero-days in every major OS and browser during Anthropic's internal testing.
  3. It fully autonomously exploited a 17-year-old FreeBSD RCE (CVE-2026-4747) — not guided, not suggested, autonomous.
  4. Anthropic will NOT release it publicly. No API access, no Claude.ai availability, no enterprise tier.
  5. Project Glasswing gives vetted partners (AWS, Apple, Google, JPMorgan Chase, Microsoft, Nvidia) early access to find and patch vulnerabilities in their infrastructure.
  6. This is the first time a frontier lab has announced a model, stated that its capabilities are too dangerous for GA, and committed to restricted distribution upfront, rather than recalling it after discovery.
  7. Claude Opus 4.7 and earlier models are UNAFFECTED. Your Claude Code, Claude.ai, and API workflows don't change.

Why This Is A Meaningfully Different Announcement

Frontier labs usually ship models and then figure out the safety story. GPT-4 was jailbroken within 72 hours of public release. Claude Opus 4.6 had its system prompt leaked within a week. Llama 3 405B became the foundation for uncensored open-source derivatives that now run offensive security tools in the wild.

Anthropic looked at Mythos's capabilities and said "we're not releasing this."

That's new. Not new in principle — Anthropic's Responsible Scaling Policy (RSP) has committed to restricted release for models with certain capability profiles since 2023. New in that they actually did it, with specific examples, and published the reasoning.

What Mythos Actually Does

Per Anthropic's technical preview:

  • Full-stack vulnerability research. Give it a codebase; it finds classes of bugs (memory safety, race conditions, input validation, auth bypasses) at a rate comparable to senior security researchers.
  • Autonomous exploitation. Not just "identifies a vuln" — actually writes working exploits that achieve RCE / privilege escalation / data exfiltration.
  • Cross-domain reasoning. Connects an OS-level bug to how it chains with a browser-level bug to how it survives mitigations at the application layer. This chaining is what makes it genuinely dangerous.
  • Novel zero-day discovery. Not just "find known bugs in new code" — finds previously-unknown vulnerabilities that existed in production software for years without being noticed.

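To make the bug-class point concrete, here is a hypothetical sketch of the kind of input-validation flaw an automated auditor hunts for: a naive path check that crafted input bypasses, next to a robust version. All names and paths are invented for illustration; none of this comes from any real codebase or from Anthropic's demo.

```python
# Illustrative only: a classic path-traversal input-validation bug class.
from pathlib import Path

WEB_ROOT = Path("/srv/www/files")  # hypothetical served directory

def is_safe_naive(user_path: str) -> bool:
    # Buggy check: substring filtering misses absolute paths entirely.
    # "/etc/passwd" contains no ".." yet escapes the root.
    return ".." not in user_path

def is_safe(user_path: str) -> bool:
    # Robust check: resolve the full path, then confirm it still sits
    # under WEB_ROOT after normalization.
    target = (WEB_ROOT / user_path).resolve()
    return target.is_relative_to(WEB_ROOT.resolve())
```

The naive check accepts an absolute path like /etc/passwd because it contains no "..", while resolving against the web root rejects it. Per the preview, Mythos-class auditing is this kind of reasoning applied across entire codebases and bug classes at once.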
The 17-year-old FreeBSD RCE demo is the concrete example everyone's pointing at. FreeBSD is an actively-maintained OS with a competent security team. That bug had been in the code for 17 years. Mythos found it, in isolation, autonomously.

Extrapolate. If Mythos can do that to FreeBSD, it can do that to every major enterprise software stack. Attackers with Mythos-class access would have a structural advantage over defenders.

Project Glasswing: The Restricted-Access Solution

Glasswing is Anthropic's attempt to get ahead of the problem. The logic:

  1. We can't un-invent this capability. Other labs will build Mythos-class models. Open-source efforts will catch up in 12-24 months.
  2. But we can buy time by using Mythos defensively first. If the most critical software in the world gets audited by Mythos and the bugs get fixed BEFORE the capability becomes widely available, the attack surface shrinks.
  3. Partner with the organizations that run the world's critical infra. AWS runs ~30% of cloud. Apple + Google run ~99% of consumer OS. Microsoft owns enterprise OS + productivity. Nvidia owns the AI hardware stack. JPMorgan Chase represents the financial sector's risk profile. If Mythos audits their code, the defensive baseline rises for everyone downstream.

The partner list tells you Anthropic's priorities: the platforms where a Mythos-level vulnerability would cascade the widest.

What's In The Partnership

Anthropic's public materials + the Fortune and TechCrunch coverage describe the Glasswing partnership as:

  • Access to Mythos Preview for the partner's internal security teams
  • Structured vulnerability-discovery process — partners feed their code; Mythos reports; Anthropic + partner coordinate disclosure
  • Patch-first-publish-later protocol — vulnerabilities found via Glasswing get 90-180 days of private remediation before any disclosure
  • Shared learnings — partners contribute findings to a pooled research body so improvements at one partner benefit the others
  • No access to model weights. Partners get API access, not the raw model. Anthropic retains operational control.
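The remediation window is simple to operationalize. A minimal sketch, assuming only the 90-180 day figure stated above; the function name and example dates are mine, not Anthropic's:

```python
# Sketch of a patch-first-publish-later window: given the date a
# vulnerability was found, compute the earliest and latest dates it may
# be publicly disclosed under a 90-180 day private-remediation window.
from datetime import date, timedelta

def disclosure_window(found: date, min_days: int = 90, max_days: int = 180):
    # Returns (earliest_disclosure, latest_disclosure).
    return found + timedelta(days=min_days), found + timedelta(days=max_days)
```

A finding dated April 7, 2026 would, under those assumptions, be publishable no earlier than early July and no later than early October 2026.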

What This Means For Regular Claude Users

If you use Claude.ai, Claude Code, Claude API, or Claude Cowork — nothing changes.

Mythos is a separate model lineage from Opus / Sonnet / Haiku. Your existing workflows keep running on the models you've been using. Opus 4.7 is still the flagship for general use. Mythos isn't a "better Opus" — it's a specialized capability profile that Anthropic has decided not to offer to the general market.

You will not be able to access Mythos through API keys, Pro subscriptions, Claude Code, or any consumer product. Anthropic has been clear: not now, not later, not with an enterprise NDA. The only path is Glasswing partnership.

What This Means For Security Teams

If you work in an AppSec / product security / red team role:

  1. Glasswing partners are about to produce a lot of CVEs in the next 3-6 months. Expect a wave of security advisories from AWS, Apple, Google, Microsoft, and adjacent ecosystems as Glasswing findings go through coordinated disclosure.
  2. Your own patching cadence matters more than ever. If Mythos-class models can find 17-year-old bugs, attackers with less-advanced tooling can find 2-year-old bugs. Patch lag becomes a major risk factor.
  3. Get good at using Claude Opus 4.7 defensively. Opus 4.7 is not Mythos, but it's capable at code review and vulnerability analysis. The gap between "what you can do with the best publicly-available model" and "what Mythos can do" is narrower than you might think for most real-world codebases. My guide to Opus 4.7 for code work is here.
  4. Don't panic-buy "AI security tools." Most of the tools being marketed as "AI-powered security scanning" are GPT-4-class wrappers that can't approach Mythos capabilities. If a vendor claims otherwise, ask for the specific CVEs they've discovered that were not in any prior database.
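The patch-lag point is measurable. Here is a minimal sketch of patch-lag tracking, assuming you can export advisory publication dates and your own deployment dates; all IDs, dates, and the 30-day SLA below are invented for illustration:

```python
# Sketch: flag advisories whose patch lag exceeds an SLA window.
from datetime import date

# (advisory_id, published, patched_in_prod or None if still open)
ADVISORIES = [
    ("CVE-2026-0001", date(2026, 1, 10), date(2026, 1, 20)),
    ("CVE-2026-0002", date(2026, 2, 1), None),  # still unpatched
]

def patch_lag_days(published: date, patched, today: date) -> int:
    # Lag runs from publication to the patch date, or to today if unpatched.
    end = patched if patched is not None else today
    return (end - published).days

def overdue(advisories, today: date, sla_days: int = 30):
    # Advisory IDs whose lag exceeds the SLA window.
    return [aid for (aid, pub, patched) in advisories
            if patch_lag_days(pub, patched, today) > sla_days]
```

Run weekly against your advisory feed and the overdue list is the queue attackers are working from.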

The Open Question: Will Mythos-Class Capabilities Leak?

This is the uncomfortable part.

Anthropic's restricted-release approach works only if other labs don't ship equivalent capabilities to the public market. OpenAI, DeepMind, Meta, Mistral, Qwen, DeepSeek — any of them could announce a Mythos-equivalent in 2026 and ship it with loose or no restrictions.

The track record is mixed. OpenAI has historically been more permissive with release. DeepMind tends toward restricted release (Gemini Ultra took months). Meta's Llama is open-weight by design. Chinese labs like DeepSeek and Qwen open-source aggressively.

If even one frontier lab ships Mythos-class capabilities publicly in 2026, Project Glasswing becomes a brief head-start, not a structural advantage. The 3-5 month window Anthropic buys through Glasswing might be all the time we get.

Why Anthropic Did This

Two things stand out about the announcement:

First, the specificity. Anthropic didn't just say "our new model has dangerous cyber capabilities." They published the specific demo: CVE-2026-4747, FreeBSD, 17 years old, fully autonomous exploit. This is harder to dismiss than vague capability claims. It's also harder to undo — the cat is out of the bag that Mythos-class capabilities exist, even if the model weights stay locked down.

Second, the timing. April 7 was 13 days before Amazon's $25B investment announcement on April 20. The timeline suggests Anthropic wanted the Mythos announcement (which positions them as the safety-first lab) out of the way before the big funding headline (which might have looked purely commercial). Deliberate narrative sequencing.

What To Watch Next

Next 90 days:

  • First wave of Glasswing-sourced CVEs from Apple, AWS, Google, Microsoft
  • Open-source / competitor responses — does any other lab ship a Mythos equivalent in public?
  • Congressional / EU regulatory response. A model that finds zero-days in critical infra will attract regulator attention.

Next 12 months:

  • Will Anthropic expand the Glasswing partner list? (Likely yes, probably adding healthcare, telco, and one or two sovereign partners.)
  • Will Glasswing yield a published report of aggregate findings? (Probably yes by end of 2026 — a "here's what we found across the ecosystem" whitepaper is valuable marketing + industry signal.)
  • Will Opus 5 / Sonnet 5 inherit cyber capabilities from Mythos research? (Almost certainly yes, in dampened form.)

Honest Take

Mythos is the first public evidence we have that AI labs are running into capabilities where the answer to "should we release this?" is clearly no. Anthropic is being transparent about that decision. Whether the rest of the industry matches this pattern — or whether Mythos leaking / being replicated elsewhere forces the issue — is what determines if Glasswing was prescient or performative.

I'd bet prescient. The model exists. Similar models will exist at other labs in 6-18 months. Anthropic's restricted-release approach gives the world a head start on fixing the obvious vulnerabilities. That head start matters even if it doesn't last.

Questions about Mythos, Glasswing, or what to do differently in your security program? Reply on the newsletter — I answer every email.
