
Something significant happened quietly in April 2026. Anthropic began testing a new AI model called Mythos — and by their own description, it represents "a step change in capabilities" beyond anything they have previously released.

Mythos has not been made publicly available. Instead, Anthropic launched Project Glasswing, a controlled initiative through which twelve partner organisations — including Amazon, Apple, Microsoft, Cisco, CrowdStrike and the Linux Foundation — are deploying Mythos specifically for defensive security work.

The reason for the caution is also the reason for the excitement: Mythos is extraordinarily capable.


What Makes Mythos Different

Mythos is a general-purpose frontier model with meaningful advances in reasoning and coding. But its headline capability is in cybersecurity — and the results are startling.

In testing, Mythos has already identified thousands of high-severity vulnerabilities across major operating systems and web browsers. In one case, it uncovered a 27-year-old bug in OpenBSD — a security-focused operating system that has been scrutinised by human experts for nearly three decades.

The implication is significant: AI has reached the point where it can find software vulnerabilities faster than human security teams can patch them. Anthropic's own research describes a model that surpasses all but the most skilled humans at finding and exploiting software weaknesses.

This is why Mythos has not been released publicly. The same capability that makes it a powerful defensive tool also makes it a serious offensive risk in the wrong hands. Anthropic is proceeding carefully — committing up to $100 million in usage credits to Project Glasswing partners while they study how to deploy this capability responsibly.


The Opportunity for Companies

Most businesses will not get direct access to Mythos in the near term. But the implications of its existence are significant for every organisation that runs software — which is to say, every organisation.

1. The security baseline is about to shift

If AI can find vulnerabilities this quickly, attackers will eventually gain similar capabilities, whether through Mythos itself or the models that follow. The organisations best positioned for that moment are the ones hardening their systems now, before the capability is widely available.

2. AI-powered security testing is becoming the standard

Human penetration testers are expensive, periodic and limited in scope. AI agents running continuously can scan systems around the clock, catching issues as they are introduced rather than weeks or months later. Early adopters of this approach will accumulate significantly less security debt.

3. The gap between secured and unsecured software is widening

The organisations involved in Project Glasswing, including Amazon, Apple, Microsoft and the Linux Foundation, will emerge with substantially more hardened systems than organisations outside the programme. Over time, this creates a meaningful competitive gap in customer trust, regulatory compliance and incident frequency.

4. Mythos signals what general AI can now do

Beyond security, a model with Mythos-level reasoning and coding capability represents a step change in what AI agents can accomplish across software engineering broadly. The advances in code understanding that enable vulnerability detection also enable better code generation, architecture analysis and automated testing. Whatever Anthropic releases publicly next will carry these improvements.


What Should You Do Now?

You do not need access to Mythos to act on what its existence tells us.

Invest in security fundamentals. AI-assisted vulnerability detection will find what has been missed. Organisations with strong patch management, dependency hygiene and infrastructure-as-code practices will respond faster when issues are identified.
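
Dependency hygiene starts with an accurate inventory of what is actually installed. As a minimal sketch (standard library only, and not a substitute for a real scanner such as pip-audit or an OSV-based tool), a Python environment can report its own packages and versions for downstream checking:

```python
from importlib.metadata import distributions


def installed_packages() -> dict[str, str]:
    """Return a mapping of installed distribution names to versions."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in distributions()
        if dist.metadata["Name"]  # skip entries with malformed metadata
    }


if __name__ == "__main__":
    # Emit a requirements-style inventory a vulnerability scanner could consume.
    for name, version in sorted(installed_packages().items()):
        print(f"{name}=={version}")
```

Feeding an inventory like this into an automated scanner on every build is the kind of continuous check the Glasswing approach points towards.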

Start using AI for defensive security today. Tools available now — including Claude — can review code for common vulnerabilities, analyse dependencies, and help teams think through attack surfaces. This is not a replacement for a security programme, but it is a meaningful accelerant.
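
As an illustration of what this looks like in practice, here is a short sketch using the Anthropic Python SDK to ask Claude for a security review of a code snippet. It assumes the anthropic package is installed and an ANTHROPIC_API_KEY is set in the environment; the model name is a placeholder to replace with a current one.

```python
import os

REVIEW_INSTRUCTIONS = (
    "Review the following code for common security issues "
    "(injection, hardcoded secrets, unsafe deserialisation, missing input "
    "validation). List each finding with a severity and a suggested fix."
)


def build_review_prompt(code: str, filename: str = "snippet") -> str:
    """Wrap source code in a review request for the model."""
    return f"{REVIEW_INSTRUCTIONS}\n\nFile: {filename}\n```\n{code}\n```"


def review_with_claude(code: str) -> str:
    """Send code to Claude for review. Requires ANTHROPIC_API_KEY to be set."""
    import anthropic  # third-party SDK: pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use a current model
        max_tokens=1024,
        messages=[{"role": "user", "content": build_review_prompt(code)}],
    )
    return response.content[0].text


if __name__ == "__main__" and os.environ.get("ANTHROPIC_API_KEY"):
    print(review_with_claude('password = "hunter2"  # hardcoded credential'))
```

Wiring a check like this into a pull-request pipeline is one low-effort way to get continuous, AI-assisted review without waiting for broader access to frontier security models.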

Watch the Project Glasswing outcomes. Anthropic has committed to publishing findings from this initiative. The vulnerabilities discovered in open-source software will be disclosed responsibly — and the lessons learned will shape how AI security tooling is designed and deployed going forward.

Think about trust as a competitive advantage. As AI-powered attacks become more capable, customers will increasingly choose vendors whose security posture they trust. Companies that move early on AI-assisted security will be better positioned to demonstrate that trust credibly.


The Bigger Picture

Mythos is a preview of where AI is heading. The fact that Anthropic chose to debut it in a controlled, defensive context — rather than rushing it to market — speaks well of their approach. But the underlying capability is real, and it will shape the security landscape for years.

The organisations that will thrive are not necessarily those with the most advanced AI. They are those that understand what AI changes about their risk environment and act accordingly — now, while there is still time to get ahead of it.


Want to understand how AI capabilities like Mythos affect your organisation's security or technology strategy? Talk to Reinvently.
