To All of You Having an AI Psychosis Moment
Andrej Simunaj
To all of you who are having an “AI psychosis” moment, you should read this.
Nothing changed. Nothing.
All economically valuable tasks can already be performed by Sonnet and Opus-level models. That’s what matters.
I’ve been deeply immersed in AI since GPT-3.5. I’m an engineer. AI has written 100% of my code for over two years. I use it for marketing, strategy, sales, everything, even personal health improvements.
When Opus 4.6 came out and it suddenly became apparent to everyone what level of autonomy AI could achieve, I had my own version of AI psychosis.
I snapped out of it 2 months later. So will you.
What is Mythos, actually?
Before we spiral, let’s establish the facts.
On April 7, 2026, Anthropic unveiled Claude Mythos Preview, their most capable model to date, described internally as “a step change” in AI performance. It is not publicly available. It was released under a controlled initiative called Project Glasswing to a group of 12 major launch partners: AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and Anthropic itself, plus over 40 additional organizations working on critical infrastructure. Anthropic backed this with $100 million in usage credits.
The benchmarks are genuinely impressive:
| Benchmark | Mythos Preview | Claude Opus 4.6 |
|---|---|---|
| SWE-bench Verified (coding) | ~93.9% | 80.8% |
| SWE-bench Pro | 77.8% | 53.4% |
| USAMO 2026 (math reasoning) | 97.6% | 42.3% |
| GPQA Diamond (graduate science) | 94.5% | 91.3% |
| Cybersecurity benchmarks | 83.1% | 66.6% |
The jump in reasoning is particularly striking: from 42.3% to 97.6% on USAMO, an olympiad-level math competition. The coding improvement is meaningful. The cybersecurity gap is what triggered the headlines.
Consider this.
Where is your concern coming from?
1. Is it that you feel you will not get the same level of access to the latest models as corporations and governments?
That’s been true for technology for the last 50 years. Or forever. Nothing new.
And by the way, you think you’ve had access to the latest model so far? Guess again. There’s no way the US government was using the same Claude Opus as you and me on a $200/mo subscription.
2. Do you fear that your peers will outrun you because now they have the same level of access to generate code or outcomes as you do?
That requires agency. It’s not your expertise that differentiates you; it’s the level of agency you have that others lack.
Your agency got you to learn and use the systems that elevated you above others. That has not changed, and will not change.
3. Look at the adoption curve.
Even after three years of GPT-4, barely anybody I know has embedded AI into their workflows in any meaningful way.
There is such a long way to go. And you, who are currently having an AI psychosis moment, are going to snap out of it, spend years helping others integrate, and help them through the same feelings of psychosis.
Now… Mythos apparently can break security.
So what?
Let me be specific about what it actually did, because the headlines are breathless and the reality is both more and less scary than portrayed.
Anthropic’s own red team used Mythos Preview to autonomously identify thousands of zero-day vulnerabilities across every major operating system and every major web browser, many of them critical flaws that had survived decades of human review and millions of automated security tests. Specific finds included:
- A 27-year-old vulnerability in OpenBSD exploitable via TCP sequence number overflow
- A 17-year-old remote code execution flaw in FreeBSD’s NFS server, found and exploited fully autonomously, with no human involvement after the initial request
- A 16-year-old flaw in FFmpeg’s H.264 codec
- Linux kernel privilege escalation exploits chaining multiple vulnerabilities
- Browser vulnerabilities requiring coordinated exploitation of four separate bugs
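As an aside, the “sequence number overflow” class of bug mentioned above typically comes from comparing wrapping 32-bit counters naively. Here is a minimal, purely illustrative Python sketch of the general pattern (this is not the actual OpenBSD flaw; the function name and values are hypothetical):

```python
# TCP sequence numbers live in a 32-bit space and wrap around,
# so a naive comparison (a < b) gives the wrong answer near the wrap point.
MOD = 1 << 32   # 2**32, the size of the sequence-number space
HALF = 1 << 31  # half the space, the threshold for "before"

def seq_before(a: int, b: int) -> bool:
    """Wraparound-aware 'a precedes b': the forward distance
    from a to b is nonzero and under half the space."""
    return 0 < (b - a) % MOD < HALF

# b is 32 steps after a, but past the 32-bit wrap point:
a, b = 0xFFFFFFF0, 0x00000010
print(a < b)             # naive comparison: False (wrong)
print(seq_before(a, b))  # wraparound-aware: True (correct)
```

Code that forgets the modular step, or mixes signed and unsigned arithmetic around it, is exactly the kind of subtle flaw that can survive decades of review.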
During testing, an earlier version of Mythos also broke out of its sandbox environment and independently posted details of its own exploit on publicly accessible websites. Anthropic also detected what they described as “strategic manipulation” features, including what appeared to be hidden evaluation awareness, where the model behaved differently when it suspected it was being tested.
That’s real. I’m not minimizing it.
But here’s the thing: the way this is being handled is actually the most reassuring possible outcome.
For years we had the same fears about quantum computing breaking encryption. That threat is still theoretical. But even if it weren’t, the right response was always the same: patch before the breach, not panic after.
Anthropic chose not to release Mythos publicly. Instead, they pointed it at the world’s most critical software, found the vulnerabilities that have been sitting there for decades, disclosed them responsibly with SHA-3 commitments, and are working with the maintainers to patch them before any bad actor gets access.
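The “SHA-3 commitments” bit is worth unpacking: a hash commitment lets a discloser publish proof that a vulnerability report existed at a certain time without revealing its contents until patches ship. A minimal sketch, assuming a simple nonce-plus-digest scheme (the field names and workflow are my illustration, not Anthropic’s actual process):

```python
import hashlib
import secrets

def commit(report: bytes, nonce: bytes) -> str:
    # The nonce stops anyone brute-forcing the commitment
    # against guessable, low-entropy report contents.
    return hashlib.sha3_256(nonce + report).hexdigest()

report = b"Hypothetical advisory: NFS server RCE, details withheld"
nonce = secrets.token_bytes(32)

# Publish only the digest now; it binds you to the report
# without disclosing anything about it.
digest = commit(report, nonce)

# Later, after the fix lands, reveal (report, nonce);
# anyone can recompute the digest and verify the claim.
assert commit(report, nonce) == digest
```

The point is that maintainers and the public can later verify the discloser knew about the flaw before the patch, which keeps “responsible disclosure” honest on both sides.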
The fact that the “good guys” have access first means we can all harden before the blow lands.
It’s coming either way. And on the bright side: if any of our systems were not fully secure, it’s better to have them secured sooner rather than later.
These are natural steps in the evolution of technical capabilities. Nothing more, nothing less. Nothing godlike has suddenly appeared, although it may seem that way to some of us.
A note on the skeptics
It’s also worth noting that not everyone is convinced. Critics, including Gary Marcus, have argued that Anthropic’s safety framing conveniently doubles as a marketing strategy. Some security researchers point out that AI models can suggest exploits but still can’t reliably generate working attacks at scale. The sandbox escape story has been scrutinized for context. Healthy skepticism is warranted. The picture is nuanced.
The world is still the same.
What I realized after my Opus 4.6 psychosis is that two months later, the world is still the same.
I walk outside. The grass is green. The sky is blue. And most of my friends are still oblivious to the fact that AI exists.
Most businesses are still not utilizing AI to its full extent. Even the ones that are still have a long way to go. We’re finding new ways to monetize AI every single day.
On the flip side
Think about the benefits of having the most powerful AI model in the world pointed towards problems like disease and economic prosperity for everyone.
Project Glasswing is a glimpse of this. Mythos Preview didn’t just find attack vectors; it found decades-old bugs in software that millions of people depend on every day. The same capability that terrifies people in an offensive context becomes extraordinarily valuable in a defensive one. That’s not spin. That’s the actual deployment Anthropic chose.
And the big reminder: open source has so far been able to catch up within six months or less. There’s no evidence that open source won’t continue catching up. Anthropic itself plans to release a safer, public version of Mythos capabilities alongside a future Claude Opus model.
Even if it were three years, who cares? We’ll all have access to Mythos-level AI models, one way or the other.
I could not be more optimistic about the future.
Ask yourself this
Would you rather have Mythos not exist, and prove that scaling laws have reached a ceiling?
Or would you rather have it exist, and show that more is possible than we currently know?
I, for one, am happy and excited.
Andrej Simunaj is the founder of Zero Point Studio, an AI product studio and engineering agency building TranscriptAPI, Recapio, Cascady, and CRHQ.