I guess this is the crux of the debate. All the claims compare freely available models with a model that is available only to limited customers (Mythos). The problem here is the phrase "better model". Better how? Is it trained specifically on cybersecurity? Is it simply a large model with a higher token/thinking budget? Is it a better harness/scaffold? Is it simply a better prompt?
I don't doubt that some models are stronger than others (a Gemini Pro or a Claude Opus has more parameters, a larger context size, and was probably trained for longer and on more data than its smaller counterpart, Flash or Sonnet respectively).
Unless we know the exact experimental setup (impossible in this case, because Mythos is completely closed off and not even accessible via API), all of this is hand-wavy. Anthropic is definitely not going to reveal its setup, because whether or not there is any secret sauce, there is more value in letting people's imaginations fly and the marketing machine work. Anthropic must be jumping with joy at all the free publicity they are getting.
In conclusion: having a lot of tokens helps! Having a better model also helps. Having both helps a lot. Having very intelligent humans + a lot of tokens + the best frontier models will help the most (emphasis on intelligent humans).
The defender also has to not only discover issues but get fixes deployed. Installing patches takes time, and once a patch is available, the attacker can reverse-engineer the exploit from it and use it to attack unpatched systems. This is happening in a matter of hours these days, and AI can accelerate it.
It is also entirely possible that the defender will never create patches, or that users will never deploy them, because it is not economically viable. Things like cheap IoT sensors can have vulnerabilities that never get addressed because there is no profit in spending the tokens to find and fix the flaws. Even if they were fixed, users might not know about the patches, or might not think deploying them is worth their time.
Yes, there are many major systems that do have the resources to do reviews and fix problems and deploy patches. But there is an enormous installed base of code that is going to be vulnerable for a long time.
Adding the words “by Claude” to it doesn’t materially change it. One could also pay a few humans to do the same thing. People have done that for decades.
- What if at a certain level of capability you're essentially bug-free? I'm somewhat skeptical that this could be the case in a strong sense, because even if you formally prove certain properties, security often crucially depends on the threat model (e.g. side-channel attacks, constant-time requirements, etc.), but maybe it becomes less of a problem in practice?
- What if past a certain capability threshold weaker models can substitute for stronger ones if you're willing to burn tokens? To take a coding example: GPT-3 couldn't code at all, so I'd rather have X tokens with, say, GPT 5.4 than 100X tokens with GPT-3. But would I rather have X tokens with GPT 5.4 or 100X tokens with GPT 5.2? That's murkier, and I could see some kind of indifference curve there.
If anyone ever gets access to the mythical Mythos, we'll see the contact with reality.
With LLMs, even the halting problem is just a question of paying for a pro subscription!
This continuous rush is not healthy. npm updates, replies to articles that barely made HN 12 hours ago, anything like that. It's not healthy.
Slow down.
Interestingly enough, I was thinking of writing an article about how cybersecurity (both access models and operational assumptions) can be modeled as a proof system (NOT a proof-of-work system). By that I mean there is an abstract model with a set of assumptions (policies, identities, invariants, configurations, and implementation constraints) from which authorization decisions are derived.
A model is secure if no unauthorized action is derivable.
A system is correct if its implementation conforms to the model's assumptions.
A security model can be analyzed operationally by how likely its assumptions are to hold in practice.
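The three statements above can be sketched as a tiny Horn-clause derivation check in Python (all names and rules here are hypothetical examples, not from any real system): assumptions are base facts, rules derive authorization judgments, and the model counts as secure iff no forbidden action appears in the closure.

```python
# Minimal sketch of "security model as a proof system" (illustrative only).
# Facts encode the assumptions (identities, policies); ground-instantiated
# Horn rules derive authorization judgments from them.

def derive(facts, rules):
    """Compute the closure of `facts` under `rules`.
    Each rule is (premises, conclusion): if all premises hold, add the conclusion."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in closure and all(p in closure for p in premises):
                closure.add(conclusion)
                changed = True
    return closure

# Assumptions: identities and policies (hypothetical).
facts = {
    ("role", "alice", "admin"),
    ("policy", "admin", "read_logs"),
}
# Rule: role(alice, admin) and policy(admin, read_logs) => can(alice, read_logs)
rules = [
    ([("role", "alice", "admin"), ("policy", "admin", "read_logs")],
     ("can", "alice", "read_logs")),
]
# Actions the model must never authorize.
forbidden = {("can", "mallory", "read_logs")}

closure = derive(facts, rules)
secure = not (closure & forbidden)  # secure iff no unauthorized action is derivable
print(secure)                       # → True under these assumptions
```

Operational analysis then becomes the question of how likely the base facts are to hold in the deployed system, separate from whether the derivation itself is sound.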
AISLE demonstrated in the last few weeks that small (weak, per the author) models can find the OpenBSD bug (when pointed at the code), and apparently did several runs with the same results. Was GPT OSS hallucinating on all those runs?
And what separates a strong model from a weak one? Is Qwen3.5 27B weak?
Don't trust anyone who says that weak models can find the OpenBSD SACK bug. I tried it myself. What happens is that weak models hallucinate (sometimes casually hitting a real problem): they flag the lack of validation of the start of the window (which is in theory harmless because of the start < end validation) and the integer overflow problem, without understanding why the two, put together, create an issue. It's just pattern matching of bug classes onto code that looks like it may have a problem, totally lacking the true ability to understand the issue and write an exploit. Test it yourself; GPT 120B OSS is cheap and available.
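The combination described above — an unvalidated window start that seems covered by a start < end check, defeated by 32-bit wraparound — can be illustrated generically. This is NOT the actual OpenBSD SACK code, just a hypothetical sketch of the bug class, with Python integers masked to emulate 32-bit unsigned arithmetic:

```python
# Generic illustration of the two "harmless on their own" ingredients combining
# (hypothetical code, not the real OpenBSD implementation).

MASK = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic

def accept_block(start, end, win_lo):
    # Ingredient 1: `start` is never compared against the window's left edge;
    # the start < end check is assumed to be enough.
    if not start < end:
        return None
    # Ingredient 2: the offset is computed in 32-bit arithmetic, so a `start`
    # below the window wraps around to a huge value.
    offset = (start - win_lo) & MASK
    length = (end - start) & MASK
    return offset, length

# Window starts at 0x1000; the attacker picks start below it.
# start < end still holds, so the check passes — but the offset wraps.
offset, length = accept_block(start=0x10, end=0x1800, win_lo=0x1000)
print(hex(offset))  # → 0xfffff010, far outside anything the window covers
```

Seeing either ingredient alone is cheap pattern matching; the bug only exists in their interaction, which is what the weak models reportedly fail to articulate.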
BTW, this is why with this bug, the stronger the model you pick (short of one strong enough to discover the true bug), the less likely it is to claim there is a bug. Stronger models hallucinate less, so they can't see the problem from either side of the spectrum: the hallucination side of small models, or the real-understanding side of Mythos.
tomato, tomato
1) Create fear via the pro-American Axel Springer press (Politico). Use UK/EU competition to make the EU jealous:
https://www.politico.eu/article/anthropic-hacking-technology...
2) Hype up the thing via clueless publications like the Guardian:
https://www.theguardian.com/technology/2026/apr/17/finance-l...
"As you would expect, the engagement I have had from UK CEOs in the last week has been significant."
3) Sell the damn thing that finds 20 vulns in an NNTP-over-CORBA app written in INTERCAL to EU and UK companies.
None of the people involved in "dealing with the threat" have the slightest clue. The UK and EU always fall for the latest US hype, and CEOs pay up.
It's not proof of work, but proof of financial capacity.
The big companies are turning the access to high-quality token generators (through their service) into means of production. We're all going direct to Utopia, we're all going direct the other way.
So the bigger models hallucinate better, casually hitting more real problems?
This is exactly the argument AI skeptics make, btw. Also, you say you tried GPT 120B OSS; that's like me proclaiming LLM coding doesn't work because I tried putting GPT 3.5 in Claude Code. Try it with GLM 5, Qwen, etc. Or improve your harness :)
It's not just PoW at inference; it's PoW of inference + training.