by goldenarm
7 subcomments
- When your logo is AI, your illustrations are AI, and your profile pic is AI, I'm going to assume the text is AI too and won't read it.
by saithound
13 subcomments
- It's pretty clear at this point that Mythos' capability to discover and exploit zero-day vulnerabilities at scale is but an incremental improvement over existing models like the ones available to OpenAI's Plus/Pro subscribers.
Anthropic tries to create marketing hype around Mythos using two psychological tricks.
1. Put large numbers in the headlines.
"Mythos discovered 271 vulnerabilities in Firefox" makes the model seem extremely capable to the uninitiated.
But it's actually meaningless as a measure of capability _improvement_.
Anthropic gave away $100mil specifically as Mythos credits to these projects and companies (that's $2.5mil per project). Spending the same exorbitant amount of compute analyzing the same codebases with an older model like GPT 5.x Pro would have turned up 260 of these vulnerabilities, or could even have turned up more than 271.
No need to speculate, since this is exactly what we saw in the few codebases where we have such comparisons (like curl). Supposedly weaker models, working with a much lower budget, turned up dozens of vulnerabilities. Mythos turned up only one, which ended up as a low-severity CVE.
2. Do the whole "too dangerous to release" shtick. This is one of Dario Amodei's favorite moves. When he was vice president of research at OpenAI, he declared GPT-3 (which wasn't able to produce coherent text beyond 3-4 sentences at the time) too dangerous [1] as well.
Long story short, it's the ChatGPT 4.5 situation again: a company trained a model that's too slow and expensive, but not much more capable than what came before. It therefore requires these marketing stunts.
[1] https://www.itpro.com/technology/artificial-intelligence-ai/...
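For what it's worth, the per-project figure in point 1 implies a project count that the comment never states outright. A trivial check, using only the dollar figures quoted above:

```python
# Implied number of funded projects, from the figures in the comment above.
total_credits = 100_000_000  # $100M given away as Mythos credits
per_project = 2_500_000      # stated average of $2.5M per project

projects = total_credits // per_project
print(projects)  # 40
```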
- > Resource Limit Is Reached The website is temporarily unable to service your request as it exceeded resource limit. Please try again later.
I guess it was too dangerous to even read the article
by wood_spirit
1 subcomments
- My thinking is that if it really was super duper then Anthropic could charge eye watering amounts and have willing customers and set up expectations going forward that SOTA costs a lot to use.
That they don’t suggests that really it is only incrementally better than Opus 4.7 and that the market won’t bear a price increase that makes it economical to serve let alone profit from serving.
So the cynical me imagines execs sitting around the table and worrying that releasing it at anywhere close to break even would risk actually hurting the brand instead of setting them up as a premium company, and this at a time just before ipo when they can ill afford that rumour.
So they wonder what to do, and think playing national security card is the obvious way out. It’s incrementally better enough to find bugs that previous sota missed, it doesn’t get used widely so it’s cheap to serve and they get the good publicity without the economic scrutiny?
Making a loss selling to a small number of users using it in a limited way is entirely affordable. Making a loss selling it at scale is correspondingly unaffordable?
- (I work at Anthropic) We have publicly stated[1] that our goal is to deploy Mythos-class models at scale once we have the requisite safeguards for offensive cyber risks in place. Mythos is a general frontier model, not a cyber-specific model, so there are many reasons why we think our users will benefit from access (with the aforementioned safeguards in place) in due course. Compute has also not factored into our decision[2] to roll out the model in a limited fashion to defenders. We'll be sharing more soon on the first month or so of the project and rollout.
[1] https://www.anthropic.com/glasswing#:~:text=deploy%20Mythos%...
[2] https://x.com/logangraham/status/2054613618168082935
- Article does not mention the other reason: in the interview with Dwarkesh, Amodei remarked about how other organizations are copying or training off Opus for their models.
By delaying allowing others to train off Mythos, they hold their SWE-Bench Pro head start longer so among other things, the USG can't but notice Anthropic's lead when they're deliberating on whether to further substantiate the "supply chain risk".
- My posts* got to the first spot on Hacker News a couple of times. Never once did it break down like that. And why would it? It's just a bunch of HTML and CSS files served through (free) Vercel (don't think it matters). I wonder what people run their blogs on these days that they fail under the pressure so easily.
* https://news.ycombinator.com/from?site=yanist.com
by tomaytotomato
1 subcomments
- AI has always been dangerous, but not existentially dangerous.
Mythos is dangerous but it's not going to Skynet us.
Just the same as the military drone using some sort of OpenCV library and target prioritisation loop isn't going to turn evil on us.
by waynecochran
1 subcomments
- Conclusion: both are true which makes sense. The KV cache scaling yields both the emergent power and requires the enormous capacity.
by irthomasthomas
0 subcomment
- I don't believe anything out of these startups anymore unless it's backed by evidence.
Too expensive? Why would Anthropic train a model too expensive to run? I doubt they would. Let's look at the evidence: Opus 4.5 came in at double the speed and half the price of the old Opus. Its speed matched older Sonnet models. Higher speed + lower price = smaller model. So they rebranded Sonnet-sized models as Opus. Where is the OG Opus-sized model?
- Whatever the reason for "hiding" Mythos, it seems clear that these systems are getting very good at finding software security exploits. Mythos has made more people, even the US government, sit up and pay attention. Regarding who should control the release of powerful systems like this, as Bruce Schneier and David Lie write in "Mythos and Cybersecurity":
"Until that changes, each Mythos-class release will put the world at the edge of another precipice, without any visibility into whether there is a landing out of view just below, or whether this time the drop will be fatal. That is not a choice a for-profit corporation should be allowed to make in a democratic society. Nor should such a company be able to restrict the ability of society to make choices about its own security."
https://www.schneier.com/blog/archives/2026/04/mythos-and-cy...
It is reasonable to be concerned.
- It’s obvious that this is a campaign to pump their pending ipo. It may be too expensive, but it’s all about the ipo in my opinion.
by ed_elliott_asc
3 subcomments
- It all sounds a bit too marketing-ey to me “we have this amazing model that is too good to release” but the goal is still AGI? Ok right.
by holysoles
1 subcomments
- The thought of this didn't even cross my mind until yesterday. I previously figured the hype was primarily around marketing, but after watching this Primagen video, I have the same suspicion.
https://www.youtube.com/watch?v=zaGOKd4jqEk
by jstummbillig
0 subcomment
- This makes it sound like some kind of open question or even mystery.
Amodei himself stated quite clearly in recent interviews that they simply can't satisfy all demand, compute-wise. Of course, Mythos could get more of the already too-small pie, but clearly it's a more resource-intensive model and would further increase the strain.
- It's probably a little of both: dangerous and expensive. This article makes a good case that the cost is at least part of the reason.
I wish the article could have been a lot tighter and shorter. This is not earth shattering information that requires a New Yorker length piece of investigative journalism.
- Reminds me of the paper launches NVidia/Intel/AMD sometimes do where they announce some amazing tech (such as the old Titan GPUs) that placed their hardware at the top of the benchmarks, but with basically zero actual stock available.
- Opus Fast Mode costs $30/$150 per million input/output tokens.
Mythos's pricing (from the model card) is $25/$125 per million input/output tokens.
Based on this, I doubt that Mythos is too dangerous to release or provides significantly more value.
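Taking the two per-million-token rates above at face value, a quick sketch of what an example workload would cost at each price point (the workload split is a hypothetical illustration, not from any model card):

```python
# Compare the two listed price points on a hypothetical workload.
# Rates are USD per million tokens, as quoted in the comment above.
def cost_usd(input_tokens, output_tokens, in_rate, out_rate):
    """Total cost in USD for a given token count at given per-million rates."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example workload: 2M input tokens, 0.5M output tokens.
opus_fast = cost_usd(2_000_000, 500_000, 30, 150)  # Opus Fast Mode rates
mythos = cost_usd(2_000_000, 500_000, 25, 125)     # Mythos model-card rates

print(opus_fast)  # 135.0
print(mythos)     # 112.5
```

On this sketch, the newer model is actually the cheaper of the two per token, which is the commenter's point.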
- The real Mythos was the friends we made along the way.
- For marketing purposes it is always "too dangerous." I'm not saying it is safe.
- I found this an illuminating piece, though I don't think percentages needed to be assigned between "is it about cost" vs "is it about security"
- You don't have to look much further than marketing...
by marginalia_nu
0 subcomment
- Jesus has microwaved a burrito so hot he can not eat it, refuses to show the world, citing dangerous omnipotence paradox.
- This lengthy article by a self-described "AI enthusiast" muddies the waters. Yes, Anthropic has capacity constraints, which is why they rented Colossus from Musk despite the danger of being distilled.
The real reason is that the hype around Mythos has already gone quiet because it does not find more than other models. That is, nothing at all in most open source projects. If you hide the model, embarrassing statistics will not be posted.
- Mythos had to silence you apparently
- Silenced immediately.
by waffletower
0 subcomment
- I am very concerned, particularly if Anthropic and/or other frontier model producers begin to hit an inference performance ceiling, that Anthropic will use its safety scare tactics to lobby for the marginalization of the open weight model ecosystem. As the open models catch up, or become "good enough", they may amplify their open model hostility "to protect their moat". I see Mythos and Glasswing as cynical beginnings of this. Also note that Google, Meta, and even OpenAI have released open weight models and have nodded to the obvious research benefits they provide, whereas the Anthropic "Public Benefit Corporation" has done no such public benefit. The valuation and success of Anthropic coupled with its "trust us and no one else" culture may be dangerous for the legal survival of open weight models.
- I think it's plausible that a substantial fraction of the increase in cyber attacks we saw recently was caused by GPT-5.5. So the "too dangerous" framing is plausible, even if the more important reason is a lack of RAM (as the article author suspects) or compute to serve Claude Mythos. We already know from other events that OpenAI is far less interested in AI safety and ethics than Anthropic.
- I've always wondered: what if China were deliberately using AI to search for vulnerabilities in critical government servers, for example in the EU.
by lenerdenator
0 subcomment
- I'd be tempted to offer this as a consultant service were I at Anthropic.
It feels like an AI tool that needs professionals to interface with it. Get some of those professionals, have them work with clients in a targeted way. It helps reduce the exposure the tool has to bad actors, and reduces the amount of resource usage that it will incur, because it's being used only by trained individuals.
Use what you learn from the experience to further refine its operation and make it less expensive to operate.
by micromacrofoot
0 subcomment
- It's probably not much more dangerous than all the AI security patching being done without it; the CVE rate is approaching a straight line up.
- My guess is they are still in the "fake it till you make it" phase. There's no Mythos; it's just a hype machine fueled by hot air.
by bethekidyouwant
0 subcomment
- "Too dangerous to release," say engineers in the only sector where this regularly happens.
by paol_taja
4 subcomments
- The "too dangerous to release" line was definitely a marketing stunt.
OpenAI already used the same playbook with GPT-2 in 2019, and some of the same people involved back then are now doing it again at Anthropic with Mythos.
Same safety-branding DNA, different company, and people are falling for it again.