Amodei does not mean that things are plateauing (i.e. that the exponential will no longer hold), but rather uses "end" closer to the notion of "endgame": we are getting to the point where all benchmarks pegged to human ability will be saturated and AI systems will be better than any human at any cognitive task.
Amodei lays this out here:
> [with regards to] the “country of geniuses in a data center”. My picture for that, if you made me guess, is one to two years, maybe one to three years. It’s really hard to tell. I have a strong view—99%, 95%—that all this will happen in 10 years. I think that’s just a super safe bet. I have a hunch—this is more like a 50/50 thing—that it’s going to be more like one to two [years], maybe more like one to three.
This is why Amodei opens with:
> What has been the most surprising thing is the lack of public recognition of how close we are to the end of the exponential. To me, it is absolutely wild that you have people — within the bubble and outside the bubble — talking about the same tired, old hot-button political issues, when we are near the end of the exponential.
Whether you agree with him is of course a different matter altogether, but a clearer phrasing would probably be "We are near the endgame."
Unsurprisingly, we were able to build a demo platform within a few days. But when we started building the actual platform, we realized that the code generated by Claude is hard to extend, and a lot of replanning and reworking needs to be done every time you try to add a major feature.
This brought our confidence level down. We still want to believe that Claude will help in generating code. But I no longer believe that Claude will be able to write complex software on its own.
Now we are treating Claude as a junior person on the team, giving it well-defined, specific tasks to complete.
But if you’ve read David Deutsch’s The Beginning of Infinity, Amodei’s view looks like a mistake. Knowledge creation is unbounded. Solving diseases/coding shouldn't result in a plateau, but rather unlock totally new, "better" problems we can't even conceive of yet.
It's the beginning of Infinity, no end in sight!
A large language model like GPT runs in what you’d call a forward pass. You give it tokens, it pushes them through a giant neural network once, and it predicts the next token. No weights change. Just matrix multiplications and nonlinearities. So at inference time, it does not “learn” in the training sense.
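For concreteness, here's a minimal sketch of that point in PyTorch. The toy model, shapes, and names are illustrative assumptions, not GPT's actual architecture; the point is just that inference is a single pass of matmuls and nonlinearities under `torch.no_grad()`, with no weight updates.

```python
import torch
import torch.nn as nn

# Toy "language model": embedding -> hidden layer -> output projection.
# Purely illustrative; real LLMs stack attention blocks.
vocab_size, d_model = 1000, 64
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, d_model),
    nn.ReLU(),                       # the nonlinearity
    nn.Linear(d_model, vocab_size),  # logits over the next token
)
model.eval()

tokens = torch.tensor([[1, 42, 7]])      # a prompt, as token ids
with torch.no_grad():                    # no gradients, no learning
    logits = model(tokens)               # one forward pass
    next_token = logits[0, -1].argmax()  # greedy next-token pick

# The weights are bit-for-bit identical before and after this call:
# nothing was updated, only matrix multiplies and ReLUs ran.
```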
We need some kind of new architecture to get to the next-gen wow stuff, e.g. differentiable memory systems: instead of modifying weights, the model writes to a structured memory that is itself part of the computation graph (see the sketch below). More dynamic or modular architectures, not bigger scaling and spending all our money on data centers.
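A minimal sketch of what "writing to a structured memory inside the computation graph" could look like, loosely in the spirit of Neural Turing Machine-style soft read/write heads. All names and shapes here are illustrative assumptions, not a specific published architecture:

```python
import torch
import torch.nn.functional as F

# External memory: N slots of width D. It lives in activations, not
# weights, so it can change at inference time and stay differentiable.
N, D = 8, 16
memory = torch.zeros(N, D)

key_proj = torch.nn.Linear(D, D)  # produces read/write keys
val_proj = torch.nn.Linear(D, D)  # produces values to write

def memory_step(memory, x):
    """One soft read/write step; every op here is differentiable."""
    key = key_proj(x)                        # (D,)
    attn = F.softmax(memory @ key, dim=0)    # (N,) slot weights
    read = attn @ memory                     # soft read: (D,)
    write = val_proj(x)                      # value to store
    # Soft write: blend the new value into each slot by its weight.
    memory = memory + attn.unsqueeze(1) * (write - memory)
    return memory, read

x = torch.randn(D)
memory, read = memory_step(memory, x)  # memory changed; weights did not
```

Because the write is a soft, weighted blend rather than a hard assignment, gradients flow through the memory updates, so the model can learn *how* to use its memory even though the memory contents themselves are updated at inference time.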
Anybody in the ML community have an answer for this? (besides better RL and RLHF and World Models)
IMHO this is really silly: we already know that IQ is useful as a metric in the 0 to about 130 range. Above that, the delta fails to provide predictive power on real-world metrics. Just this simple fact makes the verbiage here moot. Also, let's consider the wattage involved...
Quoting the Anthropic safety guy who just exited, making a bizarre and financially detrimental move: "the world is in peril" (https://www.forbes.com/sites/conormurray/2026/02/09/anthropi...)
There are people in the AI industry who are urgently warning you. Myself and my colleagues, for example: https://www.theregister.com/2026/01/11/industry_insiders_see...
Regulation will not stop this. It's time to build and deploy weapons if you want your species to survive. See earlier discussion here: https://news.ycombinator.com/item?id=46964545
https://www.julian.ac/blog/2025/09/27/failing-to-understand-...
Even in a world where the software is 100% written by AI in 1 millisecond by a country of geniuses in a data center, humans still need to have their hands firmly on the wheel if they don't want to risk their business's well-being. That means taking the time to understand what the AI put together. That will be the bottleneck regardless of how fast and smart AI is. Because unless the CEO wants to be held accountable for what the AI builds and deploys, humans will need to be there to take responsibility for its output.
Anthropic is doing good work, but he's personally responsible for a good deal of the Irrational Exuberance that plagues the space.
The end of the exponential means the start of other models.
Yet news and opinions from that world somehow seep through into my reality...
> 100% of today’s SWE tasks are done by the models.
Maybe that’s why the software is so shitty nowadays.
Every time I read something from Dario, it seems like he is grifting normies and other midwits with his "OHHH MY GOD CLAUDE WAS WILLING TO KILL SOMEONE! MY GOD IT WANTS TO BREAK OUT!" Then they have all their Claude constitution bullshit and other nonsense to fool idiots. Yeah bro, the model with static weights is truly going to take over.
He knows what he is doing; it's all marketing, and they have put a shit ton of money into it, if you have been following the media for the last few months.
Btw, it wasn't many months ago that this guy was hawking a doubling of the human life span to a group of boomer investors. Oh yeah, I wonder why he decided to bring it up there? Maybe because the audience is old and desperate, and scammers play on those weaknesses.
Truly one of the more obnoxious people in the AI space, and frankly, by extension, Anthropic is scammy too. I'd rather pay Altman than give these guys a penny, and that says a lot.
Citation needed please.
Nobody. Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.
> 100% of today’s SWE tasks are done by the models.
Thank God, maybe I can go lie in the sun then instead of having to solve everyone's problems with ancient tech that I wonder why humanity is even still using.
Oh, no? I'm still untying corporate Gordian knots?
> There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or developer at a startup.
My company tried this, then quickly stopped: $$$
Oh good, hopefully it'll model itself after the exponential rise of an animal population and collapse on itself because it can no longer be sustained! Isn't that how things go in exponential systems with resource constraints? We can only hope for that outcome. That would be wonderful.