Horses: AI progress is steady. Human equivalence is sudden
by maciejzj
30 subcomments
- I may have developed some kind of paranoia reading HN recently, but the AI atmosphere is absolutely nuts to me. Have you ever thought that you would see a chart showing how the population of horses was decimated by the mass introduction of efficient engines, accompanied by an implication that there is a parallel to the human population? And that the article would be written not with any kind of cautionary humanitarian approach, but rather from the perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine, and that everyone would discuss this juxtaposition from a purely economic perspective? And barely anyone shares a thought like "technology should be warranted by the populace, not the other way around"? And the guy writing this works at Anthropic? The very guy who makes this thing happen, but he is only able to conclude with "I very much hope we'll get the two decades that horses did". What the hell.
- Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena. And that's besides the fact that more compute and bigger context windows aren't the right kind of progress anyway. LLMs aren't coming for your job any more than computer vision is, for a lot of reasons, but I'll list two more:
1. Even if LLMs made everyone 10x as productive, most companies would still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren't providing much value.
2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
by billisonline
5 subcomments
- An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.
I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: in that moment, AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.
- People are not simple machines or animals. Unless AI becomes strictly better, from the perspective of other humans, than both humans and humans + AI at all activities, there will still be lots of things for humans to do to provide value for each other.
The question is how individuals, and more importantly our various social and economic systems, handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift rapidly.
If the benefits of AI accrue to, or are captured by, a very small number of people, and the costs are widely dispersed, things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.
by richardles
4 subcomments
- I've also noticed that LLMs are really good at speeding up onboarding. New hires basically have a friendly, never-tired mentor available. It gives them more confidence in their first drafted code changes / design docs. But I don't think the horse analogy works.
It's really changing cultural expectations. Don't ping a human when an LLM can answer the question probably better and faster. Do ping a human for meaningful questions related to product directions / historical context.
What LLMs are killing is:
- noisy Slacks with junior folks' questions. Those are now your Gemini / ChatGPT sessions.
- tedious implementation sessions.
The vast majority of the work is still human led from what I can tell.
by d4rkn0d3z
4 subcomments
- It might be better to think about what a horse is to a human: mostly, a horse is an energy slave. The history of humanity is a story about how many energy slaves are available to the average human.
In times past, the only people on earth who had their standard of living raised to a level that allowed them to cast their gaze upon the stars were the kings and their courts, vassals, and noblemen. As time has passed, we have learned to make technologies that provide enough energy slaves to the common man that everyone lives a life a king would have envied in times past.
So the question arises: does AI, or the pursuit of AGI, provide more or fewer energy slaves to the common man?
- Software engineers used to know that measuring lines of code written was a poor metric for productivity...
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
- Cost per word is a bizarre metric to bring up. Since when is volume of words a measure of value or achievement?
by socketcluster
7 subcomments
- I think my software engineering job will be safe so long as big companies keep using average code as their training set. This is because the average developer creates unnecessary complexity which creates more work for me.
The way the average dev structures their code requires like 10x the number of lines mine does and at least 10x the time to maintain... The interest on technical debt compounds like interest on normal debt.
Whenever I join a new project, within 6 months, I control/maintain all the core modules of the system and everything ends up hooked up to my config files, running according to the architecture I designed. Happened at multiple companies. The code looks for the shortest path to production and creates a moat around engineers who can make their team members' jobs easier.
IMO, it's not so different to how entrepreneurship works. But with code and processes instead of money and people as your moat. I think once AI can replace top software engineers, it will be able to replace top entrepreneurs. Scary combination. We'll probably have different things to worry about then.
by 1970-01-01
4 subcomments
- How about we stop trying analogies on for size and just tell it like it is? AI is unlike any other technology to date. Just as with predicting the weather, we don't know what it will be like in 20 months. Everything is a guesstimate.
- This is a fun piece... but what killed off the horses wasn't steady incremental progress in steam engine efficiency, it was the invention of the internal combustion engine.
by COAGULOPATH
2 subcomments
- > In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?
The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.
- Engine efficiency, chess rating, AI capex. One of these is not like the others. Is there steady progress in AI? To me it feels like little progress followed by the occasional breakthrough, but I might be totally off here.
- > Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
What getting high on your own supply actually looks like. These are not the types of questions most people have or need answered. It's unique to the hiring process and the nascent status of the technology. It seems insane to stretch this logic to literally any other arena.
On top of that, horses were initially replaced with _stationary_ gasoline engines. Horses:Cars is an invalid view into the historical scenario.
- Person whose job it is to sell AI selling AI is what I got from this post.
by personjerry
1 subcomment
- I think it's a cool perspective, but the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative.
And that really is the entire question at this point: Which domains will AI win in by a sufficient margin to be worth it?
by burroisolator
3 subcomments
- "In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
And not very long after, 93 per cent of those horses had disappeared.
I very much hope we'll get the two decades that horses did."
I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term, but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something that can be recovered from, even over the longer term.
- Someone who makes horseshoes then learns how to make carburetors, because the demand is 10x.
https://en.wikipedia.org/wiki/Jevons_paradox
- The 1220s horse bubble was a wild time. People walked everywhere all slow and then BAM guys on horses shooting arrows at you.
AI is like that, but with dudes in slim-fitting vests blogging about alignment.
- This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each, there will be variations as well. But this doesn't mean it won't still subjectively happen "fast".
- > Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
> … Six months later, 80% of the questions I'd been being asked had disappeared.
Interesting implications for how to train juniors in a remote company, or in general:
> We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.
https://pallais.scholars.harvard.edu/sites/g/files/omnuum592...
- To stay within the engine analogy:
We have engines that are more powerful than horses, but
1. we aren’t good at building cars yet,
2. they break down so often that using horses often still ends up faster,
3. we have dirt tracks and feed stations for horses but have few paved roads and are not producing enough gasoline.
by sothatsit
1 subcomment
- This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:
1. The release of Claude Code in February
2. The release of Opus 4.5 two weeks ago
In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.
Instead, these releases just brought a new user interface and improved reliability. And yet they mark the biggest increases in my AI usage: they pushed the utility of AI for my work past thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.
by sceptic123
2 subcomments
- > A system that costs less, per word thought or written, than it'd cost to hire the cheapest human labor on the face of the planet.
Is it really possible to make this claim, given the vast sums of money that have gone into AI/LLM training?
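A rough back-of-envelope shows what the claim hinges on. Every number below is an assumption of mine, not a figure from the article or Anthropic:

  # Back-of-envelope: amortized cost per word (all numbers are assumptions).
  training_cost = 1e8             # one-off training spend, USD (assumed)
  lifetime_words = 1e13           # words served over the model's lifetime (assumed)
  inference_cost_per_word = 3e-6  # marginal serving cost per word, USD (assumed)

  ai_cost = training_cost / lifetime_words + inference_cost_per_word
  human_floor = 2.0 / 2000        # $2/day of labor writing 2,000 words/day (assumed)

  print(f"AI: ${ai_cost:.2e}/word vs. human floor: ${human_floor:.2e}/word")

At those assumed volumes the training bill amortizes to noise; at much lower volumes it dominates. So the claim really depends on how much the model gets used, which is exactly the question.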
- Aren't you guys looking forward to the day when we get the opportunity to go the way of all those horses? You should! I'm optimistic; I think I'd make a fine pot of glue.
AI, faster please!
- Regarding horses vs. engines, what changed the game was not engine efficiency, but the widespread availability of fuel (gas stations) and the broad diffusion of reliable, cheap cars. Analogies can be made to technologies like cell phones, MP3 players, or electric cars: beyond just the quality of the core technology, what matters is a) the existence of supporting infrastructure and b) a watershed level of "good/cheap enough" where it displaces the previous best option.
- People back then were primarily improving engines, not writing articles about engines being better than horses. That's why it's different now.
by anshulbhide
0 subcomments
- Yet this applies to only three industries so far - coding, marketing, and customer support.
I don't think it applies to general human intelligence - yet.
- What is this horseshit.
What exactly does specifically engine efficiency have to do with horse usage? Cars like the Ford Model T entered mass production somewhere around 1908. Oh, and would you look at the horse usage graph around that date! sigh
The chess ranking graph seems to be just a linear relationship?
> This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.
>
> Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did.
So more == better. sigh. Run any, you know, studies to see the quality of those answers? I too can consult /dev/random for answers at a rate of gigabytes per second!
> I was one of the first researchers hired at Anthropic.
Yeah. I can tell. Somebody's high on their own supply here.
by websiteapi
11 subcomments
- funny how we have all of this progress yet things that actually matter (sorry chess fans) in the real world are more expensive: health care, housing, cars. and what meager gains there are seem to be more and more concentrated in a smaller group of people.
plenty of charts you can look at - net productivity by virtually any metric vs real adjusted income. the examples I like are kiosks and self checkout. who has encountered one at a place where it is cheaper than its main rival, with the difference directly attributable (by the company or otherwise) to lower prices?? in my view all it did was remove some jobs. that's the preview. that's it. you will lose jobs and you will pay more. congrats.
even with year 2020 tech you could automate most work that needs to be done, if our industry didn't endlessly keep disrupting itself and had a little bit of discipline.
so once ai destroys desk jobs and the creative jobs, then what? chill out? too bad anyone who has a house won't let more be built.
by bad_username
0 subcomments
- AI currently lacks the following to really gain a "G" and reliably be able to replace humans at scale:
- Radical massive multimodality. We perceive the world through many wide-band, high-def channels of information. Computer perception is nowhere near that. Same for the ability to "mutate" the physical world, not just "read" it.
- Being able to be fine-tuned constantly (learn things, remember things) without "collapsing". Generally, having a smooth transition between the context window and the weights, rather than a fundamental, irreconcilable difference.
These are very difficult problems. But I agree with the author that the engine is in the works and the horses should stay vigilant.
- The work done by horses was not the only work out there. Games played by chess masters were not the only sport on the planet. Answering questions and generating content is not the only work that happens at workplaces.
- This makes me think of another domain where it could happen: electricity generation and distribution. If solar+battery becomes cheap enough we could see the demise of the country-scale grid.
- > In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
I really doubt horses would be ambivalent about this, let alone about anything. Or maybe I'm wrong and they were in two minds: oh dear, I'm at risk of being put to sleep; or maybe this could lead to a nice long retirement out on a grassy meadow. But they're in all likelihood blissfully unaware.
by chairmansteve
0 subcomments
- 4,000 questions a month from new hires. How many of those were repeated many times? A lot. So what if they'd built a wiki?
I am not an AI sceptic... I use it for coding. But this article is not compelling.
- Wow! That is highly unscientific and speculative. Wow!
by globular-toast
1 subcomment
- And what happened to human population? It skyrocketed. So humans are going to get replaced by AI and human population will skyrocket again? This analogy doesn't work.
- my favorite part was where the graphs are all unrelated to each other
- I think the author's point is that each type of job will basically disappear roughly all at once, shortly after AI crosses the bar of "good enough" in that particular field.
by haritha-j
1 subcomment
- Horses pull carts. Chessbots play chess. Humans do lots of things. Equivalence in one thing is not equivalence in the vast collection of things we do.
by cuttothechase
0 subcomments
- >>This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.
Glad I noticed that footnote.
Article reeks of false equivalences and incorrect transitive dependencies.
- Maybe I can get a job programming for the Amish.
by blondie9x
3 subcomments
- This post is kind of sad. It feels like he's advocating for human depopulation, since the trajectory parallels the horse population's 93% decline.
by HeavyStorm
0 subcomments
- Ripping off Yuval in grand style.
- We still have chess grandmasters, if you have noticed...
by WhyOhWhyQ
1 subcomment
- Humans design the world to our benefit, horses do not.
- Conclusion: Soylent..?
by mrtesthah
1 subcomment
- LLMs can only hallucinate and cannot reason or provide answers outside of their training set distribution. The architecture needs to fundamentally change in order to reach human equivalence, no matter how many benchmarks they appear to hit.
by johnsmith1840
1 subcomment
- I mean, it's hard to argue that human work wouldn't be irrelevant if we invented a human in a box (AGI). But I don't know how anyone could watch current AI and say we have that.
The big thing this AI boom has shown us, which we can all be thankful to have seen, is what a human in a box will eventually look like. Being in the first generation of humans able to see that is a super lucky experience.
Maybe it's one massive breakthrough away, or maybe it's dozens away. But there is no way to predict when some massive breakthrough will occur. Ilya said 5-20 years; that really just means we don't know.
by moralestapia
0 subcomments
- Great post.
This is the context wherein the valuation of AI companies makes sense, particularly those that already got a head start and have captured a large swath of that market.
by florilegiumson
2 subcomments
- If AI is really likely to cause a mass extinction event, then non-proliferation becomes critical, as it was with nuclear weapons. Otherwise, what does it really mean for AI to "replace people", outside of people needing to retool or socially awkward people having to learn to talk to people better? AI surely will change a lot, but I don't understand the steps needed to get to the highly existential threat that has become a cliché in every "Learn CLAUDE/MCP" ad I see. A period of serious unemployment, sure, but this article is talking about population collapse, as if we are all only being kept alive and fed to increase shareholder value for people several orders of magnitude more intelligent than us, and with more opposable thumbs. Do people think 1.2B people are going to die because of AI? What is the economy but people?
by dealflowengine
0 subcomments
- Everyone is missing the really valuable point here: we never needed 90+% of horses in the first place.
- Point taken, but it's hard to take a talk seriously when it has a graph showing AI becoming 80% of GDP! What does the "P" even stand for then?
- Ironically, you could use the sigmoid function instead of horses. The training stimulus slowly builds over multiple iterations and then suddenly, flip: the wrong prediction reverses.
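A minimal sketch of that flip (the scale and the step values are arbitrary, just to show the shape):

  # A sigmoid looks flat for a long time, then flips quickly around its midpoint.
  import math

  def sigmoid(x):
      return 1 / (1 + math.exp(-x))

  for step in range(-10, 11, 2):  # stand-in for training iterations
      p = sigmoid(step)           # probability assigned to the "new" prediction
      print(f"step {step:+3d}: p={p:.3f} -> {'new' if p > 0.5 else 'old'}")

Nothing much seems to happen for most of the run, then the prediction reverses within a couple of steps.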
- It’s not like humans are standing still. Humans are still improving faster than AI.
by eigencoder
0 subcomments
- I'm confused. Isn't the sharp decline in the graph due to the population boom?
by cryptonector
0 subcomments
- > 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
Ambivalent??
- Wait till the robots arrive. That they will know how to do a vast range of human skills, some that people train their whole lives for, will surprise people the most. The future shock I get from Claude Code, knowing how long stuff takes the hard way, especially on niche, difficult-to-research topics like the alternative deep learning model designs applicable to a modeling task, is a thing of wonder. Imagine now that a master marble carver shows up at an exhibition and some sci-fi author just had robots make a perfect, beautiful equivalent of a character from his novel, equivalent in quality to Michelangelo's David, but cyberpunk.
- I think AI is probably closer to jet engines than it is to horses.
- Cool, now let's make a big list of technologies that didn't take off like they were expected to.
- Horses never figured out how to get government bailouts.
- Terrible comparison.
Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with chess, a clearly defined problem in a finite space with a binary, measurable outcome. Funny how chess AI replacing humans in general was never considered a serious possibility by most.
Now LLMs, what is their purpose? What is the purpose of a human?
I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).
They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now; we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.
- > 90% of the horses in the US disappeared
Where did they go?
- > And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.
Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open source models ultimately win.
I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks. Even the top leadership.
Meanwhile Google, apart from perhaps Kilpatrick, is just silent.
- if ai takes my job, good riddance
by LanceWinslow
0 subcomments
- You know, this whole conversation reminds me of that old critique of Communism: once the government becomes so large and all-encompassing, it reaches a point where it no longer needs the people to exist, and thus people are culled by the millions, as they are simply no longer needed.
by cryptonector
0 subcomments
- This is another one of those apocalyptic posts about AI. It might actually be true. I recommend reading The Phools, by Stanislaw Lem -- it's a very short story, and you can find free copies of it online.
Also maybe go out for some fresh air. Maybe knowledge work will go down for humans, but plumbing and such will take much longer since we'll need dextrous robots.
by john-radio
0 subcomments
- I've never visited this blog before but I really enjoy the synthesis of programming skill (at least enough skill to render quick graphs and serve them via a web blog) and writing skill here. It kind of reminds me of the way xkcd likes to drive home his ideas. For example, "Surpassed by a system that costs one thousand times less than I do... less, per word thought or written, than ... the cheapest human labor" could just be a throwaway thought, and wouldn't serve very well on its own, unsupported, in a serious essay, and of course the graph that accompanies that thought in Jones's post here is probably 99.9% napkin math / AI output, but I do feel like it adds to the argument without distracting from it.
(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude numbers would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is completed.)
- "I was one of the first researchers hired at Anthropic."
The article is a Misanthropic advertisement. The "AI" mafia feels that no one wants their products and doubles down.
They are so desperate that Pichai is now talking about data centers in space on Fox News.
Next up are "AI" space lasers.
by skywhopper
0 subcomments
- Truly depressing to see blasé predictions of AI infra spending approaching WW2 levels of GDP as if that were remotely desirable. One, that’s never going to happen, but if it does, it’ll mean a complete failure to address actual human needs. The amount of money wasted by Facebook on the Metaverse could have ended homelessness in the US, or provided universal college. Now here we are watching multiple times that much money get thrown by Meta, Google, et al into datacenters that are mostly generating slop that’s ruining what’s left of the Internet.
- "If I'd asked people what they wanted, they would have said faster humans!"
by conartist6
0 subcomments
- I thought this was going to be about how much more intelligent horses are than AIs and I was disappointed
- yeah but machines don't produce horseshit, or do they? (said in the style of Vsauce)
- > I was one of the first researchers hired at Anthropic.
...
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
TL;DR: If your work is answering questions that can be retrieved from a corpus of data with an inverted index + embeddings, you'll be obsolete pretty fast.
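For concreteness, here's a toy sketch of that kind of system. The corpus entries and helper names are made up; word counts stand in for a real embedding model, and a real system would add an inverted index to narrow candidates first:

  import math
  import re
  from collections import Counter

  corpus = {
      "vpn": "To reach staging, connect to the corp VPN first.",
      "deploy": "Deploys go through CI; never push to prod by hand.",
  }

  def embed(text):  # stand-in for a real embedding model
      return Counter(re.findall(r"[a-z]+", text.lower()))

  def cosine(a, b):
      dot = sum(a[t] * b[t] for t in a)
      norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
      return dot / norm if norm else 0.0

  def answer(question):  # return the corpus entry most similar to the question
      q = embed(question)
      best = max(corpus, key=lambda k: cosine(q, embed(corpus[k])))
      return corpus[best]

  print(answer("how do deploys work"))  # -> the CI/deploy answer

Scale the corpus up and swap in real embeddings, and you get roughly the new-hire question-answering the post describes.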
- Yawn, another article which hand-picks success stories. What about the failures? Where's the graph of flying cars? Humanoid house servant robots? 3D TVs? Crypto decentralized banking for everyone? Etc.
Anybody who tells you they can predict the future is shoveling shit into his mouth, then smiling brown teeth at the audience. 10 years from now there's a real possibility of "AI" being remembered as that "stuff that almost got to a single 9 of reliability but stopped there".
by adventured
7 subcomments
- It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.
As the potential of AI technical agents has gone from an interesting discussion to something whose outcome is extraordinarily obvious, HN has comically shifted negative in tone on AI. They doth protest too much.
I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs. So those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as with other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening; it's already far too late.
- hello faster horses
- Oh no, it's the lowercase people again.