Anyway, here's something I've recently built that shows the HN consensus when it comes to AI coding (spoiler: they say it's quite good): https://is-ai-good-yet.com/ ("Is AI 'good' yet?", a survey website that analyzes Hacker News sentiment toward AI coding).
https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
It shows a remarkably consistent curve for AI completing increasingly difficult coding tasks over time. In fact, the curve is exponential, where the x-axis is time and the y-axis is task difficulty as measured by how long a human would take to perform the task. The current value for an 80% success rate is only 45 minutes, but if it continues to follow the exponential curve, it will take only 3 years and change to reach a full 40-hour human work week's worth of work. The 50% success rate graph is also interesting: it's similarly exponential and is currently at 6 hours.
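To make that extrapolation concrete, here's a minimal back-of-the-envelope sketch. It assumes a doubling time of roughly 7 months (the figure METR reports for the 50% horizon; applying it to the 80% horizon is my assumption) and the 45-minute figure above:

    import math

    # Back-of-the-envelope extrapolation of the METR time-horizon curve.
    # Assumption: the ~7-month doubling time METR reports also holds for
    # the 80%-success horizon discussed above.
    doubling_months = 7
    current_horizon_min = 45        # 80%-success task length today, in minutes
    target_horizon_min = 40 * 60    # a 40-hour human work week, in minutes

    doublings = math.log2(target_horizon_min / current_horizon_min)
    months = doublings * doubling_months
    print(f"{doublings:.1f} doublings ≈ {months:.0f} months ≈ {months / 12:.1f} years")
    # -> 5.7 doublings ≈ 40 months ≈ 3.3 years, i.e. "3 years and change"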
Of course, progress could fall off as LLMs hit various scaling limits or as the nature of the difficulty changes. But I for one predicted that progress would fall off before, and was wrong. (And there is nothing saying that progress can't speed up.)
On the other hand, I do find it a little suspicious that so many eggs are in the one basket of METR, prediction-wise.
People talk about AGI because this is how corporate marketing will create AGI, even if real AGI is nowhere near possible.
But this is how things work now: corporate marketing decides what is real and what is not.
Then, in March 2023, with GPT-4, I said we'd only have AGI ten years later, and the progress in the last few years (multimodal models, reasoning, coding agents) hasn't changed this view.
Is it perfect? No, far from it. Is it useful in some situations, and in the future many? Yes.
I think a lot of the confusion among skeptics is that they think: oh, someone's invented the LLM algorithm, but it's not that good, so what's the big deal?
The people who think it's coming (e.g. Musk, Altman, Kurzweil, the Wait But Why guy, and myself) tend to think of it as mostly coming down to hardware: the brain's a biological computer, and as computer hardware gets faster each year it'll be overtaken at some stage. The current backprop algorithm was invented around 1982; it works now because of hardware.
Also, the present algorithms are a bit lacking, but now that we have the hardware to run better ones, billions of dollars and many of the best minds are being thrown at that. Before the hardware was there, there wasn't much financial motivation to do so. So I think things will advance quite quickly there.
(The Wait But Why piece from eleven years ago. Has cartoons. Predicted human-level AI around 2025: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...
Moravec's 1997 paper "When will computer hardware match the human brain?" is quite science-based; it predicted that the "required hardware will be available in cheap machines in the 2020s": https://jetpress.org/volume1/moravec.pdf
And here we are.)
Anyone still saying they'll reach AGI is pumping a stock price.
Separately, and unrelated to that, companies and researchers are still attempting to reach AGI by replacing or augmenting LLMs with other modes of machine learning.
idk ... even sam altman talked a lot about AGI *) recently ...
*) ads-generated income
*bruhahaha* ... ;^)
just my 0.02€
As far as I can tell, HN defines an AGI as something that can do all the things a human can do, better than a human. Or to put it another way: if there is something the AGI can't do better than a human expert, it will be loudly pointed to as evidence that we haven't developed a true AGI yet.
Meanwhile, I'm pretty sure the AI firms are using a very simple definition of AGI to justify their stock price: an AGI is an AI that can create other AIs faster or more cheaply than their own engineers can. Once that barrier is broken, you task the AGI with building a better version of itself. Lather, rinse, and repeat a few times, and they dominate the market with the best AIs. Repeat many more times and the universe becomes paperclips.
Having said that, I could not care less about AGI and don't see how it's at all relevant to what I wanna do with AI.
Part of the problem with LLMs in particular is ambiguity; it's poisonous to a language model, and English is especially full of it. So another avenue being explored is translating everything (with proper nuance) into another, more precise language, or rewriting the training data to eliminate ambiguities by using more exact English.
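Just to make the second idea concrete, here's a toy sketch; it assumes nothing about any real pipeline, and the word list and the example rewrite are invented purely for illustration:

    import re

    # Toy illustration: flag pronouns whose referent a rewriting pass
    # (human- or model-driven) would be asked to make explicit.
    AMBIGUOUS_PRONOUNS = {"it", "this", "that", "they", "them"}

    def flag_ambiguity(sentence: str) -> list[str]:
        """Return the words in a sentence that commonly lack a clear referent."""
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return [t for t in tokens if t in AMBIGUOUS_PRONOUNS]

    before = "Put the report on the server and then delete it."
    print(flag_ambiguity(before))   # ['it'] -- the report, or the server?

    # After a disambiguating rewrite, nothing is flagged:
    after = "Put the report on the server and then delete the report."
    print(flag_ambiguity(after))    # []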
So there are ideas and people are still at it. After all, it usually takes decades to fully exploit any new technology. I don't expect that to be any different with models.
> I don't know if the investments in AI are worth it but am I blind for not seeing any hope for AGI any time soon.
> People making random claims about AGI soon is really weakening my confidence in AI in general.
The "people" who are screaming the loudest and making claims about AGI are the ones who have already invested lots of money into hundreds of so-called AI companies and are now making false promises about AGI timelines.
DeepMind was the first to take AGI seriously, back when the term actually meant something; it became meaningless once every single AI company after OpenAI raised billions in funding rounds over it.
No one can agree on what "AGI" really means; it varies depending on who you ask. But if you look at the actions of these companies invested in AI, you can figure out what the true definition converges to, with some hints [0].
But it is completely different from what you think it is, and from what they say it is.