Or the long version: "something about which no conclusions can be drawn because the proposed definitions lack sufficient precision and completeness."
Or the short versions: "Skippetyboop," "plipnikop," and "zingybang."
Go look at any production AI deployment today. Humans still review, correct, supervise. AI handles volume, humans handle judgment. Judgment is the bottleneck. You haven't replaced labor. You've moved it.
Global labor comp is ~$50T/year. The entire capex cycle is a bet that AI captures a real fraction of that. Whether you call that threshold AGI or not is irrelevant. Capital markets don't care about your definition. They care about whether labor decouples from output.
This will never happen. LLMs are already being used very unsafely, and if this HN headline stays where it is, OpenAI will quietly remove its charter from its website.
I don't think it was so much the naivety of idealism, but more an adoption of idealism and related language to help market what was actually being built: a profit-first organization that's taking its true form little by little.
You cannot get real, actual AGI (the same ability to perform tasks as a human) without a continuous cycle of learning and deep memory, which LLMs cannot do. The best LLM "memory" is a search engine and document summarizer stuffed into a context window (which is like having someone take an entire physics course and write down everything they learn on post-it notes; then, when you ask a different person a physics question, that person has to skim all the post-it notes and write a new post-it note to answer you). To learn, it would need RL (which requires specific novel inputs) and retraining (so that it can retain the learned input and compute answers with it). All of that would take too much time and careful input/engineering, along with novel techniques. So AGI is too expensive, time-consuming, and difficult for us to achieve without radically different designs and a whole lot more effort.
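To make the post-it-note analogy concrete, here's a toy sketch in plain Python (all names hypothetical; a crude word-overlap score standing in for a real search engine) of what that kind of "memory" amounts to: notes live outside the model, get skimmed per question, and nothing about the model itself ever changes.

    # Toy sketch of LLM "memory" as retrieval into a context window:
    # notes live outside the model and get skimmed on every question;
    # nothing is ever learned -- the weights never change.

    def overlap(question: str, note: str) -> int:
        # Crude relevance score: count of shared lowercase words.
        return len(set(question.lower().split()) & set(note.lower().split()))

    def build_prompt(question: str, notes: list[str], top_k: int = 3) -> str:
        # "Skim the post-it notes": rank stored notes against the question.
        relevant = sorted(notes, key=lambda n: overlap(question, n), reverse=True)[:top_k]
        # "Write a new post-it note": the model only ever sees this one-off prompt.
        return "Context:\n" + "\n".join(relevant) + f"\n\nQuestion: {question}\nAnswer:"

    notes = [
        "Force equals mass times acceleration.",
        "Momentum is conserved in closed systems.",
        "The course covered thermodynamics in week three.",
    ]
    print(build_prompt("What is force in terms of mass?", notes))

A real system would hand that assembled prompt to an LLM; the point is that answering never updates anything the model actually knows.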
Not only are LLMs not AGI, they're still not even that great at being LLMs. Sure, they can do a lot of cool things, like write working code and tests. But tell one "don't delete files in X/", and after a while it will delete all the files in "X/", whereas a human would likely remember they're not supposed to delete some files and go check first. It also does fun stuff like follow arbitrary attacker instructions planted in random documents (prompt injection), which most humans wouldn't do. If they had real memory and real-time RL, they wouldn't have these problems. But we're a long way away from that.
LLMs are fine. They aren't AGI.
Funny how timely this is, with Karpathy's Autoresearch hitting the top of HN yesterday (and this being an indication that frontier labs probably have much larger-scale versions of this).
> Achieving AGI, he conceded, will require “a lot of medium-sized breakthroughs. I don’t think we need a big one.”
> At the Snowflake Summit in June 2025, Altman predicted that 2026 would mark a breakthrough when AI systems begin generating “novel insights” rather than simply recombining existing information. This represents a threshold he considers critical on the path to AGI.
I'm sure they'll try to change the charter before we get to that point, though. But yeah.
Which such project is that, though? And would it accept OpenAI's assistance?
AGI, having access to our world, is precarious, as alignment with humans is never guaranteed. Having a buffering medium, i.e. a simulation environment where the AI operates, might be a better in-between solution.
A great point. I saw blinding idealism during the early days of the GPT era.
“The changing goalposts of AGI and timelines. Notably, it’s common to now talk about ASI instead, implying we may have already achieved AGI, almost without noticing.”
Amen
"Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project."
I claim that currently no "value-aligned, safety-conscious project comes close to building AGI", failing on both counts:
- "value-aligned, safety-conscious" and
- "close to building AGI".
So, based on this charter, OpenAI has no reason to surrender the race.
Even the quote they used questions the premise of the article.
> “We basically have built AGI” (later: “a spiritual statement, not a literal one”)
Laws & regulations that needs to be created to reign in AI will undoubtedly increase the opportunity cost of training LLMs.
For some, it might feel similar to the early 2000s, but I think it's just a healthy rebalancing of what AI is and how society needs to implement this new, hardly controllable paradigm. With this perspective, OpenAI has a lot to lose, as it hasn't been able to create a moat for itself compared to, say, Anthropic.
I'll eat my hat after I sell you a bridge.
previous title: Based on its own charter, OpenAI should surrender the race
And that's it.
Everything beyond that is nuance.
Nuance matters, but it's not the real story; it's the sideshow.
- we are building Open AI - only if you have more than $10B net worth
- we are against using AI for military purposes - except when the use is allowed by the government
- we are on a mission to help humanity - again, we define humanity as the set of people with more than $10B net worth
- surrender? - sure, sure, we will, but only to people with more than $10B net worth; they can do whatever they want with our models, and we will surrender to them
Are you sure Anthropic isn't aware of this and angling for it? And are you sure what Anthropic says is really value-aligned and safety-conscious? The PR bit surely is working, right?
> It can be debated whether arena.ai is a suitable metric for AGI, a strong case can probably be made for why it’s not. However, that’s irrelevant, as the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.
No, the spirit clearly concerns being near AGI, and we aren't near AGI.
- Caitlin Kalinowski, previously head of robotics at OpenAI
https://www.linkedin.com/posts/ckalinowski_i-resigned-from-o...
>claims to be some topshot data scientist
okay
One can argue that they have already achieved this, at least for short-term tasks. Humans are still better at organization, collaboration, and carrying out very long tasks like managing a project or a company.