No one (at least no serious person) is saying ChatGPT is Immanuel Kant or Ernest Hemingway. The fact that we still have sherpas doesn't make trains any less useful or interesting.
AlphaZero was a special, unusual case; I would say an outlier.
FSD is still not ready. People have watched it work for ten years, slowly climbing the asymptote but still not reaching human-level driving, and it may take a while yet.
I use AI models for coding every day; I am not a Luddite. But I don't feel the AGI, not at all. What I am seeing is a nice tool that is seriously over-hyped.
> If it was a life or death decision, would you trust the model?

Judgement, yes, but a decision? No. They are not capable of making decisions, at least not important ones.
A self-driving car with a vision-language-action model inside buzzes by.
> It still fails when it comes to spatial relations within text, because everything is understood in terms of relations and correspondences between tokens as values themselves, and apparent spatial position is not a stored value.
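That flattening is easy to see with a toy sketch of my own (not from either article); a real BPE tokenizer is more involved, but the loss of 2-D position is the same:

    # Toy illustration: a 2-D text layout becomes a flat 1-D token stream
    # before a language model ever sees it.
    grid = (
        "A . B\n"
        ". X .\n"
        "C . D\n"
    )

    # Stand-in for a real tokenizer: read left-to-right, top-to-bottom.
    tokens = grid.replace("\n", " <nl> ").split()
    print(tokens)
    # ['A', '.', 'B', '<nl>', '.', 'X', '.', '<nl>', 'C', '.', 'D', '<nl>']

    # "X is directly below A" is never stored anywhere; it survives only
    # implicitly as "X appears 5 tokens after A", which the model has to
    # reconstruct by counting <nl> markers and column offsets.
    pos = {t: i for i, t in enumerate(tokens) if t.isalpha()}
    print(pos["X"] - pos["A"])  # 5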
A large multimodal model listens to your request and produces a picture.
> They'll always need someone to take a look under the hood, figure out how their machine ticks. A strong, fearless individual, the spanner in the works, the eddy in the stream!
GPT‑5.3‑Codex helps debug its own training.
Something Big Is Coming (Annotated by Ed Zitron) [pdf] - https://news.ycombinator.com/item?id=47007991 - Feb 2026 (31 comments)
Something Big Is Happening - https://news.ycombinator.com/item?id=46973011 - Feb 2026 (74 comments)
This article has a confrontational title, but its point does not seem incompatible with the original [0]... the author confronts the FUD [1] directly, which is understandable but perhaps not quite as useful as refuting the core thesis: that something you cannot afford to ignore is happening.
In fact, both of these people seem to agree that you need to keep an eye on this ball; they just have a "panic" versus "don't panic" framing. Should you panic in an emergency? Research says no [2].
[0] https://shumer.dev/something-big-is-happening
[1] https://en.wikipedia.org/wiki/Fear,_uncertainty,_and_doubt - note the original author is an AI founder
[ASCII art of a sun]

Something big is happening (97 points, 77 comments)
But I have personally and repeatedly used AI instead of humans, across multiple domains.
AI displacement isn’t a prediction. It’s here.