Agreed, these things all failed to live up to the hype.
But these didn't:
Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, TV, cars, GPS, bicycles...
So you can't really start an article by picking inventions that fit your narrative and ignoring everything else.
This is super scary stuff for an ADHDer like me.
I have an idea for a programming language based on asymmetric multimethods and whitespace-sensitive, Pratt-parsing-powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.
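For anyone unfamiliar with the technique named above: a Pratt parser resolves operator precedence with per-token "binding powers" instead of a fixed grammar, which is what makes it a natural base for syntax extensibility. A minimal sketch in Python (all names illustrative; this is a generic expression parser, not the language described):

```python
# Minimal Pratt (precedence-climbing) expression parser sketch.
# Simplified on purpose: integer literals, single-char operators, parens.
import re

TOKEN = re.compile(r"\s*(\d+|[+\-*/()])")

def tokenize(src):
    pos, out = 0, []
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at {pos}")
        out.append(m.group(1))
        pos = m.end()
    return out

# Binding powers: higher binds tighter (so * beats +).
BP = {"+": 10, "-": 10, "*": 20, "/": 20}

def parse(tokens, min_bp=0):
    tok = tokens.pop(0)
    if tok == "(":
        left = parse(tokens, 0)
        tokens.pop(0)          # consume ")"
    else:
        left = int(tok)
    # Keep absorbing operators that bind at least as tightly as min_bp.
    while tokens and tokens[0] in BP and BP[tokens[0]] >= min_bp:
        op = tokens.pop(0)
        right = parse(tokens, BP[op] + 1)  # +1 makes operators left-associative
        left = (op, left, right)
    return left

def parse_expr(src):
    return parse(tokenize(src))
```

Extensibility then amounts to adding entries to the binding-power table (or per-token handler functions) at runtime, which is the property the comment is presumably after.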
My daily todos are now being handled by NanoClaw.
These are already real products, it's not mere hype. Simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.
But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.
Here we are a few decades later, and we don't see business units using Word's built-in dictation feature to write documents, right? Funny how that tech seems to have barely improved in all that time. And despite dictation being far faster than typing, it's not used all that often because the error rate is still too high for it to be useful: errors in speech-to-text are a fundamentally unsolvable problem (you can only get so far with background-noise filtering, accounting for accents, etc.).
I see the parallel in how LLM hallucinations are a fundamentally unsolvable component of transformer-based models, and I suspect LLM usage in 20 years will be around the level of speech-to-text today: ubiquitous in the background, you use it here and there to set a timer or talk to a device, but ultimately not useful for any serious work.
It really is 'different', though, in the same way the Internet was.
It took about 20 years (i.e., since The World ISP) for the Internet to work its way into every facet of life. And the dot-com bubble popped halfway through that period.
AI might 'underwhelm' for another five or ten years. And then it won't. Whether that's good or bad, I don't know.
But as with all the previous hype cycles, most of the people who were loudest won't admit they were wrong; they'll move on to the next thing, pretending they were never the ones who portrayed AI as the Holy Grail.
I was firmly in the camp that blockchain was not a viable solution to any problem, and that NFTs sounded stupid. I think AI is much different from that list. So, there goes your argument?
It's just an interface problem. The VT100 didn't change the world overnight either.
For what it’s worth, not a single other technology in the list made any sort of impact on my work. For better or worse, LLMs did.
Well, okay, quantum computing actually affected me a lot because I worked at a quantum hardware manufacturer, but that’s different.
In certain professions it wasn't uncommon to spend $3k/year or more (in 2026 dollars) on software licenses: Adobe CS4/CS6 etc., with a handful of products easily pushing past that. All sorts of other jobs require people to pay for their own tools as well.
What I get for $150/month I'd easily pay twice as much for, even out of pocket, for the current functionality alone, even if it were frozen in time. I'd imagine many, if not most, readers on Hacker News would do the same. Multiplied across the entire population of software developers (and the broader population using AI), I think it's clear what AI is worth in a grounded way.
I’ve never heard of half of these things, and the other half are mostly consumer electronics or specific product names. The closest example here is quantum computing, which is also a serious technology in development. I think for the OP these are all tech buzzwords that he invests in without understanding what they really are. That’s why he thinks all these unrelated things are the same.
> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.
...conveniently doesn't list a bunch of hyped tech that hasn't failed:
> microchips, PCs, the internet, ecommerce, cloud, EVs, 5G
...and presents this as evidence that the current hyped tech (AI) will fail:
> Seems like you say that about every passing fancy - and they all end up being utterly underwhelming.
When the article needs to construct disingenuous arguments, I'm not interested in its conclusion.
But wait! If you actually read to the end, there's a plot twist!
> The ideology of "winner takes all" is unsustainable and not supported by reality.
Who said anything about winner takes all? You just burned a "this time is different" straw man and then concluded that "winner takes all" is not realistic?
At this moment I'm wondering if the article was in fact written by a quantized 8B LLM. Surely people don't commit such non sequiturs and then expect to be taken seriously.
But of course not. This is not an argument. This is preaching to the choir.
Preach, brother, preach.
AI concerns me; it feels like it will arrive faster than, and be at least as impactful on workers as, the Industrial Revolution. The latter at least played out over centuries and didn’t apply globally at the same time.
Is this round hype? Probably. Are we heading for a Y2K crash? Probably.
However, those who laughed at the dot-com boom and doubled their holdings in department stores and Blockbuster Video didn’t do well in the long run.
All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don't doubt that AI will be a part of the future - but it is obviously just going to be one of many technologies in use.
> No enemies had ever taken Ankh-Morpork. Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn't own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.
- Terry Pratchett, Faust Eric
We're in that part of turbulence where we don't know if the floating leaf is going to go left or right.
The people who will have the hardest time with this transition are those who go all in on a specific prediction and then discover they were wrong.
If you want to avoid that, you can try very very hard to just not be wrong, but as I said, I don't think that's possible.
Instead, we need to be flexible and surf the wave as it comes. Maybe AI fades away like VR. Or maybe it reshapes the world like the internet/smartphones. The hardest thing to do right now, when everyone is yelling, is to just wait and see what happens. But maybe that's the right thing to do.
[p.s.: None of this means don't try to influence events. If you've got a frontier model you've been working on, please try to steer us safely.]
LLMs abstract away a lot of the mechanics of working with data and information. That's helpful when literacy seems to be trending in a downward direction.
> 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX
It's quite a different thing, more on the level of the evolution of life on earth and quite unlike all that junk.
Deep disconnect from reality.
The problem is that this time is 20% different, not the 80% people are implying it is. So the same things that killed it last time will kill it again, unless that 20% has gotten us up some stairstep we got stuck on last time. But then the next thing will get us and we will go back to a new and improved version of the old thing.
Yeah I know what you are, don't try to pretend.
If you speak to industry professionals and retain a healthy scepticism, you don't have to look far to find people that absolutely do not believe the marketing.
Quite frankly, I like that advances in, say, quantum computing are publicly announced. The hype around what that means for society and our view of the universe is probably where you want to put on that reserved scepticism hat.
Similarly smart glasses were and are a thing, but society is rightly apprehensive about the impact, so the hype has dropped off.
https://www.youtube.com/watch?v=SZFhFGpDWGw
"Today, I'm speaking with Stephen C. Meyer, Director of The Discovery Institute's Center for Science and Culture, and George D. Montañez, Director of the AMISTAD Lab at Harvey Mudd College, both of whom are extremely knowledgeable on the topic of artificial intelligence. During the course of our conversation, they discuss the asymmetry between human intelligence & AI, the inability of AI to ascribe meaning to raw data, and the limitations of large language models. The real question though is: are we screwed? Let's find out."
There are two kinds of waves. The ones that don't require collective belief in them to succeed, and those that do.
The latter kind includes crypto and social media. The former: mobile... and AI.
If no one else in the world had access to AI except me, I would appear superhuman to everyone in the world. People would see my level of output and be utterly shocked at how I can do so much so quickly. It doesn't matter if others don't use AI for me to appreciate AI. In fact, the more other people don't use AI, the better it works out for me.
I'm sympathetic to people who feel like they are against it on principle because scummy influencers are talking about it, but I don't think they're doing themselves any favors.
• Self-reinforcing chemical metabolisms
• DNA as a template for reproduction
• Multi-cell cooperation
• Multi-cell specialization
• Nerve cells
• Neural ganglia
• Nervous systems
• Brains
• Self-awareness
• Language
• Written language
• Books
• Printing press
• Wireless communication
• Transistors
• Digital memory
• Computer processors
• Networking
• Internet
• AI
Answer: They all introduced dramatic qualitative and quantitative improvements in the efficiency, effectiveness, interaction, speed, reliability, flexibility, adaptability, and application of information.
AI is on its way to being self-designed. It is already assisting in its own design, speeding up work, by doing "mundane" things that would otherwise take people more time to do.
Intelligence has not been an S-curve technology.
AI, the systematic automation, manufacturing and increasingly recursive improvement of intelligence, is not an S-curve technology.
Actually, IT IS different. And if they manage to create viable small nuclear reactors or quantum computers, the world will change like it changed with Watt's steam engine.
Why isn't he talking about the Internet, trains, electricity, nuclear bombs, rockets, aviation, or engines? Because they worked, like AI works today.
All of them were bubbles at the time and they changed the world forever. AI is changing the world AND it is a bubble.
AI is here to stay. It will improve and it will have consequences. The fact that a robot can do things with its hands is actually significant, whether you like it or not.
"This time will be different," they said about the Metaverse, ignoring the vast tranches of MUCKs, MUDs, MMOs, LSGs, and repeated digital real estate gold rushes of the past half-century. Billions burned on something anyone who played Second Life, Entropia, FFXIV, EQ2, VRChat, or fucking Furcadia could've told you wasn't going to succeed, because it wasn't different, it just had more money behind it this time.
"NFTs are different", as collectors of trading cards, art prints, coins, postage stamps, and an infinite glut of collectibles looked at each other with that knowing, "oh lord, here we go" glance.
"Crypto is different", as those who paid attention to history remembered corporate scrip, gift cards, hedge funds, the S&L crisis, Enron, the MBS crisis, and the multitude of prior currency-related crises and grifts bristled at the impending glut of fraud and abuse by those too risky to engage in traditional commerce.
And thus, here we are again. "This time is different," as those of us who remember the code generators of yore polluting our floppy drives, and the sales grifters who convinced our bosses that their program could replace those expensive programmers, roll our eyes at the obvious bullshit on naked display, then vomit from stress as over a trillion dollars is diverted from anything of value into their modern equivalent - with all the same problems as before.
I truly hate how stupidly people with money actually behave.
Also, every single close friend of mine makes some use of LLMs, while none of them used any of the overhyped technologies listed. So you need an especially strong argument to group them together.
Effectively, it’s a statement saying nothing can ever be profoundly different, because people have said it before and been wrong.
Lazy.
It's just propaganda...
"Iran is 2 weeks from a nuclear weapon" / "We obliterated Iran's nuclear dreams"
"Russia is fighting with shovels" / "Russia is on the verge of swarming Europe"
What would Joost Meerloo say about it, I wonder.
New things are happening and it's exciting. "AI bad" statements without examples feel very head-in-sand.
Covid was different -- people dismissed it initially, saying it was going to be like the 2009 swine flu or the seasonal bird flu scares we see in the media.
The iPhone was different -- many columnists said it was just a fancier PDA and that Palm already had the market.
The 2008 crisis was different -- the signs of a housing bubble were present but were dismissed. The derivatives made it different and it imploded.
There are times when things are actually different and you should be able to identify them. AI is one of them.
I don't even need to elaborate much; as a programmer, it's clear how this is a game-changer. We are moving past the era when programs were just predictable if/else chains with regex, into a world where you can accept non-deterministic, never-before-seen inputs and have them interpreted accurately. Just like the Internet added another "dimension" to computer applications, AI is now adding another "dimension" previously unreachable.*
* Just as you could build a big local LAN before the Internet, it's obvious that we had past incarnations of the current technology that gave some taste of that dimension but did not fully "unlock" it.
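The contrast the comment draws can be made concrete. Below is a hedged Python sketch: a deterministic regex path for anticipated phrasings, with a fall-through to a model call for free-form input. `llm_interpret` is a hypothetical stand-in, not any real API:

```python
# Sketch of the "added dimension" argument: deterministic parsing for
# anticipated inputs, with an (assumed, hypothetical) LLM fallback for
# never-before-seen phrasings. Names here are illustrative only.
import re

TIMER_RE = re.compile(r"set (?:a )?timer for (\d+) (second|minute|hour)s?")

def parse_command(text):
    """Old world: works only for phrasings the author anticipated."""
    m = TIMER_RE.fullmatch(text.lower().strip())
    if m:
        return {"action": "timer", "amount": int(m.group(1)), "unit": m.group(2)}
    return None  # e.g. "gimme ten minutes on the pasta" falls through

def interpret(text, llm_interpret=None):
    """Try the cheap deterministic parse first; only free-form input
    pays the cost (and accepts the non-determinism) of a model call."""
    parsed = parse_command(text)
    if parsed is not None:
        return parsed
    if llm_interpret is not None:
        return llm_interpret(text)  # hypothetical model call
    raise ValueError("unrecognized input")
```

The design point is the fallback ordering: the deterministic path stays testable and predictable, and the non-deterministic path only handles what the regex era could not.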
This time, it's truly different.
I have unlimited derision for morally spineless worms who disingenuously make it out to be more than it is -- looking at Dario, Sam, and the silly CEO of Control AI. Also, I hate to say it, but Andrej Karpathy on Twitter -- he's a worthless follow now. I can't blame them, but I am daily exasperated by media figures who can't help but go with whatever they hear prominent individuals in the field say.
If I were a junior now, and less confident, I would be abandoning my career in this climate.
LLMs are not going away. They will get a little better than they are now, and new model paradigms will come around at some point. But this tale of massive redundancy and skyrocketing unemployment is not going to come from LLMs.
This is the only reason why I cannot wait for a pop, and pray to God that it comes sooner than later. I just want to feel good about technology again. I want to tinker, to feel positivity, to know how sustainable the tools I'm using actually are.
I don't want to be reminded daily of the disgusting reality of unbridled capitalism.
For all the things you listed, fewer than 1,000 people were using them. With AI we're clearly not through the Gartner hype cycle yet, but the back end of it is going to be over a billion users.
Internet - this time is different
iPhone - this time is different
Failure to appreciate changes in AI will have left you calling every shot wrong over the past 5 years. While AI models continue to improve at an exponential rate, you'll cling to your facile maxims like "dude it's just predicting the next token it isn't real intelligence".
I was right that blockchain was BS and all the "not sure about Bitcoin, but blockchain will be big" people were idiots.
I've been right for the last couple of years on AI, and that people were vastly underestimating it when it came to its coding potential. And I put my money where my mouth was here. In 2021, when GPT-3 came out, I decided almost immediately I needed to invest a significant amount of my net worth in Google, simply as a hedge against AI destroying knowledge-work jobs. At the time I thought that was probably going to happen around 2030, not realising how far LLMs could go with reasoning.
I'm not particularly intelligent ("only" top 1-2% IQ), but my ability to predict the future is very good. If you have a skill you're unusually good at, you might relate to how strange it is that other people find so hard the thing you find kinda easy. For me, that's predicting things, and computers.
Since I was a young teen I have been worrying about AI. Most of my IRL best friends I made through talking about AI risk in the 2010s, when I was studying AI.
Admittedly I got some of the details wrong back then. In 2010 I thought a lot of manual labour jobs would be easier to automate first – warehouse work, mail, taxis, buses, trains, etc. I worried primarily about the economic and political ramifications, and much less about ASI scenario (at least in this half of the century). But I think still I got the general timeframes and direction right. This was the decade I was concerned about.
I'm so scared right now... My whole life I've had nightmares about AI. I know there are some people who talk about how AI is an existential risk, but it feels like they don't internalise it like I do. They're not prepping like me for one, not that you really can prep for what's coming. If they're concerned, why don't they have the nightmares of the omnipresent AI which you can't outthink or punch to protect those you love? AI is so powerful in the scariest ways. Super viruses, mass surveillance and control, mind reading, unimaginable sci-fi weapons. It's like a horror story, but suddenly real.
I am an OG AI doomer, but until the last few months I've at least always had some doubt in my mind about whether I'm right, perhaps not about the risk of AI broadly, but about whether we'd actually be able to develop highly capable AIs while I still have a lot of my life ahead of me.
In my opinion this time is different, and what I've been worrying about for the last couple of decades is now here.
We are collectively the indigenous peoples of America and the Europeans have just arrived in the new world. The risk vectors are now endless and how this all plays out is hard to know exactly. What we do know is that the majority of ways this will play out are bad, and some are incomprehensibly bad. Some may achieve status and wealth in the near-term, but longer-term we're all dead, or worse.
I always worry these comments make me sound like a lunatic, I think I am, but I hope I am. I hope you will all forgive me, but I just need to shout about this tonight while I still can. We need to stop this insanity. Data centers need to be nuked. You may doubt me now, but in time you will understand. Hopefully I won't be around to say I told you so. Please make the best of the time we have left.
Love the Sir Terry reference.