I look forward to the "personal computing" period, with small models distributed everywhere...
The author didn't mention them.
AI companies took vast amounts of data from the Internet, free and without permission,
sacrificing the interests of website owners.
That's not sustainable.
It's hard to see AI going far on that foundation.
This is clearly false, as is obvious to anyone who has done any software engineering. The big corps have no shortage of capital and could simply add more engineers if this were true. But we know what happens when you add more people to a project.
Rather, there are other more fundamental constraints, like the complexity of software and our ability to grasp and manipulate it. I think the argument would have been better if it focused on that. It'd be more based.
Which is a long-winded way of saying that I agree with others here that this article is full of hubris. I hope you got those chicks on Substack clapping for you, at least. Fast lane to getting laid for sure.
with AI we have the opposite problem - no friction to try it, but most people hit a wall figuring out what it's actually good for beyond parlor tricks. that's way harder to solve than bandwidth.
I've watched founders burn weeks trying to shoehorn LLMs into problems that don't need them, when a basic script would've worked. the real shift happens when someone builds something that was genuinely impossible before, not just faster. we're still waiting for that moment.
That's not how I remember it (but I was just a kid so I might be misremembering?)
As I remember (and what I gather from media of the era), the late 80s/early 90s were hyper-optimistic about tech. So much so that I distinctly remember a (German?) TV show when I was a kid where they had what amounts to modern smartphones, and we all assumed that was right around the corner. If anything, it took too damn long.
Were adults outside my household not as optimistic about tech progress?
So, when people say something about the future, they look to the past to draw projections or spot similar trends, but they may be missing changes in the full context. The factors of demand and automation alone might be too few to understand the implications. What about the political, social, and economic landscape? These systems are not so insulated that they can be studied using just a few factors.
1. the opening premise comparing AI to dial-up internet; basically everyone knew the internet would be revolutionary long before 1995. Being able to talk to people halfway across the world on a BBS? Sending a message to your family on the other side of the country and them receiving it instantly? Yeah, it was pretty obvious this was transformative. The Krugman quote is an extreme, notable outlier, and it gets thrown out around literally every new technology, from blockchain to VR headsets to 3DTVs, so just like, don't use it please.
2. the closing thesis of
> Consider the restaurant owner from earlier who uses AI to create custom inventory software that is useful only for them. They won’t call themselves a software engineer.
The idea that restaurant owners will be writing inventory software might make sense if the only challenge of creating custom inventory software, or any custom software, was writing the code... but it isn't. Software projects don't fail because people didn't write enough code.
The real parallel is Canal Mania — Britain’s late-18th-century frenzy to dig waterways everywhere. Investors thought canals were the future of transport. They were, but only briefly.
Today’s AI runs on GPUs — chips built for rendering video games, not thinking machines. Adapting them for AI is about as sensible as adapting a boat to travel across land. Sure, it moves — but not quickly, not cheaply, and certainly not far.
It works for now, but the economics are brutal. Each new model devours exponentially more power, silicon, and capital. It just doesn't scale.
The real revolution will come with new hardware built for the job (hardware that hasn't been invented yet): thousands of times faster and more efficient. When that happens, today’s GPU farms will look like quaint relics of an awkward, transitional age: grand, expensive, and obsolete almost overnight.
It's falling into the trap of assuming we're going to get to the science fiction abilities of AI with the current software architectures, and within a few years, as long as enough money is thrown at the problem.
All I can say for certain is that all the previous financial instruments that have been jumped on to drive economic growth have eventually crashed. The dot com bubble, credit instruments leading to the global financial crisis, the crypto boom, the current housing markets.
The current investments around AI that we're all agog at are just another large scale instrument for wealth generation. It's not about the technology. Just like VR and BioTech wasn't about the technology.
That isn't to say the technology outcomes aren't useful and amazing; they are just independent of the money. Yes, there are trillions (a number so large I can't quite comprehend it, to be honest) being funneled into AI. No, that doesn't mean we will get incomprehensible advancements out the other end.
AGI isn't happening this round folks. Can hallucinations even be solved this round? Trillions of dollars to stop computers lying to us. Most people where I work don't even realise hallucinations are a thing. How about a Trillion dollars so Karen or John stop dismissing different viewpoints because a chat bot says something contradictory, and actually listen? Now that would be worth a Trillion dollars.
Imagine a world where people could listen to others outside of their bubble. Instead they're being given tools that reinforce the bubble.
The key variable for me in this house of cards is how long folks will wait before they need to see their money again, and whether these companies will head in the right direction long enough, given these valuations, to get to AGI. Not guaranteed. And in the meantime society will need to play ball (also not a guarantee).
most companies are still figuring out basic workflows while we debate AGI timelines. the real value unlock happens when AI becomes boring infrastructure - like cloud did around 2015, not 2008.
Maybe it's my bubble, but so far I haven't heard anyone say that. What kinds of jobs would those be, given that both forms of work, physical and knowledge, will be automatable sooner or later?
If you make the context small enough, we're back at /api/create /api/read /api/update /api/delete; or, if you're old-school, a basic function
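That "basic function" framing can be sketched in a few lines. This is a hypothetical in-memory store, with all names illustrative rather than taken from any real API:

```python
# A tiny in-memory inventory store: the four /api/* endpoints
# collapse into one dict and four one-line functions.
inventory = {}

def create(item, qty):
    inventory[item] = qty

def read(item):
    return inventory.get(item)  # None if missing

def update(item, qty):
    if item in inventory:
        inventory[item] = qty

def delete(item):
    inventory.pop(item, None)  # no error if already gone

create("flour", 10)
update("flour", 7)
print(read("flour"))  # 7
delete("flour")
print(read("flour"))  # None
```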
Does that comparison with the fiber infra from the dotcom era really hold up? Even when those companies went broke, the fiber was still perfectly fine a decade later. In contrast, all those datacenters will be useless when the technology has advanced by just a few years.
Nobody is going to be interested in those machines 10 years from now, whether the bubble bursts or not. Data centers are like fresh produce: only good for a short period of time and useless soon after. They are constantly being replaced.
When the railroad bubble popped we had railroads. Metal and sticks, and probably more importantly, rights-of-way.
If this is a bubble, and it pops, basically all the money will have been spent on Nvidia GPUs that depreciate to 0 over 4 years. All this GPU spending will need to be done again, every 4 years.
Hopefully we at least get some nuclear power plants out of this.
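The replacement-cycle arithmetic above is simple but worth making explicit. The total spend here is an assumed, illustrative figure, not a number from the thread:

```python
# Straight-line depreciation of a GPU fleet to zero over 4 years.
# capex is an assumed, illustrative total; only the ratio matters.
capex = 400e9            # hypothetical $400B of GPU spending
lifetime_years = 4       # depreciates to 0 over 4 years
annual_writeoff = capex / lifetime_years
print(annual_writeoff / 1e9)  # 100.0 -- $100B written off per year,
# and the full spend recurs every 4 years just to stand still.
```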
I don't expect that to cease in my lifetime.
I’m not sure that’s a certainty.
Because some notable people dismissed things that wound up having profound effect on the world, it does not mean that everything dismissed will have a profound effect.
We could just as easily be "peak Laserdisc" as "dial-up internet".
What is clear, is that we have strapped a rocket to our asses, fueled with cash and speculation. The rocket is going so fast we don't know where we're going to land, or if we'll land softly, or in a very large crater. The past few decades have examples of craters. Where there are potential profits, there are people who don't mind crashing the economy to get them.
I don't understand why we're allowing this rocket to begin with. Why do we need to be moving this quickly and dangerously? Why do we need to spend trillions of dollars overnight? Why do we need to invest half the fucking stock market on this brand new technology as fast as we can? Why can't we develop it in a way that isn't insanely fast and dangerous? Or are we incapable of decisions not based on greed and FOMO?
While this article presents both sides of the debate, I believe only one of them is real (the one hyping up the technology). There are people like me who are pessimistic about the technology, but we are not in any position of power, and our opinion on the matter is basically side noise. I think a much more common view (among people with any say in the future of this technology) is the belief that this technology is not yet at a point that warrants all this investment. There were people who said that about the internet in 1999, and they were proven 100% correct in the months that followed.
And not just that, they come out with an iPhone that has _no_ camera as an attempt to really distance themselves from all the negative press tech (software and internet in particular) has at the moment.
> 1. Economic strain (investment as a share of GDP)
> 2. Industry strain (capex to revenue ratios)
> 3. Revenue growth trajectories (doubling time)
> 4. Valuation heat (price-to-earnings multiples)
> 5. Funding quality (the resilience of capital sources)
> His analysis shows that AI remains in a demand-led boom rather than a bubble, but if two of the five gauges head into red, we will be in bubble territory.
This seems like a more quantitative approach than most of "the sky is falling", "bubble time!", "circular money!" etc analyses commonly found on HN and in the news. Are there other worthwhile macro-economic indicators to look at?
It's fascinating how challenging it is to meaningfully compare recent events to prior economic cycles such as the Y2K-era tech bubble. It seems like it should be easy, but AFAICT it barely even rhymes.
words to live by...
That is the real dial-up thinking.
Couldn't AI like be their custom inventory software?
Codex and Claude Code should not even exist.
>We’re in the 1950s equivalent of the internet boom — dial-up modems exist, but YouTube doesn’t.
A single image generally took nothing like a minute. Most people had moved to 28.8K modems that would deliver an acceptable large image in 10-20 seconds. Mind you, the full-screen resolution was typically 800x600 and color was an 8-bit palette… so much less data to move.
Moreover, thanks to “progressive jpeg”, you got to see the full picture in blocky form within a second or two.
And of course, with pages less busy and tracking cookies still a thing of the future, you could get enough of a news site up to start reading in less time than it takes today.
One final irk is that it’s a little overdone to claim that “For the first time in history, you can exchange letters with someone across the world in seconds”. Telex had been around for decades, and faxes, taking 10-20 seconds per page, were already commonplace.
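The transfer-time claim above is easy to sanity-check. Assuming a roughly 50 KB image (my guess at a "large" image for the era, not a figure from the comment):

```python
# Rough transfer time for a large late-90s JPEG over a 28.8K modem.
# image_bytes is an assumed size; real images varied widely.
bits_per_second = 28_800
image_bytes = 50 * 1024          # ~50 KB, assumed
seconds = image_bytes * 8 / bits_per_second
print(f"{seconds:.1f} s")        # ~14.2 s, inside the 10-20 s range
```

Line overhead and compression quirks would shift this a bit either way, but the 10-20 second ballpark holds.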
Because we all know how essential the internet is nowadays.
We’re at the end of Moore’s Law, it’s pretty reasonable to assume. 3nm M5 chips mean there are—what—a few hundred silicon atoms per transistor? We’re an order of magnitude away from 0.2 nm, which is the diameter of a single silicon atom.
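Taking the comment's figures at face value (with the caveat that "3 nm" is a marketing node name, not a literal feature size):

```python
# Atoms across a nominal 3 nm feature, using the comment's numbers.
# Both values are the commenter's rough figures, not precise physics.
feature_nm = 3.0
silicon_atom_diameter_nm = 0.2
atoms_across = feature_nm / silicon_atom_diameter_nm
print(atoms_across)  # 15.0 -- roughly one order of magnitude of headroom
```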
My point is, 30 years have passed since dial up. That’s a lot of time to have exponentially increasing returns.
There’s a lot of implicit assumption that “it’s just possible” to have a Moore’s Law for the very concept of intelligence. I think that’s kinda silly.
The Claw allows a garbage truck to be crewed by one man where it would have needed two or three before, and to collect garbage much faster than when the bins were emptied by hand. We don't know what the economics of such automation of (physical) garbage collection portend in the long term, but what we do know is that sanitation workers are being put out of work. "Just upskill," you might say, but until Claw-equipped trucks started appearing on the streets there was no need to upskill, and now that they're here the displaced sanitation workers may be in jeopardy of being unable to afford to feed their families, let alone find and train in some new marketable skill.
So no, we're in The Claw era of AI, when business finds a new way to funge labor with capital, devaluing certain kinds of labor to zero with no way out for those who traded in such labor. The long-term implications of this development are unclear, but the short-term ones are: more money for the owner class, and some people are out on their ass without a safety net, because this is Goddamn America and we don't brook that sort of commie nonsense here.
I mean, sort of, but the fiber optics in the ground have been upgraded by several orders of magnitude over their original capacity just by replacing the transceivers on either end. And the fiber itself has lasted, and will continue to last, for decades.
Neither of those properties is true of the current datacenter/GPU boom. The datacenter buildings may last a few decades but the computers and GPUs inside will not and they cannot be easily amplified in their value as the fiber in the ground was.
I'm no expert, but I can't help feeling there are lots of things they could be doing vastly better in this regard - presumably there is lots to do and they will get around to it.