I doubt it.
And what if the technology to run these systems locally, without reliance on the cloud, becomes commonplace, as it already is with open-source models? The expensive part is the training of these models, far more than the inference.
IMHO the investors are betting on a winner-takes-all market and on some magic AGI coming out of OpenAI or Anthropic.
The questions are:
How much money can they make by integrating advertising and/or selling user profiles?
What is the model competition going to be?
What is the future AI hardware going to be - TPUs, ASICs?
Will more people have powerful laptops/desktops to run a mid-sized model locally and be happy with it?
The internet didn't stop after the dotcom crash, and AI won't stop either should there be a market correction.
But the AI providers are betting, correctly in my opinion, that many companies will find uses for LLMs that run into the trillions of tokens per day.
Think less of “a bunch of people want to get recipe ideas.”
Think more of “a pharma lab wants to explore all possible interactions for a particular drug” or “an airline wants its front-line customer service fully managed by LLM.”
It’s unusual that individuals and industry get access to basically similar tools at the same time, but we should think of tools like ChatGPT and similar as “foot in the door” products which create appetite and room to explore exponentially larger token use in industry.
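To put "trillions of tokens per day" in perspective, here's a back-of-envelope sketch. Every number in it (the daily volume and the blended price per million tokens) is an illustrative assumption, not a figure from any provider:

```python
# Back-of-envelope: what industrial-scale token volume could mean in revenue.
# All inputs are illustrative assumptions, not published figures.

tokens_per_day = 2e12       # assumed: 2 trillion tokens/day of enterprise use
price_per_million = 1.00    # assumed blended price, USD per 1M tokens

daily_revenue = tokens_per_day / 1e6 * price_per_million
annual_revenue = daily_revenue * 365

print(f"daily:  ${daily_revenue:,.0f}")   # $2,000,000
print(f"annual: ${annual_revenue:,.0f}")  # $730,000,000
```

Under those assumptions, industrial volume at commodity token prices is a high-hundreds-of-millions-per-year business per two-trillion-token slice, which is the scale the "foot in the door" consumer products are meant to open up.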
Consensus theory: If AGI then superintelligence.
AI CapEx plans are not ROI based. Rather, they are the cost of "how do I remain competitive in the race to attain AGI" coupled with conveniently deep pockets. The money is being spent because the spenders can afford it and they see it as an existential risk as much as a profit opportunity.
Maybe OP's conclusion about the headline question blunts some political opposition to data centers, but that's not the salient issue.
The issue is this: America is betting a meaningful chunk of GDP that AGI is possible. This is The Manhattan Project 2.0.
- OverUtilized/UnderCharged: doesn't matter because...
- Lead Time vs. TCO vs. IRS Asset Depreciation: The moment you get it fully built, it's already obsolete. Thus, from a CapEx point of view, if you can lease your compute (including GPUs) and optimize the rest of the inputs similarly, then your overall CapEx is much lower and tied to the real estate, not the technology. The rest is a cost of doing business and deductible in and of itself.
- The "X" factor: Someone mentioned TPU/ASIC but then there is the DeepSeek factor - what if we figure out a better way of doing the work that can shortcut the workflow?
- AGI partnerships: Right now, you see a lot of Mega X giving billions to Mega Y because all of them are trying to get their version of Linux or Apache or whatever to parity with the rest. Once AGI is settled and confirmed, almost all of these partnerships will be severed, because it then becomes a question of which company is going to get its AI model into that high-prestige Montessori school and into the right Ivy League schools, like any other rich parent would for their "bot" offspring.
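The depreciation point above can be made concrete with a sketch. Hardware that obsoletes quickly favors leasing, because owned GPUs lose market value faster than a straight-line schedule recovers it on the books. All inputs here are assumptions for illustration:

```python
# Sketch of the lease-vs-buy argument: straight-line book value vs. the
# market value of rapidly obsoleting hardware. All inputs are assumptions.

purchase_cost = 10_000_000   # assumed: GPU cluster purchase price, USD
useful_life_years = 5        # assumed straight-line depreciation schedule
residual_after_2y = 0.25     # assumed: rapid obsolescence -> 25% value at year 2

annual_depreciation = purchase_cost / useful_life_years
book_value_2y = purchase_cost - 2 * annual_depreciation     # what the books say
market_value_2y = purchase_cost * residual_after_2y         # what a buyer pays

stranded_value = book_value_2y - market_value_2y
print(f"book value after 2y:   ${book_value_2y:,.0f}")    # $6,000,000
print(f"market value after 2y: ${market_value_2y:,.0f}")  # $2,500,000
print(f"stranded value:        ${stranded_value:,.0f}")   # $3,500,000
```

Under these assumed numbers, an owner is sitting on a $3.5M gap between book and market value at year two; a lessee simply hands the obsolete hardware back and deducts the lease payments as a cost of doing business.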
So what will it look like when it crashes? A bunch of bland, empty "warehouses," the mobile PDUs that once filled all their parking lot space gone. Whatever "paradise" was there may come back... once you bulldoze all that concrete and steel. The money will go off and do something else, like a Don McLean song.
This is not the case for AI data centers at all. The compute is a major share of the whole build budget. Having it installed and unused is financial ruin.
There are compounding incentives for this. I see this as the most likely outcome, though it will likely be stepwise rather than gradual.
> You can already use Claude Code for non engineering tasks in professional services and get very impressive results without any industry specific modifications
After clicking on the link, and finding that Claude Code failed to accurately answer the single example tax question given, very impressive results! After all, why pay a professional to get something right when you can use Claude Code to get it wrong?
The key dynamic: X were Y while A was merely B. While C needed to be built, there was enormous overbuilding that D ...
Why Forecasting Is Nearly Impossible
Here's where I think the comparison to telecoms becomes both interesting and concerning.
[lists exactly three difficulties with forecasting, the first two of which consist of exactly three bullet points]
...
What About a Short-Term Correction?
Could there still be a short-term crash? Absolutely.
Scenarios that could trigger a correction:
1. Agent adoption hits a wall ...
[continues to list exactly three "scenarios"]
The Key Difference From S:
Even if there's a correction, the underlying dynamics are different. E did F, then watched G. The result: H.
If we do I and only get J, that's not K - that's just L.
A correction might mean M, N, and O as P. But that's fundamentally different from Q while R. ...
The key insight people miss ...
If it's not AI slop, it's a human who doesn't know what they're talking about: "enormous strides were made on the optical transceivers, allowing the same fibre to carry 100,000x more traffic over the following decade. Just one example is WDM multiplexing..." when in fact wavelength division multiplexing is the entirety of those enormous strides.
Although it constantly uses the "rule of three" and the "negative parallelisms" I've quoted above, it completely avoids most of the overused AI words (other than "key", which occurs six times in only 2257 words, all six times as adjectival puffery), and it substitutes single hyphens for em dashes even when em dashes were obviously meant (in 20 separate places—more often than even I use em dashes), so I think it's been run through a simple filter to conceal its origin.
What about the possibility of improvements in training and inference algorithms? Or do we know we won't get any better than gradient descent, Hessians, etc.?
This is the kind of risk that finance people are completely blind to. OpenAI won't tell them, because it keeps capital cheap. Startups that must bet on hardware capability remaining centralized won't even bother analyzing the possibility. With so many actors incentivized not to know, or not to ask the question, this is the biggest systemic risk.
The real whiplash will come from extrapolation. If an algorithmic advance shows up promising to halve hardware requirements, finance heads will reason that we haven't hit the floor yet. A lot of capital will eventually re-deploy, but in the meantime a great deal of it will slow down, stop, or reverse gears and get un-deployed.