What that means is that if you work in a certain context, for a while you keep seeing AI get a 0 because it is worse than the current process. Behind the scenes the underlying technology is improving rapidly, but because it hasn’t crossed the viability threshold you don’t feel it at all. From this vantage point, it is easy to dismiss the whole thing and forget about the slope, because the whole line is under the surface of usefulness in your context. The author has identified two cases where current AI is below the cusp of viability: design and large-scale changes to a codebase (though Codex is cracking the second one quickly).
The hard and useful thing is not to find contexts where the general purpose technology gets a 0, but to surf the cusp of viability by finding incrementally harder problems that are newly solvable as the underlying technology improves. A very clear example of this is early Tesla surfing the reduction in Li-ion battery prices by starting with expensive sports cars, then luxury sedans, then normal cars. You can be sure that throughout the first two phases, everyone at GM and Toyota was saying: Li-ion batteries are totally infeasible for the consumers we prioritize who want affordable cars. By the time the technology is ready for sedans, Tesla has a 5 year lead.
IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.
Claude Code is incredible. Where I work, there are a huge number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile.
I find it hard to buy into non-SWEs' opinions on the uselessness of AI, mostly because I think the innovation is simply lagging in other areas. I don't doubt that they don't yet have compelling AI tooling.
They think that if an engineer makes $100k, then a machine that produces the work of 100 million of them would be worth $10 trillion per year. That certainly wouldn't be the case: supply and demand dictate that as cost goes down there will be more demand, but not to an infinite degree, and the overall contribution to economic output would probably be within 2x of what we have today. It's just that something that used to cost a lot would suddenly be very cheap and widely available.
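For the record, the arithmetic behind that headline figure, taking the comment's own numbers at face value:

    10^{8}\ \text{engineers} \times \$10^{5}\ \text{per engineer-year} = \$10^{13}\ \text{per year} = \$10\ \text{trillion per year}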
There'd be an economic bottleneck somewhere else. I think most people nowadays understand that A: technology in general has hit diminishing returns, and B: it has taken on increasingly sinister overtones.
This is the crux of the OP's argument, adding in that (in the meantime) the incumbents and/or bad actors will use it as a path to intensify their political and economic power.
But to me the article fails to:
(1) actually make the case that AI's not going to be 'valuable enough', which is a sweeping and bold claim (especially in light of how fast it is improving); and
(2) quantify AI's true value versus the crazy overhyped valuation, which is admittedly hard to do - but it matters whether we're talking 10% or 100x overvalued.
If all of my direct evidence (from my own work and life) is that AI is absolutely transformative and multiplies my output substantially, AND I see that that trend seems to be continuing - then it's going to be a hard argument for me to agree with #1 just because image generation isn't great (and OP really cares about that).
Higher Ed is in crisis; VCs have bet their entire asset class on AI; non-trivial amounts of code are being written by AI at every startup; tech companies are paying crazy amounts for top AI talent... in other words, just because it can't one-shot some complex visual design workflow does not mean (a) it's limited in its potential, or (b) that we fully understand how valuable it will become given the rate of change.
As for #2 - well, that's the whole rub, isn't it? Knowing how much something is overvalued or undervalued is the whole game. If you believe it's waaaay overvalued with only a limited time before the music stops, then go make your fortune! "The Big Short 2: The AI Boogaloo".
My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain.
This might be the money quote, encapsulating the difference between people who say their work benefits from LLMs and those who don't. Expecting it to one-shot your entire module will leave you disappointed; using it for code completion, generating documentation, and small-scale agentic tasks frees you up from a lot of little trivial distractions.

That can’t be good.
What is the value of technology which allows people to communicate clearly with other people of any language? That is what these large language models have achieved. We can now translate pretty much perfectly between all the languages in the world. The curse of the Tower of Babel has been lifted.
There will be a time in the future when people will not be able to comprehend that you once couldn't exchange information regardless of personal language skills.
So what is the value of that? Economically, culturally, politically, spiritually?
I wish more AI skeptics would take this position but no, it's imperative to claim that it's completely useless.
In this case, the reaction is already visible: more interest in decentralized systems, peer-to-peer coordination, and local computing instead of cloud-centric pipelines. Many developers have wanted this for years.
AI companies are spending heavily on centralized infrastructure, but the trend does not exclude the rise of strong local models. The pace of progress suggests that within a few years, consumer hardware and local models will meet most common needs, including product development.
Plenty of people are already moving in that direction.
Qwen models run well locally, and while I still use Claude Code day-to-day, the gap is narrowing. I'm waiting on the NVIDIA AI hardware to come down from $3500 USD.
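For anyone who hasn't tried local models, here is a minimal sketch of what querying a local Qwen model looks like through ollama's local HTTP API. It assumes ollama is serving on its default port and that the model tag used here (qwen2.5-coder) has already been pulled; adjust for whatever fits your hardware.

    import json
    import urllib.request

    # Ask a locally hosted Qwen model a question via ollama's /api/generate
    # endpoint. With "stream": False the server returns a single JSON object
    # whose "response" field holds the model's answer.
    payload = json.dumps({
        "model": "qwen2.5-coder",  # example tag; pick a size that fits your GPU/RAM
        "prompt": "Write a one-line Python function that reverses a string.",
        "stream": False,
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])

The same pattern works against any local server that exposes a compatible endpoint, which is part of why the gap with hosted tools keeps narrowing.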
We are a decade or two into having massive video coverage, such that you are probably on someone's camera for much of your day out in the world, with video feeds that are increasingly cloud-hosted.
But nobody could possibly watch all that video. Even for cameras controlled directly by the police, the footage had already outstripped the ability of humans to monitor it. At best you could refer to it when you had reason to think there'd be something on it, and even that was hugely expensive in human time.
Enter AI. "Find where Joe Schmoe was at 3:30pm yesterday and show me the video" "Give me a written summary of all the cars which crossed into the city from east to west yesterday afternoon." "Give me the names of everyone who entered the convenience store at 2323 Monument St last week." "Give me a written summary of Sue Brown's known activities in November."
The total surveillance society is coming.
I think it will be the biggest impact AI has on society in retrospect. I, for one, am not looking forward to it.
> But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.

Lately I’ve been finding LLM output to be hit and miss, but at the same time, I wouldn’t say they’re useless…
I guess the ultimate question is - if you’re currently paying for an LLM service, could you see yourself sometime in the future disabling all of your accounts? I’d bet no!
In this article we see a sentiment I've often seen expressed:
> I doubt the AGI promise, not just because we keep moving the goal posts by redefining what we mean by AGI, but because it was always an abstract science fiction fantasy rather than a coherent, precise and measurable pursuit.
AGI isn't difficult at all to describe. It is basically a computer system that can do everything a human can. There are many benchmarks that AI systems fail at (especially real life motor control and adaptation to novel challenges over longer time horizons) that humans do better at, but once we run out of tests that humans can do better than AI systems, then I think it's fair to say we've reached AGI.
Why do authors like OP make it so complicated? Is it an attempt at equivocation so they can maintain their pessimistic/critical stance with an effusive deftness that confounds easy rebuttal?
It ultimately seems to come down to a moral/spiritual argument more than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match them in general abilities?
Yes... I don't think the current process of using a diffusion model to generate an image is the way to go. We need AI that integrates fully within existing image and design tools, so it can do things like rendering SVG, generating layers and manipulating them, the same as we would with the tool, rather than one-shot generating the full image via diffusion.
Same with code -- right now, so much AI code generation and modification, as well as code understanding, is done via the raw LLM. But we have great static analysis tools available (i.e., what IDEs do to understand code). LLMs that have access to those tools will be more precise and efficient.
It's going to take time to integrate LLMs properly with tools, and to train LLMs to use those tools the best way. Until we get there, the potential is still limited. But I think the potential is there.
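As a rough illustration of the idea (not any particular product's implementation), here is a minimal sketch of the kind of static-analysis helper an agent could call instead of re-reading raw source. The function name and the way it would be wired into an agent framework are assumptions; the point is that the agent asks for structured facts rather than guessing from text.

    import ast

    def list_function_signatures(path: str) -> list[str]:
        # Parse a Python file and return "name(arg, ...)" for each function.
        # This is the kind of cheap, precise fact an LLM agent could request
        # via a tool call instead of stuffing the whole file into its context.
        with open(path, "r", encoding="utf-8") as f:
            tree = ast.parse(f.read(), filename=path)
        signatures = []
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                args = ", ".join(a.arg for a in node.args.args)
                signatures.append(f"{node.name}({args})")
        return signatures

    if __name__ == "__main__":
        # Example: list the functions defined in this very file.
        for sig in list_function_signatures(__file__):
            print(sig)

An agent with a handful of tools like this, plus the IDE's rename/refactor machinery, gets precise answers about a codebase instead of approximating them from raw tokens.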
The word "AI" is now not only a buzzword but one that thinly disguises some underlying strategies. It is also a bubble, part of which is currently bursting - you can see this in the stock market.
I really think society overall has to change. I know this is wishful thinking, but we cannot afford to hand that extra money to a few super-rich while inflation skyrockets. This is organised theft. AI is not the only troublemaker, of course; a lot of this is a systemic problem of how markets work, or rather don't work. But when politicians are de facto lobbyists and/or corrupt, then the whole model of a "free" market breaks down in various ways. On top of that, finding jobs is becoming harder and harder in various areas.
Bubble aside, this could be the most destructive effect of AI. I would add that it is also destroying creativity, because when you don't know whether that "amazing video clip" was actually created by a human or an AI, it's no longer that amazing. (To use a trivial example: a video of a cat and a dog interacting in a way that would be truly funny if it were real might go viral, but it means nothing if it was AI-generated.)
What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.
It's also the vision that we will reach a point where _any task_ can be fully automated (the ultimate promise of AGI). That would allow _any business with enough capital_ to increase profits significantly by replacing humans with AI-driven machines.
If that were to happen, the impact on society would be absolutely devastating. It will _not_ be a matter of "humans will just find other jobs to do, just like they used to be farmers and then worked in factories". Because if the promise is true, then whatever "new job" that emerges could also be performed better by an AI. And the idea that "humans will be free to engage in the pursuits they love and enjoy" is bonkers fantasy as it is predicated on us evolving into a scarcity-free utopia where the state (or more likely pseudo-states like BigCorp) provide the resources we need to live without requiring any exchange of labor. We can't even give people SNAP.
Not for Elon, apparently.
Seems like AI killed blockchain just like the war in Ukraine killed COVID.
What's happening now is already pretty incredible given the understanding that we're basically still at the 'chat bot' stage for most people. The idea of agency is still very, very recent, but it's understandable that most people (particularly non-SDs) are not impressed.
It's easy to look at the present and be cynical. If it is only able to solve your problem 95% of the time, you still can't trust it. I think the bets are really about how far we are from 99%, even for random stuff. The fact that a chatbot can do any of this at all, just by predicting the next probable tokens, despite never being explicitly trained to, is wild. The pace of improvement in the past 5 years has been dizzying.
I'm not out here trying to put a dollar amount on it. But certainly, there is going to be a lot of money to be made. Of course it's a front for money and power. But like... isn't that the point of a corporation?
If you just wanted land, water, and electricity, you could buy them directly instead of buying $100 million of computer hardware bundled with $2 million worth of land and water rights. Why are high end GPUs selling in record numbers if AI is just a cover story for the acquisition of land, electricity, and water?
The hype around AI is about selling an illusion to naive people.
It is like creating a hammer that drives nails by itself... like cars that choose their path by themselves.
So stop thinking AI is intelligent... it is merely an advanced tool that demands skill and creativity like any other. Its output is limited by the ability of its user.
The worry should be the amount of resources devoted to vanity (hammers placed in newborns' hands) or nails driven in the wrong place (viral fake content targeted at unaware people).
Just as in the Industrial Revolution people were reduced to screw-tighteners, minds will be reduced to bad prompters expecting wonders and producing bad content, or more of the same. A step back for civilization, except for the money-makers and the thinkers, until the AI revolution gives birth to its own Karl Marx.
Investment mechanically _causes_ profits, and if you're as big as big tech is, then some of that profit will be yours. In the end, stupid investment will end badly, but until it actually plays out it can very much be rational for _everyone_ involved, even if none of them are lying about anything.
Bubbles probably don't even have to hurt after the fact if the government is willing to support demand when things go south. The real cost is in the things we could have done instead. At least GPUs are genuinely useful (especially with the end of Moore's law), energy investment is never a bad thing in the end, and those assets have very long useful lives.
I think that what is really behind the AI bubble is the same thing behind most money, power, and influence: land and resources. The AI future that is promised, whether to you and me or to the billionaires, requires the same thing: lots of energy, lots of land, and lots of water. Datacenters that outburn cities to keep the data churning are big, expensive, and have to be built somewhere. The deals made to develop this kind of property are political — they affect cities and states more than just about any other business run within their borders.
Perhaps governments should add clauses to the contracts they make to keep this big power imbalance from arising.
The article literally starts with hyperbole and not being charitable at all. I'm sure there are many arguments for why AI will bring doom and gloom, but outright being dishonest in the first 7 words of the article will put off the people you actually want to read this article.
What happened to well-researched and well-argued points of view? Where you take good-faith arguments into account, and you don't just gaslight and strawman your way into some easy-to-make points for the purpose of being "sharable" on social media?
Most of this feels like people trying to get rich off VC money — and VCs trying to get rich off someone else’s money.
> Again, I think that AI is probably just a normal technology, riding a normal hype wave. And here’s where I nurse a particular conspiracy theory: I think the makers of AI know that.
I think those committing billions towards AI know it too. It's not a conspiracy theory. All the talk about AGI is marketing fluff that makes for good quotes. All the investment in data centers and GPUs is for regular AI. It doesn't need AGI to justify it.
I don't know if there's a bubble. Nobody knows. But what if it turns out that normal AI (not AGI) will ultimately provide so much value over the next couple decades that all the data centers being built will be used to max capacity and we need to build even more? A lot of people think the current level of investment is entirely economically rational, without any requirement for AGI at all. Maybe it's overshooting, maybe it's undershooting, but that's just regular resource usage modeling. It's not dependent on "coding consciousness" as the author describes.
And yet here we are.
First of all, this AI stuff is next level. It's as great as, if not greater than, going to space or going to the moon.
Second, the rate at which it is improving makes the hype relevant and realistic.
I think what's throwing people off are two things. First, people are just overexposed to AI, and the overexposure is causing people to feel AI is boring and useless slop. Investments are heavy into AI, but the people who throw that money around are a minority; overall, the general public is actually UNDER-hyping AI. Look at everyone on this thread. Everyone, and I mean everyone, isn't overly optimistic about AI. Instead, the irony is that everyone, and I mean everyone again, strangely thinks the world is overhyped about AI, and they are wrong. This thread and practically every thread on HN is a microcosm of the world, and the sentiment is decidedly against AI. Think about it like this: if Elon Musk invented a car that cost $1 and could travel at FTL speeds to anywhere in the universe, then interstellar travel would be routine and boring within a year. People would call it overhyped.
Second, the investment and money spent on AI is definitely overhyped, right? Think about it. If we quantify the utility and achievement of what AI can currently do and what it's projected to achieve, the math works out. If you quantify the profitability of AI, the math suddenly doesn't work out.
>I’m more than open to being wrong;
Doubtful.
>That’s quite a contradiction. A datacenter takes years to construct. How will today’s plans ever enable a company like OpenAI to catch up with what they already claim is a computational deficit that demands more datacenters?
It's difficult to steelman such a weird argument. If a deficit can't be remedied immediately, should it never be remedied?
This is literally how capex works. You purchase capacity now, based on receiving it, and the rewards of having it, in the future.
>And yet, these deals are made. There’s a logic hole here that’s easily filled by the possibility that AI is a fitting front for consolidation of resources and power.
No, you just made some stuff up, and then suggested that your own self-inflicted confusion might be better explained with some other stuff you made up.
>Globalism eroded borders by crossing them, this new thing — this Privatism — erodes them from within.
What? It's called capitalism. You don't need a new word for it every 12 months. Emotive words like "erosion" say nothing; they are just aimed at stirring people up. Demonstrate the erosion.
>Remember, datacenters are built on large pieces of land, drawing more heavily from existing infrastructure and natural resources than they give back to the immediately surrounding community
How did you calculate this? Show your work. Pretty sure that if someone made EQ SY1, SY2, and SY3 disappear, the local community, the distant community, and communities all over the planet would be negatively affected.
>When a private company can construct what is essentially a new energy city with no people and no elected representation, and do this dozens of times a year across a nation to the point that half a century of national energy policy suddenly gets turned on its head and nuclear reactors are back in style
To take the overwrought, disproportionate, emotive language out of this:
"How are private entities allowed to build big things I dont like, including power sources I dont like"
The answer is that many people are allowed to do things you don't approve of. This is normal. This is society. Not everything needs the approval of the blogerati. Such a world would be horrific.
>when the infrastructure that powers AI becomes more valuable than the AI itself, when the people who control that infrastructure hold more sway over policy and resources than elected governments.
Show your working. How are the infrastructure providers going to run the government? I believe historically big infrastructure projects tend to die, require some government inducements, and then go away. People had similar misgivings about the railroads in the US; in fact it was a big bugbear for Henry George, I believe. Is Amtrak secretly pulling the strings of the US Deep State? If the US government is weak to private interests, that's up to the good burghers of yankistan to correct at the polls. If electoral politics don't work, then other means seppos find scary might be required. Freaking out about AI investment seems like a weird place to suddenly be concerned about this.
See also: AT&T Long Lines, hydroelectric dams, nuclear energy, submarine cable infrastructure. If political power came from owning infrastructure, we should be more worried about, like, Hurricane Electric. It's demonstrable that people who build big infra don't run the planet. Heck, Richest Man and Weird Person Darling Elon Musk doesn't honestly command much infrastructure; he mostly just lives on hype and speculation.
>but I’m really just following the money and the power to their logical conclusion.
The more you need to invoke "logical conclusion", the less genuine and logical the piece reads.
>Maybe AI will do everything humans do. Maybe it will usher in a new society defined by something other than the balancing of labor units and wealth units. Maybe AGI — these days defined as a general intelligence that exceeds human kind in all contexts — will emerge and “justify” all of this. Maybe.
Probably things will continue on as they always have, but the planet will have more datacenter capacity. Likely, if the AI bubble does burst, datacenter capacity will be cheaper.
>The market concentration and incestuous investment shell game is real.
Yes? And that will probably explode and we will see AI investors jumping out of buildings. Nvidia is in a position right now to underwrite big AI datacentre loans, which could completely offset the huge gains they have made. What about it? Again, you demonstrate nothing.
>The infrastructure is real. The land deals are real.
Yes. Remember to put 2 truths before your lie.
>The resulting shifts in power are real.
So far they exist in your mind.
>we will find ourselves citizens of a very new kind of place that no longer feels like home.
Reminds me of an old argument that a raving white supremacist used to push on me. That "justice" as he defined it, was that society not change so old people wont be scared by it. That having a new (possibly browner) person running the local store was tantamount to and justification for genocide.
Change is a constant. That change making you sad is not in and of itself a bad thing. Please adjust accordingly.
> I think that what is really behind the AI bubble is the same thing behind
> most money, power, and influence: land and resources. The AI future that is
> promised, whether to you and me or to the billionaires, requires the same
> thing: lots of energy, lots of land, and lots of water. Datacenters that
> outburn cities to keep the data churning are big, expensive, and have to be
> built somewhere. [...] When the list of people who own this property is as
> short as it is, you have a very peculiar imbalance of power that almost
> creates an independent nation within a nation. Globalism eroded borders by
> crossing them, this new thing — this Privatism — erodes them from within.
In my opinion, this is an irrationally optimistic take. Yes, of course, building private cities is a threat to democratic conceptions of a shared political sphere, and power imbalances harm the institutions that we require to protect our common interests.

But it should be noted that this "privatism" is nothing new - people have always complained about the ultra-wealthy having an undue influence on politics, and when looking at the USA in particular, the current situation - where the number of the ultra-wealthy is very small, and their influence is very large - has existed before, during the Gilded Age. Robber barons are not a novel innovation of the 21st century. That problem has been studied before, and if it was truly just about them robber barons, the old solutions - grassroots organization, economic reform and, if necessary, guillotines - would still be applicable.
The reason that these solutions work is that even though Mark Zuckerberg may, on paper, own and control a large amount of land and industrial resources, in practice, he relies on societal consent to keep that control. To subdue an angry mob in front of the Meta headquarters, you need actual people (such as police) to do it for you - and those people will only do that for you for as long as they still believe either in your doing something good for society, or at least believe in the (democratic) societal contract itself. Power, in the traditional sense, always requires legitimization; without the belief that the ultra-powerful deserve to be where they are, institutions will crumble and finally fail, and then there's nobody there to prevent a bunch of smelly new-age Silicon Valley hippies from moving into that AI datacenter, because of its great vibrations and dude, have you seen those pretty racks, I'm going to put an Amiga in there, and so on.
However, again, I believe this to be irrationally optimistic. Because this new consolidation of power is not merely over land and resources by means of legitimized violence, it's also about control over emerging new technologies which could fundamentally change how violence itself is exercised. Palantir is only the first example to come to mind of companies that develop mass surveillance tools potentially enabling totalitarian control on an unprecedented scale. Fundamentally, all the "adtech" companies are in the business of constructing surveillance machines that could be used not only to predict whether you're in the market for a new iPhone, but also to assess your fidelity to party principles and overall danger to dear leader. Once predictive policing has identified a threat, of course, "self-driving", embodied autonomous systems could be automatically dispatched to detain, question or neutralize it.
So why hasn't that happened yet? After all, Google has had similar capabilities for decades now, why do we still not go to our knees before weaponized DJI drones and swear allegiance to Larry Page? The problem, again, is one of "alignment" - for the same reason that police officers will not shoot protesters when the state itself has become illegitimate, "Googlers" will refuse to build software that influences election results, judges moral character or threatens bodily harm. What's worse, even if tech billionaires would find a small group of motivated fascist engineers to build those systems for them, they could never go for it, as the risk of being found out is way too severe: remember, their power (over land and resources) relies on legitimacy; that legitimacy would instantly be shaken if there was a plausible leak of plans to turn America into a dystopian surveillance state.
What you would really need to build that dystopian surveillance state, then, is agents that can build software according to your precise specifications, whose alignment you can control, that will follow your every order in the most sycophantic manner, and that are not capable of leaking what you are doing to third parties even when they do see that what they're doing is morally questionable.
I could be wrong, this could be nonsense. I just can't make sense of it.
If you tell me, though, that "We installed AI in a place that wasn't designed around it and it didn't work" you're essentially complaining that your horse-drawn cart broke when you hooked it up to your HEMI. Of course it didn't work. The value proposition built around the concept of long dev cycles with huge teams and multiple-9s reliability deliverables is not what this stuff excels at.
I have churned out perfectly functional MVPs for tens of projects in a matter of weeks. I've created robust frameworks with >90% test coverage for fringe projects that would never have otherwise gotten the time budget allotted to them. The boundaries of what can be done aren't being pushed up higher or down deeper, they're being pushed out laterally. This is very good in a distributed sense, but not so great for business as usual - we've had megacorps consolidating and building vertically forever and we've forgotten what it was like to have a robust hacker culture with loads of scrappy teams forging unbeaten paths.
Ironically, VCs have completely missed the point in trying to all build pickaxes - there's a ton of mining to do in this new space (but the risk profile makes the finance-pilled queasy). We need both.
AI is already very good at some things, they just don't look like the things people were expecting.