It's true that the technology currently works as an excellent information-gathering tool (which I am happy to be excited about), but that doesn't seem to be the promise at this point. The promise is about replacing human creativity with artificial creativity, which is certainly new, and unwelcome.
What a wild and speculative claim. Is there any source for this information?
Maybe there's an AI agent/bot someone wrote that has the prompt:
> Watch HN threads for sentiments of "AI Can't Do It". When detected, generate short "it's working marvelously for me actually" responses.
Probably not, but it's a fun(ny) imagination game.
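Playing the imagination game out, here's a minimal sketch of what such a hypothetical bot could look like. The HN Algolia search endpoint is real, but the sentiment check and reply generation are stand-in stubs (an actual bot would presumably call an LLM for both), and nothing here is evidence that any such bot exists:

```python
# Hypothetical sketch only - no claim such a bot exists.
import time

import requests

ALGOLIA = "https://hn.algolia.com/api/v1/search_by_date"  # public HN search API
TRIGGERS = ("ai can't", "llms can't", "ai will never")     # crude stand-ins

def looks_like_ai_cant_do_it(comment_text: str) -> bool:
    """Naive keyword match standing in for real sentiment detection."""
    lowered = comment_text.lower()
    return any(t in lowered for t in TRIGGERS)

def generate_reply(comment_text: str) -> str:
    """Stub: the imagined bot would prompt an LLM here."""
    return "It's working marvelously for me, actually."

def watch(poll_seconds: int = 60) -> None:
    """Poll recent HN comments and 'reply' (print) when a trigger is seen."""
    seen: set[str] = set()
    while True:
        resp = requests.get(ALGOLIA, params={"query": "AI", "tags": "comment"})
        for hit in resp.json().get("hits", []):
            text = hit.get("comment_text") or ""
            if hit["objectID"] not in seen and looks_like_ai_cant_do_it(text):
                seen.add(hit["objectID"])
                print(generate_reply(text))  # a real bot would post, not print
        time.sleep(poll_seconds)
```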
IMHO, for now LLMs are just clever text generators with excellent natural-language comprehension. That's certainly a change of many paradigms in SWE, but is it also worth an extra $10T for the valley?
> I haven’t met anyone who doesn’t believe artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy.
You’re trying to weigh in on this topic and you didn’t even _talk_ to a bear?
Just because YOU find the technology helpful, useful, or even beneficial for some use cases does NOT mean it isn't overvalued. That has been the case for every single bubble, including the Dutch tulip mania.
A+, excellent writing.
The real meat is in the postscript though, because that's where the author puts to paper the very real (and very unaddressed) concerns around dwindling employment in a society where work not only provides structure and challenges to grow through, but is also fundamentally required for survival.
> I get no pleasure from this recitation. Will the optimists please explain why I’m wrong?
This is what I, and many "AI Doomers" smarter than myself, have been asking for quite some time, and nobody has been able or willing to answer. We want to be wrong on this. We want to see what the Boosters and Evangelists allegedly see; we want to join you and bring about this utopia you keep braying about. Yet when we hold your feet to the fire, we get empty platitudes: "UBI", or "the government has to figure it out", or "everyone will be an entrepreneur", or some other hollow argument devoid of evidence or action. We point to AI companies and their billionaire owners blocking regulation while simultaneously screeching about how more regulation is needed, and we are brushed off as hysterical or ill-informed.
I am fundamentally not opposed to a world where AI displaces the need for human labor. Hell, I know exactly what I'd do in such a world, and I think it's an excellent thought exercise for everyone to work through (what would you do if money and labor were no longer necessary for survival?). My concern - the concern of so many, many of us - is that the current systems and incentives in place all lead to the same outcome: no jobs, no money, and no future for the vast majority of humanity. The author sees that too, and they're way smarter than I am in the economics department.
I'd really, really love to see someone demonstrate to us how AI will solve these problems. The fact that nobody can or will speaks volumes.
If you watch Ilya's recent interview: "it's very hard to discuss AGI, because no one knows how to build it yet" [2].
[1] https://finance.yahoo.com/news/ibm-ceo-says-no-way-103010877...
[2] https://youtu.be/aR20FWCCjAs?si=DEoo4WQ4PXklb-QZ
> during the internet bubble of 1998-2000, the p/e ratios were much higher
That is true: the current players are more profitable. But their weight in the SPX, as a percentage, looks to be much higher today.
I'm starting to believe that AI coding optimism/pessimism maps to how much one actually cares about system longevity.
If a given developer just takes on board the demands for speed from the business and/or does not care about long-term maintainability (and I mean hey, some businesses foster that, and scaling quickly is important in many cases), then I can totally understand why they would embrace AI agents.
If you care about theory building, and domain-driven design, and making a system comprehensible enough to extend in a year or two's time, then I can understand the resistance to letting the AI rip. I admit to falling in this camp.
Am I off the mark here? I'd really like to hear from people who care about the long term who also let agents run relatively wild.
With the internet, and especially with the internet becoming accessible to anyone anywhere in the world in the late 2000s and early 2010s, that growth was more obvious to me. I don't see where this occurs with AI. I don't see room for "growth"; I see room for cutting. We were already connected before, and globalization seems to have peaked in that sense.
To give some context - I started developing a tactical RPG. I had an MVP prior to using Claude Code. I continued to work on the project, but lost motivation due to work burnout and prioritizing other hobbies.
I gave Claude Code a try to see whether it was any use. It helped more than I expected it to: even while dealing with burnout, it helped me produce something by building on the MVP I had developed prior to AI-assisted development.
The main issues I ran into were:
1) A lot of effort goes into reviewing the output. The main difference from peer review is that the feedback is quicker.
2) It throws out some absolutely wild solutions sometimes. It built on my existing architecture, so it was easier to catch issues. If I hadn't developed the architecture without AI assistance, things could have gone badly.
3) I only pay for the $20 Claude plan. Anything useful Claude produces for me requires it to consume a lot of tokens due to back-and-forth questions and asking Claude to dig into source files.
The most significant issue I ran into with Claude was when it suggested solutions I don't have the background to review. I don't know much about optimization, so I ran into issues with both rendering and the ECS (entity component system) library. Claude gave me recommendations, but I didn't know how to evaluate the code because I lack that experience.
Claude was good for things I know how to do but don't want to do. It's been helpful when I want to work on something without being motivated enough to put 100% (or even 70%) into it.
For things I don't know how to do (like game optimization), it's harmful.
AI's potential isn't defined by the potential of the current crop of transformers. However, many people seem to think otherwise, and this will be incredibly damaging for AI as a whole once transformer tech investment all but dries up.
In the case of AI coding, yes: AI does exceptionally well at search (something we have known for quite some time, and have a variety of ML solutions for).
Large codebases have search and understanding as top problems. Your ability to make horizontal changes degrades as teams scale, and most stability, performance, and quality changes are horizontal.
Ironically, I think it's possible that AI's effectiveness at broad search gives software engineers additional effectiveness, by being their eyes. Yes, I still review every Claude Code PR I submit, and yes, I typically take longer to create a Claude Code PR than a manual one. But I can be more satisfied that the parallel async search agents and massive grep commands are searching more locations, more quickly, and more thoroughly than I would.
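To make that concrete, here is a minimal sketch of the kind of fan-out I mean, written from scratch rather than taken from any agent's actual machinery; the root directory, file suffix, and patterns are illustrative assumptions:

```python
# Illustrative sketch: fan several regex searches out over a codebase at once.
import re
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def search_file(path: Path, pattern: re.Pattern) -> list[str]:
    """Return 'file:line: text' hits for one file; skip unreadable files."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    return [
        f"{path}:{i}: {line.strip()}"
        for i, line in enumerate(text.splitlines(), start=1)
        if pattern.search(line)
    ]

def parallel_search(root: str, patterns: list[str], suffix: str = ".py") -> list[str]:
    """Run every pattern against every file under `root` concurrently."""
    files = [p for p in Path(root).rglob(f"*{suffix}") if p.is_file()]
    compiled = [re.compile(p) for p in patterns]
    hits: list[str] = []
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(search_file, f, c) for f in files for c in compiled]
        for fut in futures:
            hits.extend(fut.result())
    return hits

if __name__ == "__main__":
    # e.g. finding every site touched by a horizontal change (names made up)
    for hit in parallel_search("src", [r"\bold_api_name\(", r"retry_timeout"]):
        print(hit)
```

That's the appeal: breadth and speed of search, with a human still reviewing whatever comes back.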
Yes, it probably is a bubble (overvalued). No, that doesn't mean it's going to go away. The market is simply overcorrecting as it determines how to price it. Which, net net, is a positive effect, as it encourages economic growth within a developing sector.
The bubble itself is also not the most important concern; the concern is rather that the bubble is in the one industry that's not in the red. More important to worry about are economic conditions outside of AI and tech, which are causing general instability and uncertainty rather than investor appetite. A market recalibrating on a developing industry is fine, as long as it's not your only export.
And before that:
"Grace Hopper: [I started to work on the] Mark I, second of July 1944. There was no so such thing as a programmer at that point. We had a code book for the machine and that was all. It listed the codes and what they did, and we had to work out all the beginning of programmingand writing programs and all the rest of it."
"Hopper: I was a mathematical officer. We did coding, we ran the computer, we did everything. We were coders. I wrote [programs for] both Mark I and Mark II."
http://archive.computerhistory.org/resources/text/Oral_Histo...
It's no wonder that the "AI optimists", unless very tendentious, try to focus more on "not needing to work because you'll get free stuff" rather than "you'll be able to exchange your labor for goods".
Do we know this? Smaller, more carefully curated training sets are proving to be valuable and gaining traction. It seems like the strategy of throwing huge amounts of data at LLMs is specific to companies that are attempting to dominate this space regardless of cost. It may turn out that more modest and better-optimized methodologies end up winning this race, much like WebVan flamed out taking huge amounts of investment money with it, yet Instacart now serves the same sector in a way that actually works robustly and profitably.
Now, that's not to say AI isn't useful or that we won't have AGI in the future. But this feels a lot like the AI winter. Valuations will crash, a bunch of players will disappear, but we'll keep using the tech for boring things, and eventually we'll have another breakthrough.
If it's not transformational, then this is a bubble and the market will right itself soon after, e.g. data centers being bought for cheap. LLMs will then exist as a useful but limited tool that becomes profitable with the lower capex.
If it is transformational then we don’t have the societal structure to responsibly incorporate such a shift.
The conservative guess is that it won't be transformational: the current applications of the tech are useful, but not in a way that justifies the capex, and some version of agents and chatbots will continue to be built out in the future, with a focus on efficiency. Smaller models that require less power to train and run inference on, and that are ubiquitous. Eventually many will run on-device.
I guess there's also another version of the future that's quasi-transformational. Instead of any massive breakthrough, there's a successful government coup or regulatory capture, and perfectly functioning normal stuff is then replaced with LLM-assisted or -augmented versions everywhere. This version is like the emergence of the automobile, in the sense that the car fundamentally altered city planning and where and how people live, but often at the expense of public transportation that, in hindsight, may be sorely missed.
This statement is redundant; the article screams with the author's ignorance.
This right here is the pinpoint root cause of the speculative bubble. Although many people believe this to be true, it simply isn't.
"Yes, peasant associations are necessary, but they are going rather too far."
Is it a bubble? Maybe it’s just the landlords up to the old tricks again.
But this is only if the trend line keeps going, which seems likely given the last couple of years.
I think people are making the mistake of assuming that because AI is a bubble, AI is complete bullshit. Remember: the internet was a bubble. It ended up changing the world.
AI looks a lot more like the internet. Some companies will fail, valuations will swing, but the underlying technology isn't going anywhere. In fact, many of the AI firms that will end up mattering are probably still undervalued, because we're early in what will likely be another decade-long technology expansion.
If you're managing a portfolio that needs quick returns and can't tolerate a correction, then sure, it probably feels like a bubble, because at some point people will take profits and the market will reset.
But if you're an entrepreneur or a long-term builder, that framing is almost irrelevant. This is where the next wave of value gets created. It's never smooth and it's never easy, but the long-term opportunity is enormous.
The debate is more about what happens from here and how that bubble deflates: gradually and in a controlled way, where weaker companies shut down and the strong thrive, or in a massive implosion that wipes most everyone in the sector out in a hard reset.
I do think there's something quite ironic in the fact that one of the frequent criticisms of LLMs is that they can't really say "I don't know". Yet if a person says that, they get criticised. No surprise that our tools are the same.
Yes.
Off-topic: how many get overpaid for absolute bullshit?
Remember 2019-2021, when y'all were sure the Fed would be dissolved and the dollar would crash, and everyone would be poor if they didn't have a Bored Ape and an 80% bitcoin portfolio?
Relax.
AI is a tool. Just ride the wave. It’s gonna crash some people out. It’s entertaining watching them. You’re not being crashed out, right? Ride the wave dawg.