I tried leaning in. I really tried. I'm not a web developer or game developer (more robotics, embedded systems). I tried vibe coding web apps and games. They were pretty boring. I got frustrated that I couldn't change little things. I remember getting frustrated that my game character kept getting stuck on imaginary walls; I kept asking Cursor to fix it and it just made more and more of a mess. I remember making a simple front-end + backend app with a database to analyze thousands of pull request comments, and it got massively slow and I didn't know why. Cursor wasn't very helpful in fixing it. I felt dumber after the whole process.
The next time I made a web app I just taught myself Flask and some basic JS and I found myself moving way more quickly. Not in the initial development, but later on when I had to tweak things.
The AI helped me a ton with looking things up: documentation, error messages, etc. It's essentially a supercharged Google search and Stack Overflow replacement, but I did not find it useful letting it take the wheel.
If you can reduce a problem to a point where it can be solved by simple code you can get the rest of the solution very quickly.
Reducing a problem to a point where it can be solved with simple code takes a lot of skill and experience and is generally still quite a time-consuming process.
> I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool. This would take me, or many developers I know and respect, days to write by hand.
I have no problem believing that Claude generated 300 passing tests. I have a very hard time believing those tests were all well thought out, concise, actually testing the desired behavior while communicating to the next person or agent how the system under test is supposed to work. I'd give very good odds at least some of those tests are subtly testing themselves (e.g. mocking a function, calling said function, then asserting the mock was called). Many of them are probably also testing implementation details that were never intended to be part of the contract.
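To make the "testing themselves" failure mode concrete, here is a minimal sketch (hypothetical names, not taken from the comment above) of a test that passes no matter what the real billing service does, because every assertion only checks the mock it just configured:

    from unittest.mock import MagicMock

    def notify_customer(billing, customer_id):
        # Hypothetical code under test: just forwards to the billing service.
        return billing.send_invoice(customer_id)

    def test_notify_customer_sends_invoice():
        billing = MagicMock()                     # the real collaborator is replaced entirely
        billing.send_invoice.return_value = True  # the test stubs the return value...

        result = notify_customer(billing, customer_id=42)

        billing.send_invoice.assert_called_once_with(42)  # ...then asserts the call it forced
        assert result is True                             # ...and the value it stubbed

The test is green forever, yet it says nothing about whether an invoice is ever actually sent.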
I'm not anti-AI, I use it regularly, but all of these articles about how crazy productive it is skip over the crazy amount of supervision it needs. Yes, it can spit out code fast, but unless you're prepared to spend a significant chunk of that 'saved' time CAREFULLY (more carefully than with a human) reviewing code, you've accepted a big drop in quality.
From where I sit, right now, this does not seem to be the case.
This is as if writing down the code is not the biggest problem, or the biggest time sink, of building software.
I think the 90/90 rule comes into play. We all know Tom Cargill's quote (even if we’ve never seen it attributed):
The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!” And it ‘is’ a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.
So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.
If your job is pumping out low-effort websites that are essentially marketing tools for small businesses, it must feel like magic. I think the more magical it feels for your use case, the less likely your use case will be earning you a living 2 years from now.
Far better off for who? People constantly dismiss spreadsheets, but in many cases, they are more powerful, more easily used by the people who have the domain knowledge required to properly implement calculations or workflow, and are more or less universally accessible.
The people up in the clouds think they have a full understanding of what the software is supposed to be, that they "own" the entire intent and specification in a few ambiguously worded requirements and some loose constraints and, being generous, a very incomplete understanding of the system dependencies. They see software teams as an expensive cost center, not as the true source of all their wealth and power.
The art of turning that into an actual software product is what good software teams do; I haven't yet seen anything that can automate that process away or even help all that much.
Maybe we should collect all of these predictions, then go back in 5-10 years and see if anyone was actually right.
The company had already tried to push 2 poor data analysts who kind of knew Python into the role of vibe coding a Python desktop application that they would then distribute to users. In the best case scenario, these people would have vibe coded an application where the state was held in the UI, with no concept of architectural separation and no prospect of understanding what the code was doing a couple months from inception (except through the lens of AI sycophancy), all packaged as a desktop application that would generate Excel spreadsheets they would then send to each other via email (for some reason, this is what they wanted, probably because it is what they know).
You can't blame the business for this, because there are no technical people in these orgs. They were very smart people in this case, doing high-end consultancy work themselves, but they are not technical. If I tried to do vibe chemistry, I'm sure it would be equally disastrous.
The only thing vibe coding unlocks for these orgs by themselves is to run headfirst into an application which does horrendous things with customer data. It doesn't free up time for me as the experienced dev to bring the cost down, because again, there is so much work needed to bring these orgs to the point where they can actually run and own an internal piece of software that I'm not doing much coding anyway.
I hear vague suggestions like "get better at the business domain" and other things like that. I'm not discounting any of that, but what does this actually mean or look like in your day-to-day life? I'm working at a mid-sized company right now. I use Cursor and some other tools, but I can't help but wonder if I'm still falling behind or doing something wrong.
Does anybody have any thoughts or suggestions on this? The landscape and horizon just seems so foggy to me right now.
It’s often hard to ground how “good” blog writers are, but tidbits like this make it easy to disregard the author’s opinions. I’ve worked in many codebases where the test writers share the author's sentiment. They are awful and the tests are at best useless and often harmful.
Getting to this point in your career without understanding how to write effective tests is a major red flag.
Who is going to patch all bugs, edge cases and security vulnerabilities?
My wife works at Shutterstock, first as a SWE, now as a product manager. Most of their tasks involve small changes in 5 different systems. Sometimes in places like Salesforce. A simple ask can be profoundly complicated.
AI has certainly made grokking code and making changes easier. But the real cost of building has not been reduced 90%. Not even close.
I'm finding this stuff, when given proper guidance, can reproduce experiments I've run incredibly fast. Weeks of typing done in minutes of talking to Claude Code.
In the working world, a lot of the time what matters is getting results, not writing 'perfect' code the way software engineers would like to.
Concerns:
- security bugs
- business logic errors that seem correct but are wrong
as long as you have domain experts, I suspect these will gradually go away. hopefully LLMs can be trained not to do insecure things in code.
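As a hypothetical illustration of the second concern (the names and the business rule below are invented, not from the comment): code that reads as obviously correct can still encode the wrong rule, and no amount of generic security hardening will catch it.

    from datetime import date

    def days_of_coverage(start: date, end: date) -> int:
        # Looks reasonable, but if the (assumed) business rule is that a policy
        # covers both the start and end dates, this is off by one.
        return (end - start).days

    # Jan 1 through Jan 31 should be 31 covered days under that rule; this prints 30.
    print(days_of_coverage(date(2025, 1, 1), date(2025, 1, 31)))

Only someone who knows the domain rule can spot that the plausible-looking code is wrong.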
I'm not sure about this. The tests I've gotten out of it in a few hours are the kind I'd approve if another dev sent them, but they haven't really ended up finding meaningful issues.
There is no value-add to hiring software engineers to build basic apps. That's what AI will be good for: repeating what has already been written and published to the web somewhere. The commoditized software that we shouldn't have been paying to write to begin with.
But AI won't help you with all the rest of the cost. The maintenance, which is 80% of your cost anyway. The infrastructure, monitoring, logging, metrics, VCS hosting, security scanning, QA, product manager, designer, data scientist, sales/marketing, customer support. All of that is part of the cost of building and running the software. The software engineers who churn out the initial app are a smaller cost than they seem. And we're still gonna need skilled engineers to use the AI, because AI is an idiot savant.
Personally I think 50% cost reduction in human engineers is the best you can expect. That's not nothing, but that's probably like a 10% savings on total revenue expenditure.
> AI Agents however in my mind massively reduce...
Nevermind. It's a vibe 90%.
The cost of developing production-grade software that you want people to rely on and pay for has not gone down that much. The "weak" link is still the human.
Debugging complex production issues needs intimate knowledge of the code. Not gonna happen in the next 3-4 years at least.
Was there an explosion of useful features in any software product you use? A jump in quality? Anything tangible an end user can see?..
This is simply an unimaginable level of productivity: in one day on my phone, I can essentially build and replace expensive software. Unreal days we are living in.
Product doesn't understand the product, because if it was easy to understand then someone else would have solved the problem already and we wouldn't have jobs. This means you need to iterate and discuss and figure it out just like always. The iterations can be bolder, bigger, etc. and maybe a bit faster, but ultimately software development doesn't scale linearly, so a 10x improvement in -individual- capability doesn't scale to a 10x improvement in -organizational- capability.
Let me put it another way. If your problem was so simple you could write a 200 word prompt to fully articulate it then you probably don't have much of a moat and aren't providing enough value to be competitive.
TDD as defined by Kent Beck (https://tidyfirst.substack.com/p/canon-tdd ) doesn't belong in that list. Beck's TDD is a way to order work you'd do anyway: slice the requirement, automate checks to confirm behavior and catch regressions, and refactor to keep the code healthy. It's not a bloated workflow, and it generalizes well to practices like property-based testing and design-by-contract.
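For readers who haven't read the Canon TDD post, a minimal sketch of one cycle (a hypothetical example, not Beck's own):

    # 1. Pick the next behavior from your test list and write a failing test for it.
    def test_empty_cart_totals_to_zero():
        assert cart_total([]) == 0

    # 2. Write just enough code to make it pass.
    def cart_total(items):
        return sum(price * qty for price, qty in items)

    # 3. Refactor with the test as a safety net, then pick the next item on the list.

The point is the ordering of work you would do anyway, not an extra process bolted on top.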
The advent of the PC, and the appearance of Turbo Pascal, Visual Basic, and spreadsheets that could be automated made it possible for almost anyone to write useful applications.
If it gets cheaper to write code, we'll just find more uses for it.
Oh, false dilemma :/
And what do you have then? 300 tests that test the behavior that's exposed by the implementations of the api. Are they useful? Probably some are, probably some are not. The ones that are not will just be clutter and maintenance overhead. Plus, there will be lots of use-cases for which you need to look a little deeper than just the api implementation, which are now not covered. And those kind of tests, tests that test real business use cases, are by far the most useful ones if you want to catch regressions.
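A rough sketch of that distinction (hypothetical names): the first test locks in how the code happens to work today, the second pins down what the business actually promised, and only the second is worth much for catching regressions.

    class PricingService:
        # Hypothetical system under test.
        _discount_table = {"GOLD": 0.2, "SILVER": 0.1}

        def price(self, customer_tier, list_price):
            return list_price * (1 - self._discount_table.get(customer_tier, 0.0))

    # Implementation-detail test: breaks on any refactor of the internal table.
    def test_discount_table_has_gold_rate():
        assert PricingService._discount_table["GOLD"] == 0.2

    # Business-use-case test: survives refactoring, fails only when the promise is broken.
    def test_gold_customers_pay_20_percent_less():
        assert PricingService().price(customer_tier="GOLD", list_price=100) == 80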
So if your goal is to display some nice test coverage metrics on SonarQube or whatever, making your CTO happy, yes AI will help you enormously. But if your goal is to speed up development of useful test cases, less so. You will still gain from AI, but nowhere near 90%.
I refer here to my experience as a solo developer.
With AI assistance I don't spend less hours coding, but more.
There is the thrill of shipping relevant features that had been sleeping in my drawers for ages, and shipping them quicker. Each hour of coding simply delivers 8x more features and bug fixes.
Also, whereas I spent a few dozen dollars per month on server costs, I now also spend an equivalent amount on subscriptions and API calls to LLM services for this AI assisted coding. Worth every penny.
So while productivity increased manifold, absolute cost actually increased as well for me.
Just thinking from the finance world: in 2010, no one on the desk knew how to program and no one knew SQL, and even if they did, the institutional knowledge needed to use the dev systems was not worth their time. So you had multiple layers of meetings and managers to communicate programs. As a result, anything small just didn't get done, and everything took time.
By 2020 most junior guys knew enough Python to pick up some of the small stuff.
In 2025, AI tools are good enough that they're picking up things that would legitimately have taken weeks in 2010 (because of the processes around them, not the difficulty) and doing them in hours. A task that takes an hour to do used to take multiple meetings to properly outline to someone without finance knowledge, and now they can do it themselves in less time than it took to describe it to a fresh CS grad.
For the tasks that junior traders/strats can now do themselves, which would have taken weeks or months to get into prod through an IT department, I'm seeing costs drop 90% every day right now. Which is good: it lets tech focus on tech and not on learning the minutiae of options trading in some random country.
The tale goes like this: one day visual art got commoditised to the point where any given visual artwork could be obtained digitally for virtually free. This has been the case for centuries. The aural arts (see records of spoken poetry, podcasts, music) have been commoditised for a long time. Full commoditisation might never happen (i.e. you can still work in the field), but it is undeniable that it has had a massive impact on the respective fields. Getting a Picasso-like painting might not quite be possible yet, but we are getting close; same with music.
The same is coming for devs.
Devving is still far away, but it doesn't really take that much to produce a significant impact on the field. The percentage of devs who will be able to get late-2010s salaries will gradually diminish over time. This is what early-stage commoditisation looks like.
As an example I wanted a plugin for visual studio. In the past I would have spent hours on it or just not bothered but I used Claude code to write it, it isn’t beautiful or interesting code, it lacks tests but it works and saves me time. It isn’t worth anything, won’t ever be deployed into production, I’ll likely share it but won’t try to monetise it, it is boring ugly code but more than good enough for its purpose.
Writing little utility apps has never been simpler and these are probably 90% cheaper
So what does this mean in practice? For people working on proprietary systems (where the cost will never go down): the code is not on GitHub, maybe it's hosted on an internal VCS, Bitbucket etc., and the agents were never trained on that code. Yeah, they might help with docs (but are they using the latest docs?).
For others: the agents spit out bad code, make assumptions that don't hold, and call APIs that don't exist or have been deprecated.
Each of those needs an experienced builder who has 1. technical know-how and 2. domain expertise. So has the cost of experienced builders gone down? I don't think so. I think it has gone up.
What people are vibecoding out there is mostly tools/apps that deal in closed systems (never really interacting with the outside world), scripts where AI can infer from what was done before, etc. But are these people building anything new?
I have also noticed there's a huge conflation of cost and complexity. ZIRP drove people to build software on very complex abstractions, e.g. Kubernetes, Next.js, microservices, hence people thought they needed huge armies of engineers. However, we also know the inverse is true: most software can be built by teams of 1-3 people. We have countless proofs of this.
So people think the way to reduce cost is to use AI agents instead of addressing the problem head-on: build software in a simpler manner. Will AI help? Yeah, but not to the extent of what is being sold or written about daily.
I've had a couple of contracts now, where I get to fix everything for teams who vibe-coded their infrastructure. I'm not saying it isn't a speed-up for teams who already have a wealth of infra experience - but it's not a substitute for the years of infra experience such a team already has.
I haven’t experienced this at all. They can do okay with greenfield services (as the author mentioned). However it’s often not “extremely good”. It’s usually “passable” at best. It doesn’t save me any time either. I have to read and audit every line and change it anyway.
What was the value-add of those tests? When I tried this, AI would often rewrite the code to make it match its poorly written test.
I'm skeptical of the extent to which people publishing articles like this use AI to build non-trivial software, and by non-trivial I mean _imperfect_ codebases that have existed for a few years, battle-tested, with scars from hotfixes to deal with fires and compromises to handle weird edge cases/workarounds, and especially a codebase that many developers have contributed to over time.
Just this morning I was using Gemini 3 Pro on some trivial feature. I asked it how to go about solving an issue and it completely hallucinated a solution, suggesting a non-existent function that was supposedly exposed by a library. This situation has been the norm in my experience for years now and, while it has improved over time, it's still a very, very common occurrence. If it can't get these use cases down to an acceptable degree of success, I just don't see how much I can trust it to take the reins and do it all with an agentic approach.
And this is just a pure usability perspective. If we consider the economics aspect, none of the AI services are profitable, they are all heavily subsidized by investor cash. Is it sustainable long term? Today it seems as if there is an infinite amount of cash but my bet is that this will give in before the cost of building software drops by 90%.
I haven't written production code for the last 8 years, but I have about 17 years of prior development experience (ranging from C++, full stack, .NET, PHP and a bunch of other stuff).
I have used AI at a personal level and know the basics. I've used Claude/GitHub to help me fix and write some pieces of code in languages I wasn't familiar with. But it seems like people are building and deploying large real-world projects in short-"er" amounts of time. An old colleague of mine whom I trust mentioned his startup is developing code 3x faster than we used to develop software.
Is there resource that explains the current best practices (presumably it's all new)? Where do I even start?
But the hard work always was the conceptual thinking? At least at and beyond the Senior level, for me it was always the thinking that's the hard work, not converting the thoughts into code.
"I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool"
Did you catch what the author didn't mention? Are the tests any good? Are they even tests? I'm playing with Opus now (best entertainment for a coder), it is excellent at writing fake code and fabricating results. It wrote me a test that validates an extremely complex utility, and the test passed!
What was the test? Call utility with invalid parameters and check that there is an error.
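Roughly what that looks like (a hypothetical reconstruction, not the commenter's actual code): the test passes, but the "extremely complex" behavior is never exercised at all.

    import pytest

    def complex_utility(config):
        # Stand-in for the complex utility being "validated".
        if not isinstance(config, dict):
            raise ValueError("config must be a dict")
        ...  # the actual complicated behavior, untouched by the test below

    def test_complex_utility():
        with pytest.raises(ValueError):
            complex_utility(None)  # only the trivial error path is ever checked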
More sophisticated tools mean more refined products.
If an easier and cheaper method for working carbon fiber becomes broadly available, it won't mean you get less money; it means you'll now be cramming carbon fiber in the silverware, in the shoes, in baby strollers, EVERYWHERE. The cost of a carbon fiber bike will drop 90%, but teams will be doing a LOT more.
You could say the cost per line of code has dropped 90%, but the number of lines of code written will 100x.
I believe Betteridge's law of headlines [1] applies here:
No.
1. https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
- More engineers moving from employment to indie development.
- Less abandoned open source software.
- Indie developers, small teams, and open source software developers will be more able to catch up and better compete with tech giants.
I believe AI agentic coders will threaten tech giants more than they, collectively, threaten software engineers.
Cars won't be cheap just because bumper prices fell 90%.
But....
Obviously AI is coming for the whole car, so I will operate on the assumption that 90% is coming. We will eventually be agentic orchestra conductors.
But you need: a staff level engineer to guide it, great standardization and testing best practices. And yes in that situation you can go 10-50x faster. Many teams/products are not in that environment though.
I ask it to do something, if it works, I keep it without looking at it. If it doesn't I look, recoil in horror, delete and rewrite myself.
It takes more time but seems to work for me.
The 90% cost reduction isn’t just about efficiency — it’s about access. If the barrier to shipping software drops this dramatically, we’re likely standing at the edge of a new wave of innovation driven not just by engineers, but by domain specialists who previously couldn’t justify the investment.
The most interesting takeaway here is that technical mastery may become less of the moat, while contextual and domain intelligence becomes the real differentiator. That flips the traditional power structure in tech.
2026 might really be the year where “build fast, throw away, rebuild smarter” becomes normal instead of reckless.
Curious to see how fast organizations adapt — and who gets left behind simply because they assumed disruption would arrive slower.
An exception might be building something that is well specified in advance, maybe because it's a direct copy of existing software.
Only a minuscule part of the work is green-field development. Everything else is managing a mess.
Opus 4.5 in particular has been a profound shift. I’m not sure how software dev as a career survives this. I have nearly 0 reason to hire a developer for my company because I just write a spec and Claude does it in one shot.
It’s honestly scary, and I hope my company doesn’t fail because as a developer I’m fucked. But… statistically my business will fail.
I think in a few years there will only be a handful of software companies—the ones who already have control of distribution. Products can be cloned in a few weeks now; not long until it’s a few minutes. I used to see a new competitor once every six months. Now I see a new competitor every few hours.
I've no idea what's going on in the enterprise space, but in the small 1-10 employee space, absolutely
And obviously the cost of not upskilling in intricate technical details as much as before (aka staying at the high level perspective) will have to be paid at some point
But currently reproducibility, reliability, correctness, and consistency are lacking.
There's also meaningful domain variance.
The biggest bottleneck I have seen is converting the requirements into code fast enough to prove to the customer that they didn't give us the right/sufficient requirements. Up until recently, you had to avoid spending time on code if you thought the requirements were bad. Throwing away 2+ weeks of work on ambiguity is a terrible time.
Today, you could hypothetically get lucky on a single prompt and be ~99% of the way there in one shot. Even if that other 1% sucks to clean up, imagine if it was enough to get the final polished requirements out of the customer. You could crap out an 80% prototype in the time it takes you to complete one daily standup call. Is the fact that it's only 80% there bad? I don't think so in this context. Handing a customer something that almost works is much more productive than fucking around with design documents and ensuring requirements are perfectly polished to developer preferences. A slightly wrong thing gets you the exact answer a lot faster than nothing at all.
And yet, the conclusion seems to be as if the answer is yes?
Until AI can work organizationally, as opposed to individually, it'll necessarily be restricted in its ability to produce gains beyond relatively marginal improvements (saved 20 hours of developer time on unit tests) for a project that took X weeks/months/years to work its way through Y number of people.
So sure, simple projects, simple asks, unit tests, projects handled by small teams of close knit coworkers who know the system in and out and already have the experience to differentiate between good code and bad? I could see that being reduced by 90%.
But, it doesn't seem to have done much for organizational efficiency here at BigCo and unit tests are pretty much the very tip of a project's iceberg here. I know a lot of people are using the AI agents, and I know a lot of people who aren't, and I worry for the younger engineers who I'm not sure have the chops to distinguish between good, bad, and irrelevant and thus leave in clearly extraneous code, and paragraphs in their documents. And as for the senior engineers with the chops, they seem to do okay with it although I can certainly tell you they're not doing ten times more than they were four years ago.
I kinda rambled at the end there, all that to say... organizational efficiency is the bug to solve.
(It's very difficult, I believe the 2D interfaces we've had for the last 40 years or whatever are not truly meeting the needs of the vast cathedrals of code we're working in, same thing for our organizations, our code reviews, everything man)
The gap between a demo and a product is still enormous.
As better engineers and better designers get more leverage, with less nuisance in the form of meetings and other people, they will be able to build better software with a level of taste and sophistication that wouldn't make sense if you had to hand-type everything.
I closed a comment from ~2.5y ago (https://news.ycombinator.com/item?id=36594800) with this sentence: "I'm not sure that incorporating LLMs into programming is (yet) not just an infinite generator of messes for humans to clean up." My experience with it is convincing me that that's just what it is. When the bills come due, the VC money dries up, and the AI providers start jacking up their prices... there's probably going to be a boom market for humans to clean up AI messes.
So yes, the cost of certain tasks may drop by 90% (though I think that's a high number still), certainly the cost of developing software overall has not dropped by 90%.
I might be able to whip up a script in 30 seconds instead of 30 minutes, but I still have to think of whether I need the script, what exactly it should do, what am I trying to build and how and why, how does it fit with all the requirements, etc. That part isn't being reduced by 90%.
Where I am, 3 years old is greenfield, and old and large is 20 years old with 8 million lines of nasty C++. I’ll have to wait a bit more, I think…
The thing is, writing code is just the first step on building software. You are reviewing what your AI generates, right? You will still be held responsible when it doesn't work. And you will have to maintain and support that code. That is, in my mind, also "building software".
This reminds me of the (amazing) Vim experts that zip around a codebase with their arcane keystrokes. I'm a main Vim user and I can't mimic a fraction of their power. It's mesmerizing to watch them edit files; it's as if their thoughts get translated into words on the screen.
I also know that editing is just the first step. If you skip the rest, you are being misled by an industry with vested interests.
AI has also probably saved me 100 hours of repetitive work at this point, and it has completely eliminated the need to rely on other people for time-consuming configuration tasks and back-and-forth which used to stall work for me, since I am the kind of person who will work for 20 hours until something is finished without losing that much in productivity.
0.1x engineering cost
I'd love to see someone do this, or a similar task, live on stream. I always feel like an idiot when I read things like this because despite using Claude Code a lot I've never been able to get anything of that magnitude out of it that wasn't slop/completely unusable, to the point where I started to question if I hadn't been faster writing everything by hand.
Claiming that software is now 90% cheaper feels absurd to me and I'd love to understand better where this completely different worldview comes from. Am I using the tools incorrectly? Different domains/languages/ecosystems?
I'm sure that AI tools will be here to stay and will become more integrated and better. I wonder what the final result will be, -20% productivity as in the METR study? +20%? Anything like 90% is the kind of sensationalism reserved for r/WallStreetBets
"Good AI developers" are a mystery being (not really, but for corporate they are). Right now, companies are trying to measure them to understand what makes them tick.
Once that is measured, I can assure you that the next step is trying to control their output, which will inevitably kill their productivity.
> This then allows developers who really master this technology to be hugely effective at solving business problems.
See what I mean?
"If only we could make them work to solve our problems..."
You can! But that implies additional coordination overhead, which means they'll not be as productive as they were.
> Your job is going to change
My job changes all the time. Developers are ready for this. They were born of change, molded by it. You know what hasn't caught up with the changes?
So the claim of massive cost reduction is just something the author made up (or hallucinated to use the lingo of the field)?
Seems like Betteridge's law applies here.
If you're replacing spreadsheets with a single-purpose web UI with proper access control and concurrent editing that doesn't need Sharepoint or Google Workspaces, fine, but if you're telling me that's going to revolutionize the entire industry and economy and justify trillions of dollars in new data centers, I don't think so. I think you need to actually compete with Sharepoint and Google Workspaces. Supposedly, Google and Microsoft claim to be using LLMs internally more than ever, but they're publicly traded companies. If it's having some huge impact, surely we'll see their margins skyrocket when they have no more labor costs, right?
No, no, and no.
For the past few days I asked it to build a mu-recursive Ackermann function in Emacs Lisp (built on the primitive-recursive functions/operators, plus an extra operator - minimization). I said that the prime detector function it already built should be able to use the same functions/operators, and to rewrite code if necessary.
So far it has been unable to do this. If I thought it could but was stumbling over Emacs Lisp I might ask it to try in Scheme or Common Lisp or some other language. It's possible I'll get it to work in the time I have allotted from my daily free tier, but I have had no success so far. I am also starting with inputs to the Ackermann function of 0,0 - 0,1 - 1,0 - 1,1 to not overburden the system but it can't even handle 0, 0. Also it tries to redefine the Emacs Lisp keyword "and", which Emacs hiccups on.
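For readers unfamiliar with the terminology, here is a rough Python sketch (not the Emacs Lisp the commenter asked for) of the one extra operator in question, unbounded minimization (mu). The hard part the models keep failing at is building Ackermann out of primitive recursion plus this operator, which typically requires encoding computation histories and is far more involved than the operator itself.

    def mu(predicate):
        # Unbounded minimization: the least n >= 0 with predicate(n) true.
        # Loops forever if no such n exists, which is exactly what makes
        # mu-recursion strictly more powerful than primitive recursion.
        n = 0
        while not predicate(n):
            n += 1
        return n

    # Example: integer square root as "the least n such that (n + 1)^2 > x".
    def isqrt(x):
        return mu(lambda n: (n + 1) * (n + 1) > x)

    print(isqrt(10))  # 3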
A year ago LLMs were stumbling over Leetcode and Project Euler functions I was asking it to make. They seem to have gotten a little better, and I'm impressed Gemini 3 can handle, with help, primitive recursive functions in Emacs Lisp. Doesn't seem to be able to handle mu-recursive functions with minimization yet though. The trivial, toy implementations of these things. Also as I said, it tried to redefine "and" as well, which Emacs Lisp fell over on.
So it's a helpful helper and tool, but definitely not ready to hand things over to. As the saying goes, the first 90% of the code takes 90% of the time, and the last 10% of the code takes the other 90% of the time. Or the other saying - it's harder to find bugs than write code, so if you're coding at peak mental capacity, finding bugs becomes impossible. It does have its uses though, and has been getting better.
However, the cost of software maintenance went up by 1000%. Let's hope you don't need to ever add a new business rule or user interface to your vibe coded software.
It is throwaway software at best.
What AI did was to move the goalpost even further. Now you need to write exceptional software.
Cost is a funny misnomer. The best code that's written is the code that doesn't need to be written, and doesn't have to be supported.
Being able to build different approaches to solving a problem quickly may turn out to be a lot more helpful than saving 90%.
Let's say it is complicated. But what is the better alternative when dealing with large software? To what point we can simplify it and not lose anything important?
Can’t wait to debug all that stuff.
If you can build it in a weekend so can I. So you're going to have to figure out bigger things to build.
Lots of applications have a simple structure of collecting and operating data with fairly well documented business logic tying everything together. Coding outside of that is going to be more tricky.
And if agentic coding is so great then why are there still so many awful spreadsheets that can't compete with Excel? Something isn't adding up quite as well as some seem to expect.
What I see are salaries stagnating and opportunity for new niche roles or roles being redefined to have more technical responsibility. Is this not the future we all expected before AI hype anyway? People need to relax and refocus on what matters.
How good are tests written by AI, really? The junk "coverage" unit tests sure, but well thought out integration tests? No way. Testing code is difficult, some AI slop isn't going to make that easier because someone has to know the code and the infrastructure it is going in to and reason about all of it.
I've only been working with AI for a couple of months, but IMHO it's over. The Internet Age which ran 30 years from roughly 1995-2025 has ended and we've entered the AI Age (maybe the last age).
I know people with little programming experience who have already passed me in productivity, and I've been doing this since the 80s. And that trend is only going to accelerate and intensify.
The main point that people are having a hard time seeing, probably due to denial, is that once problem solving is solved at any level with AI, then it's solved at all levels. We're lost in the details of LLMs, NNs, etc, but not seeing the big picture. That if AI can work through a todo list, then it can write a todo list. It can check if a todo list is done. It can work recursively at any level of the problem solving hierarchy and in parallel. It can come up with new ideas creatively with stable diffusion. It can learn and it can teach. And most importantly, it can evolve.
Based on the context I have before me, I predict that at the end of 2026 (coinciding with the election) America and probably the world will enter a massive recession, likely bigger than the Housing Bubble popping. Definitely bigger than the Dot Bomb. Where too many bad decisions compounded for too many decades converge to throw away most of the quality of life gains that humanity has made since WWII, forcing us to start over. I'll just call it the Great Dumbpression.
If something like UBI is the eventual goal for humankind, or soft versions of that such as democratic socialism, it's on the other side of a bottleneck. One where 1000 billionaires and a few trillionaires effectively own the world, while everyone else scratches out a subsistence income under neofeudalism. One where as much food gets thrown away as what the world consumes, and a billion people go hungry. One where some people have more than they could use in countless lifetimes, including the option to cheat death, while everyone else faces their own mortality.
"AI was the answer to Earth's problems" could be the opening line of a novel. But I've heard this story too many times. In those stories, the next 10 years don't go as planned. Once we enter the Singularity and the rate of technological progress goes exponential, it becomes impossible to predict the future. Meaning that a lot of fringe and unthinkable timelines become highly likely. It's basically the Great Filter in the Drake equation and Fermi paradox.
This is a little hard for me to come to terms with after a lifetime of little or no progress in the areas of tech that I care about. I remember in the late 90s when people were talking about AI and couldn't find a use for it, so it had no funding. The best they could come up with was predicting the stock market, auditing, genetics, stuff like that. Who knew that AI would take off because of self-help, adult material and parody? But I guess we should have known. Every other form of information technology followed those trends.
Because of that lack of real tech as labor-saving devices to help us get real work done, there's been an explosion of phantom tech that increases our burden through distraction and makes our work/life balance even less healthy, as does underemployment. This is why AI will inevitably be recruited to demand an increase in productivity from us for the same income, not decrease our share of the workload.
What keeps me going is that I've always been wrong about the future. Maybe one of those timelines sees a great democratization of tech, where even the poorest people have access to free problem-solving tech that allows them to build assistants that increase their leverage enough to escape poverty without money. In effect making (late-stage) capitalism irrelevant.
If the rate of increasing equity is faster than the rate of increasing excess, then we have a small window of time to catch up before we enter a Long Now of suffering, where wealth inequality approaches an asymptote making life performative, pageantry for the masses who must please an emperor with no clothes.
In a recent interview with Mel Robbins in episode 715 of Real Time, Bill Maher said "my book would be called: It's Not Gonna Be That" about the future not being what we think it is. I can't find a video, but he describes it starting around the 19:00 mark:
https://podcasts.musixmatch.com/podcast/real-time-with-bill-...
Our best hope for the future is that we're wrong about it.
I should have stopped reading here. People who think that the time it takes to write some code is the only metric that matters are only marginally better than people who rank employees by lines of code.