I believe vibe coding has always existed. I've known people at every company who add copious null checks rather than understanding things and fixing them properly. All we see now is copious null checks at scale. On the other hand, I've also seen excellent engineering amplified and features built by experts in days which would have taken weeks.
There are cases where a unit test or a hundred aren’t sufficient to demonstrate a piece of code is correct. Most software developers don’t seem to know what is sufficient. Those heavily using vibe coding even get the machine to write their tests.
Then you get to systems design. What global safety and temporal invariants are necessary to ensure the design is correct? Most developers can’t do more than draw boxes and arrows and cite maxims and “best practices” in their reasoning.
Plus you have the Sussman effect: software is often more like a natural science than engineering. There are so many dependencies and layers involved that you spend more time making observations about behaviour than designing for correct behaviours.
There could be useful cases for using GenAI as a tool in some process for creating software systems… but I don’t think we should be taking off our thinking caps and letting these tools drive the entire process. They can’t tell you what to specify or what correct means.
And I think these people are benefiting from it the most: people with expertise, who know their way around, who knew what to build and how, but did not want to do the grunt work.
As a software engineer, I'd love if the industry had an actual breakthrough, if we found a way to make the hard parts easier and prevent software projects from devolving into balls of chaos and complexity.
But not if the only reward for this would be to be laid off.
So, once again, the old question: If reducing jobs is the only goal, but people are also expected to have jobs to be able to pay for food and housing, what is the end goal here? What is the vision that those companies are trying to realize?
It only means job security for people with actual experience.
Sure, 'writing code' is often not the difficult part, but when you have time constraints, 'writing code' becomes a limiting factor. And none of us have infinite time on our hands.
So AI not only enables things you just could not afford to do in the past, it also lets you spend more time on 'engineering', or even try multiple approaches, which would have been impossible before.
AI is an amplifier of existing behavior.
You can't satisfy every single paranoia, eventually you have to deem a risk acceptable and ship it. Which experiments you do run depends on what can be done in what limited time you have. Now that I can bootstrap a for-this-feature test harness in a day instead of a week, I'm catching much subtler bugs.
It's still on you to be a good engineer, and if you're careful, AI really helps with that.
I was hopeful that the title was written like LLM-output ironically, and dismayed to find the whole blog post is annoying LLM output.
Technology was never an equaliser. It just divides more, and yes, ultimately some developers will get paid a lot more because their skills will be in more demand, while other developers will be forced to seek other opportunities.
I feel I become more like a Product Manager than a Software Engineer when I'm constantly reviewing AI code written to satisfy my needs.
And the benefits provided by AI are too good to ignore. It lets you prototype nearly anything in a short time, which is superb. Like any tool, in the right hands it can be a game-changer.
The model drifts because nothing structurally prevents it from drifting. Telling it "don't touch X" is negotiating behavior with a probabilistic system — it works until it doesn't. What actually worked: separating the workflow into phases where certain actions literally aren't available. Design phase? Read and propose only. Implementation phase? Edit, but only files in scope.
Your security example is even more telling — the model folding under minimal pushback isn't a knowledge gap, it's a sycophancy gradient. No amount of system prompting fixes that. You need the workflow to not ask the model for a judgment call it can't be trusted to hold.
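The phase separation described above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual harness: the names (`ToolGate`, `edit_file`, and so on) are made up, standing in for whatever tool-calling layer is really in use. The point is structural: out-of-phase tools are not refused by the model, they simply don't exist for it.

```python
from enum import Enum, auto

class Phase(Enum):
    DESIGN = auto()          # read and propose only
    IMPLEMENTATION = auto()  # edits allowed, but only to in-scope files

# Which tools are even exposed to the model in each phase.
ALLOWED_TOOLS = {
    Phase.DESIGN: {"read_file", "propose_plan"},
    Phase.IMPLEMENTATION: {"read_file", "edit_file"},
}

class ToolGate:
    """Hypothetical gate the agent's tool calls pass through."""

    def __init__(self, phase, in_scope_files):
        self.phase = phase
        self.in_scope = set(in_scope_files)

    def call(self, tool, path=None):
        # Structural guarantee: a tool outside the phase's allowlist
        # cannot be invoked, no matter what the model asks for.
        if tool not in ALLOWED_TOOLS[self.phase]:
            raise PermissionError(f"{tool} unavailable in {self.phase.name}")
        # Scope guarantee: edits to files outside the declared set fail.
        if tool == "edit_file" and path not in self.in_scope:
            raise PermissionError(f"{path} is out of scope")
        return f"{tool} ok"

# Design phase: the model can read and propose, nothing else.
gate = ToolGate(Phase.DESIGN, in_scope_files=["src/feature.py"])
```

The drift problem disappears not because the prompt got better, but because "don't touch X" stopped being a request and became a property of the environment.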
1. Programmers viewing programming through a career and job security lens
2. Programmers who love the experience of writing code themselves
3. People who love making stuff
4. People who don't understand AI very well and have knee-jerk cultural / mob reactions against it because that's what's "in" right now in certain circles
It is fun to read old issues of Popular Mechanics on archive.org from 100+ years ago because you can see a lot of the same personality types playing out.
At the end of the day, AI is not going anywhere, just like cars, electricity and airplanes never went anywhere. It will obviously be a huge part of how people interact with code and a number of other things going forward.
20-30 years from now the majority of the conversations happening this year will seem very quaint! (and a minority, primarily from the "people who love making stuff" quadrant, will seem ahead of their time)
It’s not simpler. It’s faster and cheaper and more consistent in quality. But way more complex.
I think we're all in denial about how bad software engineering has gotten. When I look at what's required to publish a web page today vs in 1996, I'm appalled. When someone asks me how to get started, all I can do is look at them and say "I'm so sorry".
So "coding was always the hard part". All AI does is obfuscate how the sausage gets made. I don't see it fixing the underlying fallacies that turned academic computer science into for-profit software engineering.
Although I still (barely) hold onto hope that some of us may win the internet lottery someday and start fixing the fundamentals. Maybe get back to what we used to have with apps like HyperCard, FileMaker and Microsoft Access but for a modern world where we need more than rolodexes. Back to paradigms where computers work for users instead of the other way around.
Until then, at least we have AI to put lipstick on a pig.
The Visual Basic comparison is more salient. I've seen multiple rounds of "the end of programmers", including RAD tools, offshoring, various bubble-bursts, and now AI. Just because we've heard it before though, doesn't mean it's not true now. AI really is quite a transformative technology. But I do agree these tools have resulted in us having more software, and thus more software problems to manage.
The Alignment/Drift points are also interesting, but I think they appeal to SWEs' belief that taste and discernment were what stopped this from happening in pre-AI times.
I buy into the meta-point which is that the engineering role has shifted. Opening the floodgates on code will just reveal bottlenecks elsewhere (especially as AI's ability in coding is three steps ahead and accelerating). Rebuilding that delivery pipeline is the engineering challenge.
Feel like only people like this guy, with 4 decades of experience, understand the importance of this.
Maybe if they had "prompted the agent correctly", you'd get your infrastructure to at least five nines.
If we continue down this path, not only will so-called "engineers" be unable to read or write code at all, but their agents will introduce seemingly correct code and cause outages like the ones we have already seen, such as this one [0].
AI has turned "senior engineers" into juniors, and juniors back into "interns" who cannot tell what maintainable code is, and who waste time, money and tokens reinventing a worse wheel.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
I can't believe this has to be said, but yeah. Code took time, but it was never the hard part.
I also think that it is radically understated how much developers contribute to UX and product decisions. We are constantly having to ask "Would users really do that?" because it directly impacts how we design. Product people obviously do this more, but engineers do it as a natural part of their process as well. I can't believe how many people do not seem to know this.
Further, in my experience, even the latest models are terrible "experts". Expertise is niche, and niche simply is not represented in a model that has to pack massive amounts of data into a tiny, lossy format. I routinely find that models fail when given novel constraints, for example, and the constraints aren't even that novel - I was writing some lower level code where I needed to ensure things like "a lock is not taken" and "an allocation doesn't occur" because of reentrancy safety, and it ended up being the case that I was better off writing it myself because the model kept drifting over time. I had to move that code to a separate file and basically tell the model "Don't fucking touch that file" because it would often put something in there that wasn't safe. This is with aggressively tuning skills and using modern "make the AI behave" techniques. The model was Opus 4.5, I believe.
This isn't the only situation. I recently had a model evaluate the security of a system that I knew to be unsafe. To its credit, Opus 4.6 did much better than previous models I had tried, but it still utterly failed to identify the severity of the issues involved or the proper solutions and as soon as I barely pushed back on it ("I've heard that systems like this can be safe", essentially) it folded completely and told me to ship the completely unsafe version.
None of this should be surprising! AI is trained on massive amounts of data, it has to lossily encode all of this into a tiny space. Much of the expertise I've acquired is niche, borne of experience, undocumented, etc. It is unsurprising that a "repeat what I've seen before" machine can not state things it has not seen. It would be surprising if that were not the case.
I suppose engineers maybe have not managed to convey this historically? Again, I'm baffled that people don't seem to know how much time engineers spend on problems where the code is irrelevant. AI is an incredible accelerator for a number of things, but it is hardly "doing my job".
AI has mostly helped me ship trivial features that I'd normally have to backburner for the more important work. It has helped me in some security work by helping to write small html/js payloads to demonstrate attacks, but in every single case where I was performing attacks I was the one coming up with the attack path - the AI was useless there. edit: Actually, it wasn't useless, it just found bugs that I didn't really care about because they were sort of trivial. Finding XSS is awesome, I'm glad it would find really simple stuff like that, but I was going for "this feature is flawed" or "this boundary is flawed" and the model utterly failed there.
If you gave an experienced house framer a hammer, hand saw and box of nails, and a random person off the street a nail gun and powered saw, who is going to produce the better house?
A confident AI and an unskilled human are just a Dunning-Kruger multiplier.