The catch with the "guided" piece is that it requires an already-good engineer. I work with engineers around the world and the skill level varies a lot - AI has not been able to bridge the gap. I am generalizing, but I can see how AI can 10x the work of the typical engineer working at startups in California. Even your comment about curiosity highlights this. It's the beginning of an even more K-shaped engineering workforce.
Even people who were previously not great engineers, if they are curious and have always enjoyed the learning part, are now supercharged to learn new ways of building, and they can try things out and learn from their mistakes at an accelerated pace.
Unfortunately, this group, the curious ones, is IMHO a minority.
This threat, while not yet realized, is very real from a strictly economic perspective.
AI or not, any tool that improves productivity can lead to workforce reduction.
Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
You have a choice: fire 5 people, or produce 2,000 loaves per month. But does the city really need that many loaves?
To make matters worse, all your competitors also have the same semi-automatic ovens...
> If I don’t understand what it’s doing, it doesn’t ship. That’s non-negotiable.
Holy LinkedIn
Instead of asking for answers, I ask for specific files to read or specific command line tools with specific options. I pipe the results to a file and then load it into the CLI session. Then I turn these commands into my own scripts and documentation (in Makefile).
I forbid the model from wandering off and giving me tons of irrelevant markdown text or generated scripts.
I ask straight questions and look for straight answers. One line at a time, one file at a time.
This gives me plenty of room to think about what I want and how to get it.
Learning what we want, and what we need to do to achieve it, is a precious learning experience that we don't want to offload to the machine.
This is not a dig at AI. If I take this article at face value, AI makes people more productive, assuming they have the taste and knowledge to steer their agents properly. And that's possibly a good thing even though it might have temporary negative side effects for the economy.
>But the AI is writing the traversal logic, the hashing layers, the watcher loops,
But unfortunately that's the stuff I like doing. And I also like communing with the computer: I don't want to delegate that to an agent (of course, like many engineers I have put more and more layers between me and the computer, going from assembly to C to Java to Scala, but this seems like a bigger leap).
Where do you draw the line between just enough guidance and too much hand-holding of an agent? At some point, wouldn't it be better to just do it yourself and be done with the project (while also building your muscle memory, experience, and mental model for future projects, just like tons of regular devs have done in the past)?
But I think the future of programming is English.
Agent frameworks are converging on a small set of core concepts: prompts, tools, RAG, agent-as-tool, agent handoff, and state/runcontext (an LLM-invisible KV store for sharing state across tools, sub-agents, and prompt templates).
These primitives, by themselves, can cover most low-UX business application use cases. And once your tooling can be one-shotted by a coding agent, you stop writing code entirely. The job becomes naming, describing, and instructing, and then wiring those pieces together with something more akin to flow-chart programming.
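For the sake of concreteness, here is a minimal, framework-agnostic sketch of a few of those primitives (prompts, tools, agent-as-tool, and an LLM-invisible run context). All class and tool names are hypothetical, and the tool-selection step is stubbed out where a real framework would call an LLM:

```python
from dataclasses import dataclass, field
from typing import Callable

# LLM-invisible KV store: shared across tools, sub-agents, and
# prompt templates, but never serialized into the model's prompt.
@dataclass
class RunContext:
    state: dict = field(default_factory=dict)

@dataclass
class Agent:
    name: str
    instructions: str                                  # the prompt
    tools: dict[str, Callable] = field(default_factory=dict)

    def as_tool(self) -> Callable:
        # agent-as-tool: the sub-agent is exposed to a parent
        # agent exactly like any other tool.
        return lambda task, ctx: self.run(task, ctx)

    def run(self, task: str, ctx: RunContext) -> str:
        # A real framework would call an LLM here to choose among
        # self.tools; this stub just invokes the first one.
        tool_name, tool = next(iter(self.tools.items()))
        return f"[{self.name}] {tool_name}: {tool(task, ctx)}"

def lookup_order(task: str, ctx: RunContext) -> str:
    # Written to state: visible to later tools, not to the LLM.
    ctx.state["last_order"] = "A-1001"
    return "order A-1001 found"

billing = Agent("billing", "Handle billing questions.",
                tools={"lookup_order": lookup_order})
triage = Agent("triage", "Route requests to the right sub-agent.",
               tools={"billing_agent": billing.as_tool()})

ctx = RunContext()
print(triage.run("Where is my refund?", ctx))  # nested agent/tool call
print(ctx.state)                               # {'last_order': 'A-1001'}
```

The point of the sketch is how little surface area there is: once routing, tool calls, and shared state look like this, the remaining work really is naming, describing, and wiring.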
So I think for most application development, the kind where you're solving a specific business problem, code stops being the relevant abstraction. Even Claude Code will feel too low-level for the median developer.
The next IDE looks like Google Docs.
This is the first time that I feel a level of anxiety when I am not actively doing it. What a crazy shift that I am still so excited and enamored by the process after all of this time.
But there's also a double-edged sword. I am having an even harder time moderating my working hours, which I naturally struggle with anyway. Partly because I am having so much fun and being so productive, but also because it's just so tempting to add one more feature, fix one more bug.
Except the rest of the article strongly implies he feels pretty good about it, assuming you can properly supervise your agents.
> I haven’t written a boilerplate handler by hand in months. I haven’t manually scaffolded a CLI in I don’t know how long. I don’t miss any of it.
Sounds like the author is confused, or trying too hard to please the audience. I feel software engineering now carries higher expectations to move faster, which makes it more difficult as a discipline.
I personally code data structures and algorithms for 1-2 hours a day, because I enjoy it. I find it also helps keep me sharp and prevents me from building up too much cognitive debt with AI-generated code.
I find most AI generated code is over engineered and needs a thorough review before being deployed into production. I feel you still have to do some of it yourself to maintain an edge. Or at least I do at my skill level.
I'm not sure how this sustains, though; I can't help but think this technology is going to dull a lot of people's skills, and other people just aren't going to develop skills in the first place. I have a feeling a couple of years from now this is going to be a disaster. (I don't think AGI is going to happen, and I think the tools are going to get a lot more expensive when they start charging the true costs.)
I'm sure some people are having fun that way.
But I'm also sure some people don't like to play with systems that produce fuzzy outputs and break in unexpected moments, even though overall they are a net win. It's almost as if you're dealing with humans. Some people just prefer to sit in a room and think, and they now feel this is taken away from them.
If you're going in the right direction, acceleration is very useful. It rewards those who know what they're doing, certainly. What's maybe being left out is that, over a large enough distribution, it's going to accelerate people who are accidentally going in the right direction, too.
There's a baseline value in going fast.
I'm being a little facetious, but I don't think it's far off the mark from what TFA is saying, and it matches my experience over the past few months. The worst architects we ever worked with were the ones who couldn't actually implement anything from scratch. Like TFA says, if you've got the fundamentals down and you want to see how far you can go with these new tools, play the role of architect for a change and let the agents fly.
All your incantations can't protect you
Once you have your 50k line program that does X are you really going to go in there and deeply review everything? I think you're going to end up taking more and more on trust until the point where you're hostage to the AI.
I think this is what happens to managers of course - becoming hostage to developers - but which is worse? I'm not sure.
I use AI everyday for coding. But if someone so obviously puts this little effort into their work that they put out into the world, I don’t think I trust them to do it properly when they’re writing code.
I don't understand how people are making anything that has any level of usefulness without a feedback loop with them at the center. My agents often can go off for a few minutes, maybe 10, and write some feature. Half of the time they get it wrong, I realize I prompted wrong, and I have to redo it myself or redo the prompt. A quarter of the time, they have no idea what they're doing, and I realize I can fix the issue they're writing a thousand lines for with a single-line change. The final quarter of the time I need to follow up and refine their solution, either manually or through additional prompting.
That's also only a small portion of my time... The rest is curating data (which you've pretty much got to do manually), writing code by hand (gasp!), working on deployments, and discussing with actual people.
Maybe this is a limitation of the models, but I don't think so. To get to the vision in my head, there needs to be a feedback loop... Or are people just willing to abdicate that vision-making to the model? If you do that, how do you know you're solving the problem you actually want to?
That’s the “money quote,” for me. Often, I’m the one that causes the problem, because of errors in prompting. Sometimes, the AI catches it, sometimes, it goes into the ditch, and I need to call for a tow.
The big deal, is that I can considerably “up my game,” and get a lot done, alone. The velocity is kind of jaw-dropping.
I’m not [yet] at the level of the author, and tend to follow a more “synchronous” path, but I’m seeing similar results (and enjoying myself).
That would've taken me 3 months a year ago, just to learn the syntax and evaluate competing options. Now I can get sccache working in a day, find it doesn't scale well, and replace it with recc + buildbarn. And ask the AI questions like whether we should be sharding the CAS storage.
The downside is that the AI keeps pushing me towards half-assed solutions that don't solve the problem, like just setting up distributed caching instead of distributed compilation. It also keeps lying, which requires me to redirect and audit its work. But I'm also learning much more than I ever could without AI.
There are two problems left, though.
One is, laypersons don't understand the difference between "guided" and "vibe coded". This shouldn't matter, but it does, because in most organizations managers are laypersons who don't know anything about coding whatsoever, aren't interested by the topic at all, and think developers are interchangeable.
The other problem is: how do you develop those instincts when you're starting out, now that AI is a better junior coder than most junior coders? This is something we need to think hard about as a society. We old farts are going to be fine, but we're eventually going to die (retire first, if we're lucky; then die).
What comes after? How do we produce experts in the age of AI?
We’re getting there...
oh no... this is one of my "uncanny valley" AI tropes
We might all be AI users now, though.