Do we instead hire a small number of people as apprentices to train them on high-level patterns, spotting trouble areas, and developing good 'taste' for clean software? Teach them what well-organized, modular software looks like on the surface? How to spot redundancy? When to push the AI to examine an area for design issues, testability, or security gaps? I'm not sure how to train people in this new era, and would love to hear other perspectives.
> They didn’t trade speed for learning. They traded learning for nothing. There was no trade-off. There was just loss.
I believe this conclusion is due to a methodological problem, a form of begging the question, if you will. One thing I am certain of is that humans who set their mind to something learn something, and good programmers are among the most tenacious in setting their mind to something. With agentic coding, they definitely learn different things, so I would expect syntax knowledge to be weaker, but debugging and review skills will increase overall. Why? Because there will be more code, and more breakage, and I still haven't seen any tooling that allows a non-technical person to be effective at this.
Programming knowledge has always had a half-life. The way I see it, this is a big sea change that will fundamentally alter the job of software engineers, and some non-trivial percentage will either change careers or find a sheltered, slow-moving place to finish out their working years. But for those who were not attached to hand-crafted code, AI provides power tools that empower technically minded people more than anyone else. I have full faith that the younger generation still has the same distribution of technical potential, and they will still find ways to develop their craft just as previous generations of hackers have always done.
Or, as I tell my students, "Every failure is a growth opportunity." I let them resubmit corrected projects for points, too. I'm desperate for them to get the reps in that they'd normally have had as juniors in the field.
I don't want code from someone/something that doesn't know the needs of the business, cannot find where to compromise effectively, does not understand the deployment environments their app will run in, would not know how to respond to an incident with their application in production, etc.
I don't think writing code with AI is relevant to career progress at all. What matters is that I can hold someone accountable for the code we have in prod, and they'd better have answers or they don't have a job.
If they are dependable there, only then can they be trusted with more responsibility. That's all we're really talking about. You get paid to be accountable. You do not get paid to do one narrow thing well. It should not take you a decade to learn to read and write code quickly and effectively. I'd argue that should have happened when you were in high school and college (as it did for everyone in upper management right now).
I feel like the quality of new hires has progressively become worse over the years, and we have made so many concessions to remedy it (AI included), and all it's doing is making the problem worse.
http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
We didn't care until the same process came knocking on the door for us.
And how many years have we had capable AI? Maybe it's going to take a similar timeline for people to figure out how to be good with an AI assisted workflow (if not fully automated.)
Bullshit. The busywork wasn't being done by low level engineers to train them up, they were doing it because it needed doing, it was undesirable, and they were lowest on the totem pole.
Jobs are self-training. Sure, doing other jobs may give you some intuition that can be applied to new ones. Manually writing code and fixing your human-created mistakes obviously carries over to debugging AI-written code. But people who start their careers with AI-written code will also learn how to debug AI code. You don't learn how to architect a system by coding a system somebody else architected. At best you might pick up some common patterns by osmosis, but this often breeds worse engineers, who do things as they have been done in the past without understanding why and without regard to how they really ought to be done. True understanding of why A was chosen in this case and B works better in another comes from actually doing the high-level work.
Indeed, if AI usage is like any other tool that has come before it, those who grow up using it will be much more adept at utilizing it in practice than those who are adopting it now after spending a lifetime learning a different skillset. We don't exactly lament how much worse software engineers have gotten since they no longer learn how to sort their punch cards if they drop them.
Even if you are of the opinion that the tasks junior engineers do, which AI can now do, are fundamental to becoming competent at higher-level skills, that's no problem. You can train people without them doing value-added work. Have engineers code the old-fashioned way for training purposes. It's no different from doing math problems despite calculators existing. This is a problem only for extracting underpaid labor out of junior engineers with the lie that they are being paid in experience.
There was a massive step-change in the capability of these models towards the end of 2025.
There is just no way that an experienced developer should be slower using the current tools. Doesn't match my experience at all.
The title of the article, though - absolutely true IMO