This hits harder depending on how much money, social capital, or debt you accumulated before this volatility began. If you’ve paid off your debts, bought a house, and stabilized your family life, you’re gambling with how comfortable the coming years will be. If you’re a fresh grad with student debt, no house, and no social network, you’re more or less gambling with your life.
The question is, how much faster is verification alone vs. writing the code by hand? You gain a lot of understanding when you write the code yourself, and understanding is a prerequisite for verification. The idea seems to be that a quick "LGTM" review is all that should be needed. That's fine as long as you understand the tradeoffs you are making.
With today's AI you either trade correctness for speed, or you have to accept a more modest (and highly project-specific) productivity boost.
If AI automates today's entry-level tasks, that just means "entry-level" means something different now. It doesn't mean entry-level ceases to exist. It may be the end of entry-level as we know it, but not of entry-level in general.
They're still doing it.
This study showing a 9-10% drop is odd[1], and I'm not sure about their identification criteria.
> We identify GenAI adoption by detecting job postings that explicitly seek workers to implement or integrate GenAI technologies into firm workflows.
Based on that MIT study it seems like 90+% of these projects fail. So we could easily be seeing an effect where firms posting these GenAI roles are burning money on the projects in a way that displaces investment in headcount.
The point about "BigTech" hiring 50% fewer grads is almost orthogonal. All of these companies are shifting hiring towards things where new grads are unlikely to add value, building data centers and frontier work.
Moreover, starting in 2022 the TCJA of 2017 stopped letting companies immediately write off software development costs as R&D expenses (I'm oversimplifying). This surely has more of an effect than whatever the "GenAI integrator role" postings correlate to.
It is a new and exciting tool, but it quickly hits its limits on moderately complex tasks. Also, we will see a lot more code with tricky bugs coming out of AI assistants, and all of that needs to be maintained. If software development gets cheaper per line of code, then there will be more demand. And someone has to clean up the mess created by people who have no clue whatsoever about SWE.
Once upon a time people developed software with punch cards. Even without AI, a developer today is orders of magnitude more proficient than that.
The only thing I hope I am not going to see in my lifetime is real artificial intelligence.
I find this one hard to believe. Software is already massively present in all these industries and has already replaced jobs. The last step is complete automation (i.e. drone tractors that can load up at a hub, go to the field, and spray all by themselves), but the bottleneck for that isn't "we need more code"; it's real-world issues that I don't see AI helping to solve (political ones, notably).
I don't understand the take that a junior with AI is able to replace a small team. Maybe a horribly performing small team? Even then, wouldn't it just be logical to outfit the small team with AI and then have a small team of small teams?
The alleged increased AI output of developers has yet to be realized. Individuals perceive themselves as having greatly increased output, but the market has not yet demonstrated that with more products (or competitors to existing products) and/or improved products.
> Narrow specialists risk finding their niche automated or obsolete
Exactly the opposite. Those with expertise will oversee the tool. Those without expertise will take orders from it.
> Universities may struggle to keep up with an industry that changes every few months
Those who know the theory of the craft will oversee the machine. Those who don't will take orders from it. Universities will continue to teach the theory of the discipline.
For the record, I was genuinely trying to read it properly. But it becomes unbearable by mid-article.
I'm not sure I agree with that. Right now as a senior my task involves reviewing code from juniors; replace juniors with AI and it means reviewing code from AI. More or less the same thing.
This is what I expect to happen, but why would these entry-level roles be "developers". I think it's more likely that they will be roles that already exist in those industries, where the responsibilities include (or at least benefit from) effective use of AI tools.
I think the upshot is that more people should probably be learning how to work in some specific domain while learning how to effectively use AI to automate tasks in that domain. (But I also thought this is how the previous iteration of "learn to code" should be directed, so maybe I just have a hammer and everything looks like a nail.)
I think dedicated "pure tech" software where the domain is software rather than some other thing, is more likely to be concentrated in companies building all the infrastructure that is still being built to make this all work. That is, the models themselves and all the surrounding tools, and all the services and databases etc. that are used to orchestrate everything.
Engineers > developers > coders.
> A CEO of a low-code platform articulated this vision: in an “agentic” development environment, engineers become “composers,”
I see we'll be twisting words around to keep avoiding the comparison.
Where is all the new and improved software output we’d expect to see?
A humble way for devs to look at this, is that in the new LLM era we are all juniors now.
A new entrant with a good attitude, curiosity and interest in learning the traditional "meta" of coding (version control, specs, testing etc) and a cutting-edge, first-rate grasp of using LLMs to assist their craft (as recommended in the article) will likely be more useful in a couple of years than a "senior" dragging their heels or dismissing LLMs as hype.
We aren't in coding Kansas anymore; junior and senior will not map so easily onto legacy development roles.
Curious how the Specialist vs. Generalist theme plays out: who is going to feel it *first* as AI gets better over time?
Tech layoffs have been happening even before LLMs.
On the optimistic side: I suspect it might end up being true that software gets infused into more niches, but I'm not sure it follows that this helps on the jobs-market side. Put differently, demand for software and demand for SWEs might decouple somewhat for much of that additional software demand.
1) The AI code maintenance question: who would maintain the AI-generated code? 2) The true cost of AI: once the VC/PE money runs out and companies charge the full cost, what would happen to vibe coding at that point?
Not everyone can afford it, and then we are at the point of turning a field that was so proud of needing just a computer and internet access to teach oneself into a subscription service.
Agreed but it's not an easy charge to fulfill.
Ah, there it is.
In today's corporate environment, 70% of the costs are in management and admin muddlement. Do we really think these "people skills" translate into anything useful in an AI economy?
The junior devs have far more hope.
If I were starting out today, this is basically the only advice I would listen to. There will indeed be a vacuum in the next few years because of the drastic drop in junior hiring today.
In my opinion we always needed to be versatile to stand any chance of being comfortable in these insanely fast-changing times.
My value so far in my career has been my very broad knowledge of basically the whole of computer science, IT, engineering, science, mathematics, and even beyond. Basically, I read a lot, at least 10x more than most people, it seems. I was starting to wonder how relevant that is now, given that LLMs have read everything.
But maybe I'm wrong about what my skill actually is. Everyone has had LLMs for years now and yet I still seem better at finding info, contextualising it and assimilating it than a lot of people. I'm now using LLMs too but so far I haven't seen anyone use an LLM to become like me.
So I remain slightly confused about what exactly it is about me and people like me that makes us valuable.
I’m not saying that this was prompted. I’m just summarizing it in my own way.
A few key fallacies at play here.
- A closed-world assumption: we'll do the same amount of work but with fewer people. This has never been true. As soon as you meaningfully drop the price of a unit of software (pick your favorite), demand goes up and we'll need more of them. It also opens the door to building software that previously would have been too expensive. That's why the number of software engineers has consistently increased over the years, despite a lot of stuff getting a lot easier over time.
- Assuming the type of work always stays the same. This too has never been true. Stuff changes over time. New tools, new frameworks, new types of software, new jobs to do. And the old ones fade away. Being a software engineer is a life of learning. Very few of us get to do the same things for decades on end.
- Assuming people know what to ask for. AIs do as you ask, which isn't necessarily what you want. The quality of what you get correlates very much with your ability to ask for it. The notion that you get a coherent bit of software in response to poorly articulated, incoherent prompts is about as realistic as getting a customer to produce coherent requirements. That never happened either. Converting customer wishes into maintainable/valuable software is still a bit of a dark art.
The bottom line: many companies don't have a lot of in house software development capacity or competence. AI doesn't really help these companies to fix that in exactly the same way that Visual Basic didn't magically turn them into software driven companies either. They'll use third party companies to get the software they need because they lack the in house competence to even ask for the right things.
Lowering the cost just means they'll raise the ambition level and ask for more/better software. The type of companies that will deliver that will be staffed with people working with AI tools to build this stuff for them. You might call these people software engineers. Demand for senior SEs will go through the roof because they deliver the best AI generated software because they know what good software looks like and what to ask for. That creates a lot of room for enterprising juniors to skill up and join the club because, as ever, there simply aren't enough seniors around. And thanks to AI, skilling up is easier than ever.
The distinction between junior and senior was always fairly shallow. I know people in their twenties who got labeled as senior barely out of college, maybe on their second or third job. It was always a bit of a vanity title that got awarded early because of the high demand for any kind of SE. AI changes nothing here. It just creates more opportunities for people to use tools to work themselves up to senior level quicker. And of course there are lots of examples of smart young people who managed to code pretty significant things and create successful startups. If you are ambitious, now is a good time to be alive.
Wasn't the main takeaway generally "study everything even more than you were, talk/network with everybody even more than you were, and hold on. Work more, more, more"?
There's an implicit assumption in the article that the coding models are here to stay in development. It's possible that assumption is incorrect for multiple reasons.
Maybe (as some research indicates) the models are as good as they are going to get. They're always going to be a cross between a chipper stochastic parrot and that ego-inflated junior dev who refuses to admit a mistake. Maybe when the real (non-subsidized) economics present themselves, the benefit isn't there.
Perhaps the industry segments itself to a degree. There's a big difference in tolerance for errors between a cat fart app and a nuclear cooling system. I can see a role for certified 100% AI-free development. Maybe vibe coders go in one direction, with lower quality output but rapid TTM, while a segment of more highly skilled developers focuses on AI-free development.
I also think it's possible that over time the AI hyper-productivity stuff is revealed to be mostly a mirage. My personal experience and a few studies seem to indicate this. The purported productivity boost is a result of confirmation bias and ridiculous metrics (like LOC generated) that have little to do with actual value creation. When the mirage fades, companies realize they are stuck with heaps of AI slop and no technical talent able to deal with it. A bitter lesson indeed.
Since we're reading tea leaves, I think the most likely outcome is that the massive central models for code generation fade due to enormous costs and increased endpoint device capabilities. The past 50 years have shown us clearly that computing will always distribute, and centralized mainframe style compute gets pushed down to powerful local devices.
I think it settles at an improved intellisense running locally. The real value of the "better search engine" that LLMs offer today diminishes as hard economics drive up subscription fees and content is manipulated by sponsors (the same thing that happened to Google search results).
For end users, I think the models get shoved into a box to do things they're really good at, like providing a much more intuitive human-computer interface, while structured data from that is handed off to a human developer to reason about. MCP will expand and become the glue.
I think that over time market forces will balance AI-created and human-created content, with a premium placed on the latter. McDonald's vs. a 5-star steakhouse.