Nothing about cajoling a model into writing what you want is essential complexity in software development.
In addition, when you do a lot of building with no theory, you tend to create lots and lots of new non-essential complexity.
Devtools are no exception. There was already plenty of non-essential complexity in them, and in the model era is it gone? ...no, don't worry, it's all still there. We built the shiny new layers right on top of the old decaying layers, like putting lipstick on a pig.
https://github.blog/news-insights/octoverse/octoverse-a-new-...
Ah, a work of fiction.
LLMs are all about probabilistic programming. While they are harnessed by a lot of symbolic processing (tokenization is a simple example), the core is probabilistic. No hard rules can be learned.
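To make "the core is probabilistic" concrete, here is a minimal sketch in plain Python (no real model; the vocabulary and logits are made up) of how a decoder picks the next token: it samples from a temperature-scaled softmax, so the same context can yield different continuations on different invocations whenever the temperature is above zero.

    import math
    import random

    def sample_next_token(logits, vocab, temperature=0.8, rng=random):
        # Sample one token from a softmax distribution over the vocabulary.
        # With temperature > 0 the choice is stochastic: identical inputs can
        # give different outputs on different calls; only temperature == 0
        # (argmax) is deterministic.
        if temperature == 0:
            return vocab[max(range(len(logits)), key=lambda i: logits[i])]
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return rng.choices(vocab, weights=probs, k=1)[0]

    # Made-up scores for the token after "The function returns":
    vocab = ["None", "a list", "an error", "42"]
    logits = [2.1, 1.9, 0.4, 0.1]
    print([sample_next_token(logits, vocab) for _ in range(5)])
    # e.g. ['a list', 'None', 'None', 'a list', 'an error'] -- varies run to run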
And, for what it's worth, "Real programmers don't use Pascal" [1] was not written about assembler programmers; it was written about Fortran programmers, a new Priesthood.
[1] https://web.archive.org/web/20120206010243/http://www.ee.rye...
Thus, what I expect is for a new Priesthood to emerge: prompt-writing specialists. And this is what we see, actually.
Because model output can vary widely from invocation to invocation, let alone from model to model, prompts aren't reliable abstractions. You can't send someone all of the prompts for a vibecoded program and know they will get a binary with generally the same behavior. An effective programmer in the LLM age won't be saving mental energy by reasoning about the prompts; they will be fiddling with the prompts, crossing their fingers that they produce workable code, then going back to reasoning about the code to ensure it meets their specification.
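To put that last step concretely: since the prompt isn't the artifact you can rely on, you end up pinning behavior with checks against your own specification. A rough sketch in Python, where generated_slugify stands in for whatever code the model emitted this time and the spec cases are hypothetical:

    def generated_slugify(title: str) -> str:
        # Stand-in for model-generated code; a regeneration could look entirely different.
        return "-".join(title.lower().split())

    def check_spec(slugify) -> None:
        # Properties we actually care about, independent of which variant was generated.
        assert slugify("Hello World") == "hello-world"
        assert slugify("  spaced   out  ") == "spaced-out"
        assert " " not in slugify("No Spaces Allowed")
        assert slugify("lower") == slugify("LOWER")  # case-insensitive

    check_spec(generated_slugify)
    print("this particular generation meets the spec (for these cases, at least)")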
What I think the discipline is going to find after the dust settles is that traditional computer code is the "easiest" way to reason about computer behavior. It has a learning curve, yes, but it remains the highest level of real "abstraction", with LLMs being more of a slot machine for saving some typing or boilerplate.
With programming languages, there is a transparent homomorphism between the code you write and what the machine actually executes. Programmers use this property to exercise considerable control over the computational process they're evolving while still working at a potentially high level of abstraction. With LLMs, the mapping between your input and the executable output is opaque and nondeterministic. This drives people like me batty.
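A toy illustration of that contrast in plain Python (the "compiler" and "LLM" here are stand-ins, not real tools): the first mapping is a pure function of its input, while the second also depends on sampling.

    import hashlib
    import random

    SOURCE = "def add(a, b): return a + b"

    def compile_deterministically(source: str) -> str:
        # Stand-in for a compiler: the artifact is a pure function of the source.
        return hashlib.sha256(source.encode()).hexdigest()[:12]

    def generate_from_prompt(prompt: str) -> str:
        # Stand-in for an LLM at temperature > 0: the output also depends on sampling.
        variants = [
            "def add(a, b): return a + b",
            "def add(x, y):\n    return x + y",
            "add = lambda a, b: a + b",
        ]
        return random.choice(variants)

    # Same source, same artifact -- every time.
    assert compile_deterministically(SOURCE) == compile_deterministically(SOURCE)

    # Same prompt, possibly different output -- not a function of the prompt alone.
    print(generate_from_prompt("write an add function"))
    print(generate_from_prompt("write an add function"))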
A while back I wrote a Lisp koan:
https://www.hackersdictionary.com/html/Some-AI-Koans.html
Mine goes like this:
"A student travelled to the East to hear Sussman's teachings. Sussman was expounding on low-level development in Lisp, when the student interrupted him. 'Master Sussman,' said he, 'is not Lisp a high-level language, indeed perhaps the highest-level of all programming languages?' Sussman replied, 'Once there was a man who was walking along the beach when he spotted an eagle. Brother eagle, said he, how impossibly distant is the sky! The eagle said nothing, and flew away.' Thus the student was enlightened."
The story is in some sense true: I did meet Gerald Sussman and was in some sense enlightened by him. Another hacker was talking about working in a "low-level language" like Lisp, and I corrected him, telling him that Lisp was in fact very high-level. He said "Uh... I need Jerry to explain this to you. Jerry? Can you come here a minute?" "Jerry" was Gerald Sussman, who proceeded to explain to me that Lisp was a virtual machine, one which he implemented in silico for his Ph.D. thesis:
https://dspace.mit.edu/handle/1721.1/5731
Thus I was enlightened. If Lisp is a virtual machine, so are all programming languages. And even a buck JavaScript kiddie fresh out of boot camp, working in React, is working in machine code for the JavaScript+browser+React VM. An abstraction is a point of view, and the programmer working in a "high level" programming language is really just working in the same medium as machine code: computation itself. But with a point of view that offers more convenience.
LLM work is different. LLMs are enormously complicated algorithms that give probabilistic interpretations of loose, informal human languages. So instructing a computer through the filter of an LLM is inherently probabilistic, not to mention damn near inscrutable.
This is why LLMs are being met with even more resistance than compilers were. They're not the same thing. Compilers scaled the work, which remained essentially the same. LLMs are changing it.
What if AI is better at tackling essential complexity too?
There will always be someone ready to drive the price of computation low enough that it is democratized for all. Some may disagree, but that will eventually mean local inference, as computer hardware gets better and software algorithms get cleverer.
In this AI story, you can take a guess at who "The Priesthood" of the 2020s are.
> You still have to know what you want the computer to do, and that can be very hard. While not everyone wrote computer programs, the number of computers in the world exploded.
One could say that the number of AI agents will explode and surpass the number of humans on the internet in the next few years, and that reading code generated by an AI and understanding what it does will be even more important than writing it.
That way you do not get horrific issues like this one [0], since the comments in the code are now consumed by the LLM, and due to their inherently probabilistic and unpredictable nature, different LLMs produce different code; nothing short of a team of expert humans can guarantee that it is correct.
We'll see if you're ready to read (and fix) an abundance of AI slop and messy architectures built by vibe-coders as maintenance costs and security risks skyrocket.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...