There is a third shape: experts who have become so reliant on AI, so accustomed to it, that it dilutes their previously sharp judgment and, importantly, their taste. I am seeing more and more work produced by experts that seems strangely out of character. A needlessly verbose text written by someone who was previously allergic to verbosity. An over-engineered solution (complete with CLI, storage backend, documentation, unit tests) for a trivial problem that the same person would've solved with an elegant bash one-liner only 3 years ago. The work itself is always completely immune to rational criticism, as it checks all the boxes: extensive documentation, scalable, high test coverage, perfect code style, and, for texts, perfect grammar, an inoffensive tone, a seemingly objective voice. But, for lack of a better word, it simply lacks taste.
Great article. The "elongation" of workplace artifacts resonated with me on such a deep level. It reminded me of when I had to be extra wordy to meet the 1,000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never really were, but in the past, if someone drafted a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).
So now the "productivity-gain bottleneck" is people who still care enough to review manually.
Right now we're in a gold rush. Companies, be they established ones or startups, are in a frenzy to transform themselves or launch AI-first products.
You are not rewarded for building extremely robust and fast systems - the goal right now is essentially to build ETL and data-piping systems as fast as humanly (or inhumanly) possible, and to add as many features as possible. The quality of the software is of less importance.
And, yes, senior engineers with other priorities are being overshadowed - even left in the dust - if they don't use tools to enhance their speed. As the article states, there are novice coders, even non-coders, pushing out features like you wouldn't believe. As long as these yield the right output and don't crash the systems, that's a gold star.
Of course there are still many companies whose products do not fall under that and very much rely on robust engineering - but at least in the startup space there are overwhelmingly many whose product is to gather data (external, internal), add agents, and perform some action for the client.
You need extremely competent, critically thinking technical leaders at the top to tackle this problem. But we're also in an age where people with somewhat limited technical experience are becoming CTOs or highly ranked technical workers in an org, for no other reason than that they know how to use modern AI systems and likely have a recent history of being extremely productive.
His main point, though, is this:
> I have a colleague ... who spent two months earlier this year building a system that should have been designed by someone with formal training in data architecture. He used the tools well, by the standards by which use of the tools is currently measured. He produced a great deal of code, a great deal of documentation, a great deal of what looked, to anyone who did not know what to look for, like progress. He could not, when asked, explain how any of it actually worked. The work was wrong from the first day. The schemas, and more importantly the objectives, were wrong in a way that would have been obvious to anyone with two years in the field.
I've been reading many rants like this lately. They would be more helpful if they came with examples. The author does not elaborate on how "the schemas, and more importantly the objectives, were wrong". The LLM's schema versus a "good" schema should have been in the next paragraph. That would change the article from a rant to a bug report. As it stands, we don't know what went wrong here.
It's not clear whether the trouble is that the schema can't represent the business problem, or that the database performance is terrible because the schema is inefficient. If you have the schema and the objectives, that's close to a specification. Given a specification, LLMs can potentially do a decent job. If the LLM generates the spec itself, then it needs a lot of context which it probably doesn't have.
This isn't necessarily an LLM problem. Large teams producing in-house business process systems tend to fall into the same hole. This is almost the classic way large in-house systems fail.
My company is full of managers who haven't written code in years. They hired an architect 18 months ago who used AI to architect everything. To the senior devs it was obvious that everything was massively over-engineered, yet because he used all the proper terminology, he sounded more competent to upper management than the senior staff who didn't. When called out, he would resort to personal attacks.
After about 6 months, several people left, and the ones who stayed went all in on AI. They've been building agentic workflows for the past 12 months in an effort to plug the gap left by the competent staff who departed.
The result: nothing of value has been released in the past 18 months. The business is cutting costs after wasting massive amounts on cloud compute for poorly designed solutions, and is making up for it by freezing hiring.
If people aren't aligned with the organization, then bad, BAD things happen when the political people get access to AI, and there's basically nothing you can do about it. They can use AI to fake things for a very extended time, then find the optimal way to cover up the problem before the consequences surface, and by that point they've already moved so far up the ladder that the consequences don't matter to them anymore. IMO it's actively unsolvable in any org that is already deeply infested with politics.
On the other hand, having really smart people has massively increased in value. The only way to surface them is by selecting naturally on actual merit, which only an entrepreneurial environment can reliably provide.
All of this means that I think startups with star teams are going to absolutely dominate for a few years (as in not just executing faster with less bandwidth, but literally winning outright at everything) until near-full AI automation starts making the big firms win again, simply by virtue of throwing tokens at the problem.
* Many software engineers never did real engineering work during their entire careers. In large companies it's even harder - you arrive as a small gear and are inserted into a large mechanism. You learn some configuration language some smart-ass invented to get a promo, "learn" the product by cleaning tons of those configs, refactoring them, "fixing" results in another bespoke framework by adjusting some knobs in the config language you are now an expert in. Five years pass and you are still doing that.
* There are many near-engineering positions in the industry. The guy who always told you how much he liked working with people and that's why he stopped coding; the lady who was always fascinated by the product and working with users. They all fill the space in small and large companies as .*M
* The train moves slowly, especially in large companies. Commit-to-prod can easily span months, with six months being the norm. For some large, critical systems, agentic code still hasn't reached production as of today.
Considering the above: AI is replacing some BS jobs; people who were near-code but above it suddenly enjoy vibe-coding; and their shit still hasn't hit the fan in slow-moving companies. But oh man, it looks like a productivity boom.
- intelligent autocomplete: the "OG" llm use for most developers, where the generated code is just an extension of your active thought process and you maintain the context of the code being worked on rather than outsourcing your thinking to the llm
- brainstorming: llms can be excellent at taking a nebulous concept/idea/direction and expanding on it in novel ways that can spark creativity
- troubleshooting: llms are quite good at debugging an issue like a package conflict, random exception, bug report, etc., and helping guide the developer to the root cause. llms can be very useful when you're stuck and don't have a teammate one chair over to reach out to
- code review: our team has gotten a lot of value out of AI code review, which tends to find at least a few things human reviewers miss. it's not a replacement for human code review but more akin to a smarter linting step (see the sketch after this list)
- POCs: llms can be good at generating a variety of approaches to a problem that can then be used as inspiration for a more thoughtfully built solution
these uses accelerate development while still putting the onus on the developers to know what they're building and why.
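as a rough sketch of that "smarter linting" idea (not our actual setup; the model name, prompt, and diff range here are my own placeholder assumptions):

```python
# hypothetical sketch: LLM review as an extra lint pass over a PR diff.
# assumes the `openai` package and an OPENAI_API_KEY in the environment.
import subprocess
from openai import OpenAI

def review_diff(base: str = "origin/main") -> str:
    # the same input a human reviewer sees: the changes under review
    diff = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    resp = OpenAI().chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "You are a code reviewer. Flag likely bugs, race conditions, "
                "and missing error handling. Ignore style.")},
            {"role": "user", "content": diff},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(review_diff())
```

the output lands in the PR as one more set of comments; humans still decide which findings are real.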
related, i feel it's likely teams that go "all in" on agentic coding are going to inadvertently sabotage their product and their teams in the long run.
This made me think of How I ship projects at big tech companies[1], specifically "Shipping is a social construct within a company. Concretely, that means that a project is shipped when the important people at your company believe it is shipped."
I just finished working with a client that is producing documents as described in this quote. The first time I recognized it was when someone sent me a 13-page doc about a process and vendor when I needed a paragraph at most. In an instant, my trust in that person dropped to almost zero. It was hard to move past such a blatant asymmetry in how we valued each other's time and the willingness to think and then write concise words.
i have found some small amusement in responding in kind to people that do this (copy/pasting their ai output into my ai, pasting my ai response back). two humans acting as machines so that two machines can cosplay communicating like humans.
AI is incredible in three scenarios: a) what I just described, to get you started, b) to generate artifacts that can be rigorously checked (and I don't mean tests, I mean proofs), c) where your artifacts don't have a meaningful notion of correctness, like a work of art.
c) is a matter of taste, b) certainly scales, but a) is where I think trust will be essential, and I am not ready to trust anyone with that except myself.
Oh, and I think c) is currently being applied to software engineering by people who cannot distinguish the engineering part of software from the art part. Which is just funny right now, and will eventually be catastrophic.
Ditto. LLMs will somehow find fault in code that I know is correct when I tell them there's something arbitrarily wrong with it.
The problem is that LLMs often take things literally. I've never successfully had an LLM design an entire system autonomously (even with planning).
> time wasted using AI on tasks that did not need it, on artifacts no one will read, on processes that exist only because the tool made it cheap to construct them. On decks that spell out things that previously didn’t even need to be said or were assumed.
I work at MSFT, and at least in my org this is happening at warp speed. With every document I read, my first thought is: what is the kernel of the idea that the writer was trying to convey? Because 95% of the content of the doc is just verbiage. You can always tell it's verbiage: the em-dashes, the rhythmic text, the green check mark emoji, etc. We are hoping that volume of output will make up for quality, or the lack thereof. More markdown files, more AGENTS.md files, but is that making us better developers? It certainly gives the illusion that we are faster, but I don't know how management thinks this will lead to tangible impact on the top line or bottom line.
In my experience, some of the best writing (in design docs and PM specs) at MSFT has been human-written. You can see the clarity of purpose from the writer; there is no need to read it again. It is the equivalent of having a 1-on-1 with the writer themselves. But AI-written slop, the less said the better.
This piece hits home, I wonder how the experience is at other Big Tech companies.
For example, I was tasked to look into a company-wide solution for a particular architectural problem. I thought delivering a sound solution would give me some kudos; alas, I wasn't fast enough. An intern had already figured it out and written a TOD. I find myself too tired to compete.
Some of the sources I need to use come from agencies in the government or working with the government and are often over a thousand pages long.
So AI has been incredibly helpful here because a lot of what I need to do is map this huge bureaucratic set of guidelines and policies to each customer’s particular situation.
Aware of the sloppy nature of LLMs I created my own workflow that resembles more coding than document drafting.
I use Codex, VSCode and plain markdown, I don’t use MS Word or Copilot like all my other colleagues.
I invest a great deal of time still doing manual labor like researching and selecting my sources, which I then make available for Codex to use as its single source of truth.
I start with a skill that generates the outline, which is often longer than it should be. Sometimes I get, say, an 18-section outline and I ask Codex to cut it in half. Then I ask for a preliminary draft of each section (each in a separate markdown file) and read through and update as necessary, before I ask the agent to develop each section in full, then proofread and update again.
When I'm satisfied, I merge all the sections into one single markdown file and run another skill to check for repetition, ambiguity, length, etc., and usually a few legitimate improvements come back.
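The merge step itself is mechanical. A minimal sketch of what it amounts to (the sections/ folder and draft.md name are illustrative, not my exact setup):

```python
# merge per-section markdown drafts into one document, in filename order;
# assumes sections are numbered, e.g. sections/01-scope.md, sections/02-terms.md
from pathlib import Path

sections = sorted(Path("sections").glob("*.md"))
merged = "\n\n".join(p.read_text(encoding="utf-8").strip() for p in sections)
Path("draft.md").write_text(merged + "\n", encoding="utf-8")
print(f"merged {len(sections)} sections into draft.md")
```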
The whole process can still take me several days to produce a 20-30 page compliance document, which gets read, verified, and approved by me and others on my team before it goes out.
The productivity gains are pretty obvious, but most importantly I think the content is of better quality for the customer.
More precisely, this feels like a person who would be loved by management. The article almost reads like a practical manual for increasing perceived productivity inside a company.
The argument is repetitive:
1. AI generates convincing-looking artifacts without corresponding judgment.
2. Organizations mistake those artifacts for progress.
3. Managers mistake volume for competence.
The article explains this same structure several times. In fact, the three main themes are mostly variations of the same claim: AI allows people to produce output without having the competence to evaluate it.
The problem is that the article criticizes a context in which one-page documents become twelve-page documents, while exhibiting the same problem in its own form.
The references also do not seem to carry much real argumentative weight. They mostly decorate an already intuitive workplace complaint with academic authority. This is something I often observe in organizations: find a topic management already wants to hear about, repeat the central thesis, and cite a large number of studies that lean in the same direction.
There is also an irony here. The article criticizes a certain kind of workplace artifact, but gradually becomes very close to that artifact itself. This kind of failure, criticizing a pattern while reproducing it, seems almost like a recurring custom in the programming industry.
Personally, I almost regret that this person is not in the same profession as me. If someone like this had been a freelancer, perhaps the human rights of freelancers would have improved considerably.
For the most part.
In this case, it decided to give me a whole bunch of crazy threaded code, and, for the first time in many years, my app started crashing.
My apps don't crash. They may have lots of other problems, but crashing isn't one of them. I'm anal. Sue me.
For my own rule of thumb, I almost never dispatch to new threads. I will often let the OS SDK do it, and honor its choice, but there are very few places where I find that spawning a worker myself actually buys me anything more than debugging misery. I know that doesn't apply to many types of applications, but it does apply to the ones I write.
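To illustrate the rule of thumb (a sketch in Python just for brevity; the worker function is hypothetical): rather than spawning threads by hand, hand the work to a managed pool and honor the runtime's scheduling.

```python
# sketch: prefer a bounded, managed pool over hand-rolled threads.
# hand-rolled threads must be created, started, joined, and guarded
# yourself - misery to debug; a pool bounds concurrency and surfaces
# results and exceptions in one place.
from concurrent.futures import ThreadPoolExecutor

def make_thumbnail(path: str) -> str:
    # hypothetical stand-in for real work
    return path.replace(".png", ".thumb.png")

paths = ["a.png", "b.png", "c.png"]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(make_thumbnail, paths))
print(results)
```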
The LLM loves threads. I realized that this is probably because it got most of its training code from overenthusiastic folks enamored with shiny tech.
Anyway, after I gutted the screen and added my own code, the performance increased markedly and the crashes stopped.
Lesson learned: Caveat Emptor.
This resonates. It's a spectacular full-reversal kind of tragedy, because it used to be asymmetric the other way: the author puts in 10 effort points compiling valuable information and the reader puts in 1 effort point to receive the transmission.
A new feature at the company typically goes the following way:
- some request is raised by person1
- PR is generated with an "agent" by person2
- PR is reviewed using an "agent" by person3
- feature is merged and shipped
- person1 is happy and records a video with a feature to be shown to the clients
- in a next call with the leadership this feature is declared as a success
It all looks good until you look at the implementation; on top of that, there is very little time to intervene. I recently find myself trying to review PRs before they get quickly merged, just to be on the safe side, as people do not even look at the code.
What we, collectively as a species, are building now with AI is a mirror that reflects the failures and successes we contributed to.
No engineer here has a perfect record. No senior or principal either. We make a ton of mistakes that are rarely written about.
This is an opportunity for the ones that assume they have mastered the craft to put up or shut up. Anyone can write a blog with or without AI.
Put your skills to work and implement the system that solves the problem you lament. Otherwise, get off my lawn.
It's another voice screaming into the void without offering a solution. The solution is not to build a faster horse. It is not to reminisce about the past. That ship has sailed.
Fix the problem. It's the 100th blog repeating the same thing we've read for two years. Nothing was accomplished here except wasting time on the obvious to pat yourself on the back.
A lot of time is being wasted writing blogs raising red flags.
That's the easy part.
> Also, those that claimed this article is ironically a casualty of it’s own complaint are 100% right, Kudos.
Why would the article be a casualty of its own complaint?
I’ve not seen a cohesive statement on what the world looks like when LLMs can do work perfectly (which on a long enough timeline is coming).
Do Google/Anthropic/OpenAI capture all the value? Do clients still want consultancies? If the client wants something that a human would use to do something, does that project hold any value in an LLM-dominant world? Why even bother?
> Never ask a model for confirmation; the tool agrees with everyone
If asked properly, LLMs can be used to poke holes in existing reasoning or to come up with new ideas or things to explore. So yes, never ask a model for confirmation or encouragement; but you can absolutely ask it to critique something, and that's often of value.
The entire article resonates, but that particular passage gets at the core of a lot of my current frustrations around the use of these systems. Great article!
> An NBER study of support agents [2] found generative AI boosted novice productivity by about a third while barely helping experts. Harvard Business School researchers found the same pattern in consulting work [3].
The first work cited was a research study on GPT-3(!) from 2020, a barely coherent model relative to today's SOTA.
The second HBS research study literally finds the opposite of what's claimed:
> we observed performance enhancements in the experimental task for both groups when leveraging GPT-4. Note that the top-half-skill performers also received a significant boost, although not as much as the bottom-half-skill performers.
Where bottom-half-skill participants with AI outperformed top-half-skill participants without AI. (And top-half-skill participants gained another 11% improvement when paired with AI.) Again, GPT-4-level model intelligence (3 years ago) is a far cry from frontier models today.
Seeing the idea explored in such depth is great; I really am concerned about this.
Solution: managers need to ask 'how does $THING_YOU_MADE actually work?'.
Pre-AI, it could be taken for granted that if someone was skilled enough to write complex code/documentation, then they had a sound understanding of how it worked. But that's no longer true. It only takes 5 minutes of questioning to figure out whether they know their stuff. It's just that managers aren't asking (or perhaps aren't skilled enough to judge the answers).
On the issue of over-enthusiasm from upper management, this may be only temporary since it makes sense to try lots of new ideas (even the crazy ones) at the start of a technological revolution. After a while it will become clearer where the gains are and the wasteful ideas will be nixed.
Maybe this means AI has democratized Death Marches.
"A growing body of work calls this output-competence decoupling"
Given that I don't think he meant that there's a thing called "output competence," I think he meant "output/competence decoupling."
If I'm having it do stuff I'm unfamiliar with, it does tend to do better than I would, or at least steer me in a direction where I can be more informed about making decisions.
One day, 100 orders come in for you to update. The next day, you get 50 orders to update. Did your productivity just get cut in half? If you get 200 orders on the third day, did you just quadruple your productivity from the previous day?
IYKYK
No surprise! Do you remember agile? Sometimes it was pragmatically applied towards efficiency; sometimes it became a bullshit religion full of priests and ceremonies. And on I could go with more examples; the gist stays the same: new tools, increased speed, and whether that means a faster crash or faster travel depends on the trajectory the company/team/project/thing was already on.
A special note on "People who cannot write code are building software." "Fuck yeah" to that! Devs have shipped bad software to people in other departments/domains for ages. Those people would never try to build something better if what they had was good in the first place.
When we (coders/startups) were doing it, it was "innovation"; now it's "elephants in the china shop"? And this is not a rhetorical snappy question: that IS innovation. Instead of criticizing the "wrong schema" ... understand the idea, help build it, and do the job: ship code that works and is safe.
Also, grey-beard here: please don't think you can ever have a stable job, especially when code is around. It keeps changing; it always has and it always will. "AI bringing unprecedented changes" is hype. The world has always changed fast.
If "you" picked software development because of salary, you are in danger. If you did it because you love it, then tell me with a straight face this is not one of the best moments to be alive.
AI promises "you don't even need to understand the problem to get work done!" But doing the work is how I understand problems, and understanding the problem is the bottleneck.
I work at an "AI-first" startup. Being "The Expert", my work has become 90% reviewing the tons of crap that confident BD people now produce, pretending to understand stuff that has never been their domain, proudly showing off their 20-page hallucinated docs in the general chat as the achievement of their life.
"Heads up folks, I wrote this doc! @OP can you review for accuracy and tone pls?"
And don't hit me with the smartass "just say no"; it's not an option. I tried that initially. I have a pretty senior position in the org, and I complained to the CTO, whom I report to, and to the BD managers as well, that I do not have the bandwidth to review AI-produced crap. A couple of weeks later, the CEO and leadership were on an org call spelling out loud that "we should collaborate and embrace AI in all our workflows, or we will be left behind". They even issued requirements to write a weekly report about "how AI improved my productivity at work this week". Luckily I am senior enough to afford ignoring these asks, but I feel bad for all my younger colleagues, who are basically forced to train their replacements. I am not even sure at this point whether this is all part of the nefarious corporate MBA "we can finally get rid of employees" wet dream, whether it's just virtue-signalling to investors, or whether the CEO and friends genuinely believe their own words. I have the feeling leadership (not only in my org) has gone into AI-autopilot mode and just disappeared to the sunny tropical beaches they always wanted to belong to.
I would happily find another workplace at this point, but you know how the market is right now, and anyway, I have the feeling that this shit is happening pretty much anywhere money is.
Everyone feels smart now, and it's a curse.
God, how I hate this. It's making my life miserable.
This is not new. This happened with every new technology or paradigm change. The old norms take a while to adapt to the new world and it involves some pain, emitting writings like this one.
Impersonation, using abilities that are not biologically their own, has been the strategy of dominance for the human race. Horse-riding knights with bows and arrows dominated other humans who didn't have horses or arrows.
What are you complaining about? Quality of the software produced? Quality of objectives? Here is the truth. None of that is the root goal. You need to change your assumptions and norms and root goals.
I, for one, welcome the new paradigm shift of vibe coders entering the field. I still think I have a competitive advantage with my 30+ years of coding experience, but I don't think it's wrong for vibe coders to enter my turf. I think the value of code is rapidly trending asymptotically to ZERO. Code has no value anymore. It doesn't matter if it's slop as long as it works. If you believe that all code written by humans is sacred and infallible, you probably don't have a lot of experience working in many companies. Most human code is garbage anyway. If it's AI-generated, at least it's based on better principles, and if it's really bad you just need to reprompt it or wait for a newer version of the AI and it will automatically get better.
THIS IS THE NEW PARADIGM. THINKING YOU HAVE ANY POWER TO SWAY THE FUTURE AWAY FROM THIS PATH IS FOOLISH.
I'm currently running a migration program at work, and it turns out there's a 10 MB limit on how much data I can batch over at one time. At first I asked the AI to copy 10 rows per batch, but that was too slow. Then I asked it to change the code to do 400 rows per batch, but sometimes that failed because it exceeded the 10 MB limit. Then I said: just accumulate rows until you reach 10 MB and then send the batch off. This is working perfectly, and I'm now running it without any hitches so far. Then I asked it to add an estimate, printed after every batch, of how long it would take to finish, including the end time.
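For the curious, the approach that finally worked is plain size-capped batching. A minimal sketch of the idea (send_batch and the JSON sizing are stand-ins, not the actual generated code):

```python
# sketch: accumulate rows until the next row would push the batch past the
# 10 MB cap, then flush; print progress and an estimated finish time.
import json
import time
from datetime import datetime, timedelta

LIMIT = 10 * 1024 * 1024  # the 10 MB per-batch cap

def send_batch(batch):
    ...  # hypothetical stand-in: ship one batch to the destination

def run_migration(rows, total_rows):
    batch, batch_bytes, done = [], 0, 0
    start = time.monotonic()

    def flush():
        nonlocal batch, batch_bytes, done
        send_batch(batch)
        done += len(batch)
        rate = (time.monotonic() - start) / done  # seconds per row so far
        eta = datetime.now() + timedelta(seconds=rate * (total_rows - done))
        print(f"{done}/{total_rows} rows, estimated finish {eta:%H:%M:%S}")
        batch, batch_bytes = [], 0

    for row in rows:
        size = len(json.dumps(row).encode("utf-8"))
        if batch and batch_bytes + size > LIMIT:
            flush()  # this row would exceed the cap; send what we have
        batch.append(row)
        batch_bytes += size
    if batch:
        flush()
```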
I really love this new world we're living in with AI coding. Sure this could have been done by someone without experience, but at least for right now the ideas I can come up with are much better than those without any experience, and that's hopefully the edge that keeps me employed. But whatever the new normal is, I'm ready to adapt.
> Schemas were all wrong
Why'd you let him run wild for two months? What software org would let anyone, even a principal, do that? Wouldn't the very first thing you'd do be to review the guy's schema? This reads like all the other snarky posts on HN about how everyone is punching above their pay grade, while the people who are much more advanced in the space just watch like two trains colliding.
I'll tell you what is productive in the workplace. Communication. That is it. Communicate and lift the guy up, give the guy a running start instead of chilling in the break room snarking with all your snarky co-workers.
The skin in the game one, in particular, is something I've been thinking about. People have been telling me LLMs are "more intelligent" than "average people". But it's easy to sound intelligent when you have no skin in the game. People have to stand by their word and suffer the consequences of their actions. It's not enough just to sound intelligent.
It seems appropriate also to share an anecdote of an incident that recently happened in my job. A colleague submitted some code for review, quite a lot of it. A second colleague reviewed and questioned a piece of code. Rather than answer the question with a justification, the question was taken rhetorically and the code was removed. The code then failed in production because the removed code was, in fact, necessary. The LLM obviously "knew" this, but neither colleague did. It's leading me to introduce a "no rhetorical questions in code review" rule. The submitter must be able to justify every line of code they submit.
And the worst offenders are those insisting this isn't the case.
I'm finding it difficult to agree that document creation is now zero-cost whereas consumption is high-cost. I think you can actually spend time giving AI enough context to consume docs for you.
I think the other thing worth pointing out with the article is understanding what your company will recognise. Yes, it's totally correct that your company won't thank you for pooh-poohing the idiot with AI. Yes, they'll run into a buzz saw when they hit a stakeholder who can choose whether to buy in. Don't burn your career protecting theirs. In fact it's not even certain that the idiot is damaging their career (for many reasons).
This was a really interesting article.
He also had a serious case of cargo-cult mentality. He'd see some behavior and ascribe it to something unrelated, then insist with almost religious fervor that things had to be coded in a certain way. He was also a yes-man who would instantly cave to whatever whim management indicated. We'd go into a meeting in full agreement that a feature being requested was damaging to our users, and he'd be nodding along with management like a bobble-head as they failed to grasp the problem.
Management never noticed that he was constantly misleading other teams, or that he checked in flaky code he found on the Internet that triggered multiple days of developer time to debug. They saw him as a highly productive team player who was always willing to "help" others.
He ended up promoted to management.
Anyway, my point is that management seems to care primarily about having their ego boosted, and about seeing what they perceive as a hard worker, even if that worker is just spinning his wheels and throwing mud on everyone else. I'm sure that AI is only going to exacerbate this weird, counter-productive corporate system.
I've been on the receiving end of this and it sucks. It shows lack of care and true discernment. Then you push back and again, you're arguing with Claude, not the person.
I don't know what the solution is here. :(
---
> He produced a great deal of code, [...] He could not, when asked, explain how any of it actually worked. [...] When opinions were voiced even as high as a V.P., he fought back.
AI has democratized coding, but people have yet to understand that it takes expertise to actually design a system that can handle scale. Of course you can build a PoC in a few hours with Claude Code, but that won't generate value.
The reason why we see such examples in the workplace is because of the false marketing done by CEOs and wrapper companies. It just gives people a false hope that "they can just build things" when they can only build demos.
Another reason is that the incentives in almost every company have shifted to favour a person using AI. It's like the companies are purposefully forcing us to use AI, to show demand for AI, so that they can get a green signal to build more data centers.
---
> So you have overconfident novices able to improve their individual productivity in an area of expertise they are unable to review for correctness. What could go wrong?
This is a much-needed point to raise.
I have many people around me saying that people my age are using AI to get 10x or 100x better at doing stuff. How are you evaluating them to check whether the person actually improved that much?
I have experienced this excessively on twitter over the last few months. It is like a cult. Someone with a good following builds something with AI, and people go mad and perceive that person as some kind of god. I genuinely don't understand it.
Just as an example: after Karpathy open-sourced autoresearch, you might have seen a variety of different flavors that employ the same idea across various domains, but I think a Meta researcher pointed out that it is a type of search method, much like what Optuna does with hyperparameter searching.
Basically, people should think from first principles. But the current state of tech Twitter is pathetic; any lame idea + genAI goes viral, without even the slightest thought of whether genAI actually helps solve the problem or improves the existing solution.
(Side note: I saw a blog from someone at a top US uni writing about OpenClaw x AutoResearch, and I was like WTF?! Because, as we all know, OpenClaw was just hype that aged like milk.)
---
> The slowness was not a tax on the real work; the slowness was the real work.
Well said! People should understand that learning things takes time, building things takes time, and understanding things deeply takes time.
Someone building a web app using AI in 10 minutes is not ahead of, but behind, the person who is actually going one or two levels of abstraction deeper to understand how HTML/JS/Next.js works.
I strongly believe that the tech industry will realise sooner or later that AI doesn't make people learn faster; it just speeds up the repetitive manual tasks. And people should use AI in that regard only.
The (real) cognitive task of actually learning is still in the hands of humans, and it is slow. That's not a bottleneck; it's just how we humans are, and it should be respected.
The over-production of documents is just one symptom. It's clear that organizations are struggling to successfully evolve in the era of worker 'superpowers'. Probably because change is hard!
Perhaps this is indicative of a failure of imagination as much as anything? The AI era is not living up to its potential if workers are given superpowers, but they are not empowered to use them effectively.
Empowered teams and individuals have more accountability and ownership of business outcomes - this points to a need for flatter hierarchies and enlightened governance, supported by appropriate models of collaboration and reporting (AI helps here too!).
In the OP article the writer IMHO reached the wrong conclusion about their colleague who built a system that didn't work - this sounds like the sort of initiative that should be encouraged, and perhaps the failure here points to a lack of technical support and oversight of the colleague's project.
Now more than ever, organizations need enlightened leadership with flexible mindsets who are capable of envisioning and executing radical organizational strategies.