It doesn't help that the West has a clear bias wherein moving "up" means moving away from the work. Many executives don't know what good looks like at the detail level, so they can't evaluate AI output quality.
I think the simple explanation for why executives are so hyped about AI is that they're not familiar with its severe current limitations. For example, Garry Tan seems to really believe he's generating 10KLOC of working code per day; if he'd been a working developer, he would know he isn't.
ICs dislike this because it raises expectations and puts the spotlight on delivery velocity. In a manufacturing analogy, it’s the same as adding robots that enable workers to pack twice as many pallets per day. You work the same hours, but you’re more tired, and the company pockets the profits.
Software Engineers are experiencing, many for the first time in their careers, what happens when they lose individual bargaining power. Their jobs are being redefined, and they have no say in the matter - especially in the US where “Union” is a forbidden word.
ICs worry about doing their job (either doing it well because they care about their craft, or doing it well enough because they need to pay the bills). AI doesn't really promise them anything. Maybe they automate some of their tasks away, but that just means they will take on more tasks. For practically any IC, there is no increase in wealth and no reduction in labor time. There is only a new, quiet, lingering threat that they might be laid off if an executive decides they're not needed anymore.
That's the difference in enthusiasm about AI.
Thoughts and ideas as in "I will implement this in this structure, with these tradeoffs, and it will work with these 4 APIs and have no extra features, and here's how I (or an LLM with tools) am going to run it and test it".
Thoughts and ideas not as in "build Facebook" - a lot of people think AI can do that; it can't (but might pretend to), and it will just lead to failure.
My competitive edge did not diminish, it expanded.
That said, the central point of TFA is spot-on, though it could be made more generally, as it applies to engineering as well as management: uncertainty rises sharply the higher you climb the corporate and/or seniority ladder. In fact, the most important responsibility at higher levels is to take increasing ambiguity and transform it into much more deterministic roles and tasks that can be farmed out to many more people lower on the ladder.
The biggest impact of AI is that most deterministic tasks (and even some surprisingly ambiguous ones) are now spoken for. This happens to be the bread and butter of the junior levels, which is where most of the job displacement will happen.
I would say the most essential skill now is critical thinking, and the most essential personality trait is being comfortable with uncertainty (or as the LinkedInfluencers call it, "having a growth mindset.") Unfortunately, most of our current educational and training processes fail to adequately prepare us for this (see: "grade inflation") so at a minimum the fix needs to start there.
I seriously doubt Satya Nadella is sitting down for hours a day to use Copilot to draft detailed documents. He's being fed fantastical stories by his lackeys telling him what he wants to hear.
But I will insist that executives are more driven by FOMO than a teenager.
If you are not, you either have a boring job or do not have any ideas that are worth prototyping asynchronously. Or haven't tried AI in the last ~3 months.
For non-technical people, the current meteoric rise of AI is due to the fact that AI is now generally synonymous with "it can talk". It never _really_ registered with the wider audience that image recognition, or various filters, or whatever classifiers they stumbled upon are AI as well. What we have now is AI in the truest sense. And executives are primarily non-technical.
As for the technical people, we know how it works, we know how it doesn't work, and we're not particularly amused.
Executives do not need actively functional systems from AI to help with their own daily work. Nothing falls over if their report is not quite right. So they are seeing AI output that is more complete for their own purposes.
But also, AI is good enough to accelerate software engineering. To the degree that there are problems with the output, well, that's why they haven't fired all the engineers yet. And executives never really cared about code quality -- that is the engineers' problem.
What I'm trying to build for my small business client right now is not engineering, but it still requires some remaining employees. He's already automated a lot of it. But I'm trying to make a full version of his little call center that can run on one box like an H200, which we can rent for like $3.59/hr. Which, if I remember correctly, is approximately the cost of one of his Filipino employees.
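For scale, a back-of-envelope sketch of what that hourly rate implies per month (the $3.59/hr figure is from the comment above; running the box 24/7 for a 30-day month is my assumption):

```python
# Back-of-envelope: monthly cost of renting one H200 around the clock.
# $3.59/hr is the rate quoted in the comment; continuous 24/7 usage
# over a 30-day month is an assumption for illustration.
hourly_rate = 3.59          # USD per hour for a rented H200
hours_per_month = 24 * 30   # assuming the box runs continuously

monthly_cost = hourly_rate * hours_per_month
print(f"${monthly_cost:,.2f}/month")  # roughly $2,584.80/month
```

Whether that beats the cost of an employee obviously depends on the actual duty cycle and local wages, but it puts the comparison in the right ballpark.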
Where we are headed is that the executives are themselves pretty quickly going to be targeted for replacement. Especially those that do not have firm upper class social status that puts them in the same social group as ownership.
It’s like Marc Andreessen bloviating about how AI will replace everyone except him.
To be fair, some of this is understandable. At some level, you’re just going to see some things as a bullet point in a daily/monthly/quarterly report and possibly a 10 minute presentation. You’re implicitly assuming that the folks under you have condensed this information into something meaningful.
On top of that, places like Amazon extol the virtues of only working on projects that can be completed with entirely fungible staffing and Google tries ever so hard to electroplate this steaming turd of an ideology with iron pyrite calling fungibles "generalists."
So along comes AI coding agents, which I love as an IC because it excels at tedious work I'd rather not have to do in the first place, yet I get why others see it as a threat. But I really think it's no more of a threat than any other empty promise to cut costs with the silver bullet of the month and we just have to let the loudmouths insist otherwise until the industry figures out this isn't a magic black box. They never learn, do they? Maybe their jobs depend on never learning.
Meanwhile executives see the money related numbers go up.
Ha! Apparently the author hasn't been asked "how long will it take to code this?" yet... And isn't a common developer complaint that management does not know how to evaluate them, and substitutes things like how quickly a task gets completed, with the result that some guy looks amazing while his coworkers get stuck with all his technical debt?
In my systems programming job ICs have mostly avoided it because we don't have time to learn a new thing with questionable benefits. A lot of my team are really, really good programmers and like that aspect of the job. They don't want to turn any part of it over to a machine. Now if a machine could save us from ever dealing with Jira...
That said, I have begun using AI for some things and it is starting to be useful. It's still 50/50 though, with many hallucinations that waste time, but some cases where it caught very simple bugs (syntax or copy/paste errors). I think the experience of, say, systems programmers is very different vs python/web folks though. AI does a great job for my helper scripts in Python.
Management needs to take their own medicine though. They continue to refuse to leverage AI to do things it could actually be good at. I give management a duplicate status update 3x/week now. Why? AI could handle tracking and summarizing it just fine. It could also produce my monthly status for me.
Curious how you verify this behavior would be unique to the West?
- You ask someone to do it
- You check their work and they made some mistakes, but it's good enough to use
- You ultimately don't know if they're doing the best at their job but you have regular performance check-ins to be safe
As ICs we can complain all we want about the quality of AI, but from your manager's perspective, you using AI is not that much different from them having an employee.
It makes me think of an executive I once reported to who “increased velocity” by changing the utilization rate on a spreadsheet from 75% to 80%.
embedded/cloud/IoT --> AI --> quantum…
When the company originally known as C3 Energy changes their name to C3.quantum, you'll know we're on to the next buzzword.
For executives, that's writing code. For ICs, it's other stuff.
I’m neither a developer nor an executive, but from my vantage point the software crisis has to do with the fact that software development presents an existential risk to any organization that engages in it. It seems to be utterly resilient to estimation, and projects can run late by months or even years with no good explanation except “it’s management’s fault.” This has been discussed at length. If I had a good answer, “I wouldn’t still be working here” as the saying goes. But half a century after The Mythical Man Month, it still reads like it was written yesterday, and “no silver bullets” seems to ring true.
In my view, the software crisis will be resilient. Throwing more code, or more code per day, at a late project will make it later. There will be a grace period while the pace of coding seems exciting, but then the reality will set in: “We haven’t shipped a product.” And it will be management’s fault.
Not even sure determinism is a good axis along which to analyze this problem. It also smells strongly of concept creep - do you mean "moving up the abstraction stack" counts as "non-determinism" too?
Executives see this as way to replace labor.
The labor sees themselves being replaced.
This is a story as old as the hills.
When you analyze this as "management loves AI" and "workers hate it", it goes right back to "who owns the means of production?", and can be seen clearly through Marx's critique.
Narrator: there is not
But because time is money, I think all the benefits go to the dev. The exec still needs the dev regardless
It accomplished this not simply by eliminating my overpaid bullshit job as parasite attractor; but by putting an end to its pathetic semblance of a premise: building software to be used by, uh, someone? for, uh, something?
The various entities requesting the work (or, in later years, the layers of barely-sentient intermediaries between me and said entities) were hardly if ever clear on how exactly this was supposed to produce value; but now they're free, too! Free from having to even try to understand how answering that question is relevant - so in the end it worked out for them as well!
I am finally at liberty to do something worthwhile with my life, and while at this point I realize it'll take me some time to remember what "worthwhile" even was (or whether such a thing still exists in your imaginary world of personalized sensory bubbles), I do sleep a rich REM sleep knowing society is now capable of digging its own grave without my assistance. Seriously, I was looking at my bank account and getting a little worried.
I am told that mine is a minority position: if you happen to be the kind of person who believes that more is better, no matter more of what, rest assured you and your eventual progeny will be quite safe - for a while, anyway - in your new role as AI trainer (or is it AI fodder, let's let the market decide!)
Well, turns out when we are all busy looking the part, it becomes impossible for anyone to actually play the part; but also nobody notices, so this is fine too!
Just one request on my part: if possible, do shut up while figuring out how to better turn yourself and our world into paperclips, alright? Besides the ones that you recognize as people, a whole bunch of other people do live on this here planetation - and I hear they find all the AI blather to be mighty annoying.