Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s, despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show a clear benefit to the economy overall.
The reason is that investing in IT was very expensive, there was a lot of wasted effort, and it took a long time for the benefits to outweigh the costs across the entire economy.
And so we should expect AI to look the same: it's helping lots of people, but it's also costing an extraordinary amount of money, and for now the gains are at least offset by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise and costs will come down as we learn to integrate it with best practices.
* If I don't know how to do something, LLMs can get me started really fast. Basically, they compress the time it takes to research something down to a fraction.
* If I know something well, I find myself trying to guide the LLM to make the best decisions. I haven't reached the point of completely letting go and trusting the LLM yet, because it doesn't make good long-term decisions.
* When working alone, I see the biggest productivity boost from AI; that's where I can really get things done.
* When working in a team, LLMs are not useful at all and can sometimes be a bottleneck. Not everyone uses LLMs the same way, sharing context as a team is way harder than it should be, people don't want to collaborate, and people can't communicate properly.
* So for me, solo engineers or really small teams benefit the most from LLMs. Larger teams and organizations will struggle because there's simply too much human overhead to overcome. This matches what I'm seeing in posts these days.
Dateline: ~2010. Location: NYC. Why: Indian outsourced shops.
Now the zinger, dear HN, is this: he actually said to us (we ran a more boutique consulting firm) that "everything has to be done 3 times" and "their work is crap". But "we're getting rid of this floor".
That, IMHO, was due to geopolitical machinations aimed at inducing India to become part of the West. The immediate equation of "money for quality work" wasn't working, but the higher-ups had grander plans, and sacrificing and gutting the IT industry in the US was not a problem.
So, given the incentives these days, do not remotely pin your hopes on what these CEOs are saying. It means nothing whatsoever.
- code reviews
- asking stakeholders for opinions
- SDLC latency (things taking forever to test)
- tickets
- documentation/diagrams
- presentations
Many of these require review. Review hell doesn't magically stop at open-source projects; these things happen internally too.
Other white-collar business/bullshit-job (à la Graeber) work is meeting with people, "aligning expectations", getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.
Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.
What's different about coding is that no one actually cares about code as an output from a business standpoint. The code is just the end destination for business processes that have already been decided. For that reason, I think code is uniquely well suited to LLM takeover.
But I'm not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. An LLM that automates a new feature release, drafts a press release, and hops on a sales call to sell the product is (IMO) further off than one that turns a detailed prompt into a fully functional codebase autonomously.
Making it easier/better just means more/higher-quality "worthless" work is performed. The incentive in the not-directly-productive parts of organizations is to keep busy and maintain a stream of signals of productivity. For this, AI just raises the bar. The 25% of the work that _is_ important to producing economic value just gets reduced to 15%.
The workforce in large orgs that is most AI-adjacent is already idling along in terms of producing direct economic value. Making them 10x more productive at nonproductive work will not move critical metrics in a short timeframe.
It's worth noting that these "not directly productive" activities actually can (and often do) produce value, eventually. Things like brand identity, culture, meta-innovation, and vision (search space) are intangibles that present as cost centers but can prove invaluable over longer timescales if done right.
As a CEO I see it as a massive clog of vast amounts of content that somebody will need to check. A DDoS of any text-based system.
The other day I got a 155-page document on WhatsApp. Thanks. Same with pull requests. Who will check all this?
Figure A6 on page 45: Current and expected AI adoption by industry
Figure A11 on page 51: Realised and expected impacts of AI on employment by industry
Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry
These seem to roughly line up with my expectation that the more customer-facing or physical-product-oriented your industry is, the lower the usage and impact of AI (construction, retail).
A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.
Could it be that employers are not seeing the difference because most employees are doing something else with the time they've saved by using AI?
There's been massive wage stagnation, benefits are crap, and they play games with PTO. Most people I talk to who use AI as part of their workflow are taking advantage of something nice that has come their way for a change.
I do think we are on the verge of something, though. Once the compounding effect happens in the world of atoms (recursive robotics), it's over.
But really, are CEOs the best people to assess productivity? What do they _actually_ use to measure it? Annual reviews? GTFO. Perhaps more importantly, nothing a C-level says can ever be taken at face value when it involves their own business.
The firmwide AI guru at my shop who sends out weekly usage metrics and release notes started mentioning cost only in the last few weeks. At first it was just about engaging with individual business heads on setting budgets / rules and slowing the cost growth rate.
A few weeks later he is mentioning automated cost reporting, model downgrading, and circuit breaking at a per-user level. The daily spend at which you immediately get locked out for 24 hours is pretty low.
Once the tools help the AI to get feedback on what its first attempt got right and wrong, then we will see the benefits.
And the models people use en masse, e.g. free-tier ChatGPT, need to reach some threshold of capability where they're able to do really well on the tasks they don't do well enough on today.
There’s a tipping point there where models don’t create more work after they’re used for a task, but we aren’t there yet.
And the biggest irony is that the "scariest" projects we had at our university ended up being maybe 500-1000 lines of code. Things really must go back to hands-on programming with real-time feedback from a teacher. LLMs only output what you ask for and won't really suggest the concepts professionals use unless you go out of your way to ask, so it all seems like a vicious cycle, even though meaningful code blocks can range from 5 to 100 lines. When I use LLMs I just get information burnout trying to dig through all that info or code.
As tech becomes available to help reduce your costs and drive up your profit, the same tech also reduces your competitors' costs and perhaps lets more competitors into the market. This drives down your product prices and reduces your profit.
So you invest but see no increase in productivity. But if you don't invest, you're toast.
However, there's another factor. The J-curve for IT happened in a different era. No matter when you jumped on the bandwagon, things just kept getting faster, easier, and cheaper. Moore's law was relentless. The exponential growth phase of the J-curve for AI, if there is one, is going to be heavily damped by the enshittification phase of the winning AI companies. They are currently incurring massive debt in order to gain an edge on their competition. Whatever companies are left standing in a couple of years are going to have to raise the funds to service and pay back that debt. The investment required to compete in AI is so massive that cheaper competition may not arise, and a small number of winners (or a single one) could put anyone dependent on AI into a financial bind. Will growth really be exponential if this happens and the benefits aren't clearly worth it?
The best possible outcome may be for the bubble to pop, the current batch of AI companies to go bankrupt, and for AI capability to be built back better and cheaper as computation becomes cheaper.
> My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade.
good for 3 clicks: https://giftarticle.ft.com/giftarticle/actions/redeem/97861f...
https://www.wsj.com/video/erik-brynjolfsson-productivity-is-...
When you actually talk to people about what they do there are often many, many nuances, micro-events, micro-decisions and micro-actions in their work. This is why it can take days/weeks/months to completely train a new person for a job.
This level of detail is barely documented - anywhere. There is a huge amount of information buried in workflows that AI has barely had access to for training. A lot of this is more in the realm of world models, rather than LLMs.
So imagine trying to use AI to improve these workflows it knows so little about. Then imagine AI trying to reinvent them across an organization.
We find these use cases where AI provides great value - totally true - but these barely scratch the surface of what goes on.
- measuring productivity
- adapting to change
This article just reinforces that. Past a certain headcount, executives have little to no understanding of what an IC's day-to-day is like.
AI tooling doesn't fix the bureaucracy the c-suite helped to create.
Of course this doesn't take into account people who just pay to play around and learn, non-professional use cases, or a few other things, but it's a rough ballpark estimate.
Assuming the above, current AI models would only increase productivity for most workplaces by a relatively small amount, perhaps around 10-200 € per employee per month. Almost indistinguishable compared to salaries and other business expenses.
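To make that ballpark concrete, here's a minimal back-of-the-envelope sketch; every input below is an illustrative assumption, not a measured figure:

```python
# Back-of-the-envelope value of AI per employee per month.
# All inputs are assumptions chosen for illustration only.
hours_saved_per_month = 2.0   # assumed time saved per adopting employee
loaded_hourly_cost = 50.0     # assumed fully loaded cost, EUR per hour
adoption_rate = 0.4           # assumed share of employees who benefit

value = hours_saved_per_month * loaded_hourly_cost * adoption_rate
print(f"~{value:.0f} EUR per employee per month")  # prints ~40 EUR
```

Varying those assumptions within plausible ranges lands anywhere in the 10-200 € band, which is the point: real, but tiny next to a salary.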
That multiple AI agents can now churn out those lines nearly instantly, and yet project velocity does not go much faster, should start to make people aware that code generation is not actually the crucial cost in delivering software and projects.
I ranted recently that small mob teams with AI agents may be, in my view, the ideal team setup: https://blog.flurdy.com/2026/02/mob-together-when-ai-joins-t...
I am glad to see articles like this that evaluate impact, but I wish the following would get more public interest:
With LLMs we are chasing roughly linear growth in capability at exponentially increasing costs for power and compute.
Were you mad when the government bailed out mismanaged banks? The mother of all government bailouts might be using the US taxpayer to fund idiot companies like Anthropic and OpenAI that are spending $1000 in costs to earn $100.
I am starting to feel like the entire industry is lazy: we need fundamental new research in energy and compute efficient AI. I do love seeing non-LLM research efforts and more being done with much smaller task-focused models, but the overall approach we are taking in the USA is f$cking crazy. I fear we are going to lose big-time on this one.
Personally I have noticed strange effects: where I previously would have reached for a software package to make something or solve an issue, it's now often faster for me to write a specific program just for my use case. Just this weekend I needed a reel with a specific look to post on Instagram, but instead of trying to use something like After Effects, I could quickly cobble together a program using CSS transforms that output a series of images I could tie together with ffmpeg.
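The stitching step looks something like the sketch below; the frame naming, frame rate, and codec settings are illustrative, not exactly what I used:

```python
# Stitch numbered frames (frames/frame_0001.png, ...) into an MP4.
# Paths, frame rate, and codec settings are illustrative assumptions.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-framerate", "30",              # input sequence frame rate
        "-i", "frames/frame_%04d.png",   # numbered frames rendered earlier
        "-c:v", "libx264",               # H.264 for broad compatibility
        "-pix_fmt", "yuv420p",           # needed for most players
        "reel.mp4",
    ],
    check=True,  # raise if ffmpeg fails
)
```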
About a month ago I was unhappy with the commercial ticketing systems; they were both expensive and opaque, so I made my own. Obviously for a case like that you need discipline and testing when you take people's money, so there was a lot of focus on end-to-end testing.
I have a few more examples like this, but to make this work you need to approach using LLMs with a certain amount of rigour. The hardest part is preventing drift in the model. There are a number of things you can do to keep the model grounded in reality.
When the tool doesn't have a reproducer, it'll happily invent a story and you'll debug the story. If you ground the root cause in, for example, a test, the model gets enough context to actually solve the problem.
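As a sketch of what I mean by grounding: a small failing test that reproduces the bug gives the model something real to work against. The function here is a deliberately buggy stand-in, not real code:

```python
import pytest

def apply_discount(total: float, rate: float) -> float:
    # Deliberately buggy stand-in: re-applies the discount on every call.
    return total * (1 - rate)

def test_discount_applied_once_per_order():
    total = apply_discount(100.00, rate=0.10)
    total = apply_discount(total, rate=0.10)  # simulate a double save
    # Fails today (returns 81.0): this failure is the reproducer the
    # model gets to debug, instead of a story it invented.
    assert total == pytest.approx(90.00)
```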
Another issue is that you need to read and understand code quickly, but that's no different from working with other developers. When tests are passing I usually open a PR to myself and then review it as I normally would.
A prerequisite is that you need tight specs, but those can also be generated if you are experienced enough. You need enough domain intuition to know what ‘done’ means and what to measure.
Personally I think the bottleneck will shift from getting into a flow state to write solutions, to analyzing the problem space and verification.
If you've already got a very effective team with clear vision/goals, this technology will almost certainly help to some degree.
If you've got a sinking ship of a business, this technology will likely drag you down faster.
You always have to work backward from the customer into the technology. AI will never change that. I've found myself waffling on advice to some clients regarding AI because whether or not they can effectively leverage it depends more on what the people in the business are willing to do than what the technology can do.
Until the handoff tax is lower than the cost of just doing it yourself, the ROI isn't going to be there for most engineering workflows.
CEOs are now on the downside of the hype curve.
They went from "Get me some of that AI!" after first hearing about it, to "Why are we not seeing any savings? Shut this boondoggle down!" now that we're a few years into the bubble, the business math isn't working, and all they see is burning piles of cash.
The non-code parts (about 90% of the work) are taking the same amount of time, though.
* AI is doing real work
* Humans using AI don't seem to get more done than they would without it
There is a huge economic pressure to remove humans and just let the AI do the work without them as soon as possible.
Or even the simple utility of having a chatbot. They wouldn't be popular if they were useless.
Which to me says it's more likely that people underestimate corporate inertia.
Yeah, if your Fortune 500 workplace is claiming to be leveraging AI because it has a few dozen relatively tech-illiterate employees using it to write their em-dash/emoji-riddled emails about wellness sessions and Teams invites for trivia events… there's not going to be a noticeable uptick in productivity.
The real productivity comes from tooling that no sufficiently risk-averse pubco IS department is going to let its employees use, because when all of their incentives point to saying no to installing anything ever, the idea of granting the permissions required for agentic AI to do anything useful is a non-starter.
Maybe this bothers me more than it should.
Then I started working on some basic gRPC/fullstack crap that I absolutely do not care about, at all, but that needs to be done and uses internal frameworks that are not well documented, and now Claude is my best friend at work.
The best part is everyone else’s AI code still sucks, because they ask it to do stupid crap and don’t apply any critical thinking skills to it, so I just tell AI to re-do it but don’t fuck up the error handling and use constants instead of hardcoding strings like a middle schooler, and now I’m a 100x developer fearlessly leading the charge to usher in the AI era as I play the new No Man’s Sky update on my other PC and wait for whatever agent to finish crap.
So I’m not even in the “it’s useless” camp, but it’s frankly only situationally useful outside of new greenfield stuff. Maybe that is the problem?
There are some real changes in day-to-day software development. Programmers seem to be spending a lot of time prompting LLMs these days, some more than others, but the trend is pretty hard to deny at this point. In just 6-7 months it snowballed from mostly working in IDEs to mostly working in agentic coding tools. Codex was barely usable before the summer (I'm biased toward it since that's what I use, but it wasn't that far behind Claude Code). Its CLI tool got a lot more usable in autumn, and by Christmas I was using it more and more. The desktop app release and the new model releases only three weeks ago really spiked my usage. Claude Code was a bit earlier but saw a similarly massive increase in utility and usability.
It is still early days. This report cannot possibly take into account these massive improvements that have been playing out over essentially just the last few months. This time last year, agentic coding was barely usable; you had isolated early adopters of Claude Code, Cursor, and similar tools. Compared to what we have now, those tools weren't very good.
In the business world things are delayed much more. We programmers have the advantage that many or most of our tools are highly scriptable (by design) and easy for LLMs to figure out. As soon as AI coders figured out how to patch tool calling into LLMs, there was a massive leap in utility, as LLMs suddenly gained feedback loops based on existing tools that they could just use.
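That feedback loop is simple enough to sketch; the model function below is a stub standing in for any LLM API, and the command it issues is just an example:

```python
# Minimal sketch of an agentic feedback loop: the model proposes a shell
# command, sees the real output, and iterates. No real LLM API is assumed.
import subprocess

def model(transcript: str) -> str:
    # Stub: a real agent would send the transcript to an LLM and get
    # back the next command to run, or "DONE" when satisfied.
    return "DONE" if "passed" in transcript else "pytest -x"

transcript = "Goal: make the test suite pass."
for _ in range(10):  # hard cap so a confused agent can't loop forever
    action = model(transcript)
    if action == "DONE":
        break
    result = subprocess.run(action.split(), capture_output=True, text=True)
    # Feeding real tool output back in is what gives the LLM a feedback
    # loop; it works because our tools are scriptable by design.
    transcript += f"\n$ {action}\n{result.stdout}{result.stderr}"
```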
This has not happened yet for the vast majority of business tools. There are lots of permission and security issues, and proprietary tools that are hard to integrate with. Even word processors, spreadsheets, presentation tools, and email/calendar tools remain poorly integrated. You can really see Apple, MS, and Google struggling with this. They are all taking baby steps here, but the state of the art is still "copy this blob of text into your tool". Forget about it respecting your document theme or structure. Agentic tool usage is not widespread outside the software engineering community yet.
The net result is that the business world still has a lot of drudgery in the form of people manually copying data around between UIs that are mostly not accessible to agentic tools yet. Also many users aren't that tool savvy to begin with. It's unreasonable to expect people like that to be impacted a lot by AI this early in the game. There's a lot of this stuff that is in scope for automating with agentic tools. Most of it is a lot less hard than the type of stuff programmers already deal with in their lives.
Most of the effects this will have on the industry will play out over the next few years. We've seen nothing yet. Especially bigger companies will do so very conservatively. They are mostly incapable of rapid change. Just look at how slow the big trillion dollar companies are themselves with eating their own dog food. And they literally invented and bootstrapped most of this stuff. The rest of the industry is worse at this.
The good news is that the main challenges at this point are non technical: organizational lag, security practices, low level API/UI plumbing to facilitate agentic tool usage, etc. None of this stuff requires further leaps in AI model quality. But doing the actual work to make this happen is not a fast process. From proof of concept to reality is a slow process. Five years would be exceptionally fast. That might actually happen given the massive impact this stuff might have.
In the past 6 months, I've gone from Copilot to Cursor to Conductor. It's really the shift to Conductor that convinced me that I crossed into a new reality of software work. It is now possible to code at a scale dramatically higher than before.
This has not yet translated into shipping at far higher magnitude. There are still big friction points and bottlenecks. Some will need to be resolved with technology, others will need organizational solutions.
But this is crystal clear to me: there is a clear path to companies getting software value to the end customer much more rapidly.
I would compare the ongoing revolution to the advent of the Web for software delivery. When features didn't have to be scheduled for release in physical shipments, it unlocked radically different approaches to product development, most clearly illustrated in The Agile Manifesto. You could also do real-time experiments to optimize product outcomes.
I'm not here to say that this is all going to be OK. It won't be for a lot of people. Some companies are going to make tremendous mistakes and generate tremendous waste. Many of the concerns around GenAI are deadly serious.
But I also have zero doubt that the companies that most effectively embrace the new possibilities are going to run circles around their competition.
It's a weird feeling when people argue against me in this, because I've seen too much. It's like arguing with flat-earthers. I've never personally circumnavigated Antarctica, but me being wrong would invalidate so many facts my frame of reality depends on.
To me, the question isn't about the capabilities of the technology. It's whether we actually want the future it unlocks. That's the discussion I wish we were having. Even if it's hard for me to see what choice there is. Capitalism and geopolitical competition are incredible forces to reckon with, and AI is being driven hard by both.
Quickly slapping "AI features" on a bunch of existing products, like almost every SW company seems to have done in an effort to appear "on the cutting edge", accomplishes almost nothing.
What is AI missing that will make it useful to everyone?
Access to capital for everyone else is dropping. And the US economy is being managed by chaos monkeys, causing all kinds of supply chain disruptions. Oligopolies in almost every market are increasingly jacking up prices above market equilibrium rates as they are emboldened by a corrupted FTC.
Despite what Peter Thiel may have led you to believe, monopolies are not healthy for an economy in aggregate.
Of course the economy is slowing.
I bet many CEOs' PAs are using AI for many tasks. It's typically a role where AI is very useful: answering emails, moving meetings around, booking and buying a bunch of crap.
Unfortunately I think most of the stuff they make will be shit, but they will build it very productively.
No. BOON. A BOON to workplace productivity.
And then the writer doubles down on the error by proving it was not a typo, ending the sentence with "...was for several years a bust."
I think in retrospect it's going to look very silly.