Code takes 6-12 months to get from commit to production. Development speed was never the bottleneck; it's all the other processes that take time: infra provisioning, testing, sign-offs, change management, deployment scheduling, etc.
AI makes these post-development bottlenecks worse. Changes are now piling up at the door waiting to get on a release train.
Large enterprises need to learn how to ship software faster if they want to lock in ROI on their token spend. Unshipped code is a liability, not an asset.
AI/LLMs aren't innovation the way TCP/IP, Linux, or Postgres were. To be clear: claude/codex/gemini/grok/whatever exist for profit, to squeeze the last drop of productivity out of you until there's nothing left, and then you're disposable (laid off).
If you like AI, use open source models, use them in your side projects.
My company set up a “prompt of the week” award and brown-bag sessions to help spread adoption. We also have teams meant to develop these workflows. Clearly, management set these events up so they can pass the gains off as their own productivity. Without a real (read “monetary”) incentive or job security, the risk and cost of spreading the knowledge fall squarely on the developer.
At what point are inspiration and thought just devalued and worthless in the name of doing things instantly? The work has no soul.
It really comes into its own when you treat it as a tool that can build other tools. For example, have it build tools that force it to keep going until its work reaches a certain quality, or that run compliance checks on its outputs and tell it where it needs to fix things. Then, and only then, can you trust its work.
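A minimal sketch of that loop, assuming a Python project; the gate commands and `run_agent` are hypothetical placeholders for whatever checks and agent invocation you actually use:

```python
# Sketch: re-invoke the agent until the project's own quality gates pass,
# feeding the failure output back in. Gates and agent call are assumptions.
import subprocess

MAX_ATTEMPTS = 5
GATES = [["pytest", "-q"], ["ruff", "check", "."]]  # assumption: your checks

def gates_pass() -> tuple[bool, str]:
    """Run every gate; return overall result plus captured failure output."""
    for cmd in GATES:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        if proc.returncode != 0:
            return False, proc.stdout + proc.stderr
    return True, ""

def run_agent(feedback: str) -> None:
    """Placeholder: invoke your coding agent here, passing `feedback`."""
    # e.g. subprocess.run(["your-agent-cli", "--fix", feedback])  # hypothetical

feedback = ""
for attempt in range(MAX_ATTEMPTS):
    run_agent(feedback)
    ok, feedback = gates_pass()
    if ok:
        break
else:
    raise RuntimeError("agent never produced work that passed the gates")
```

The point is that the gates, not the model's own confidence, decide when the loop ends.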
Right now, most roles & workflows are designed around wrangling the tools you're given to do a certain job. In that regime, AI can only slide in at the edges.
The CEO has a YouTube-style platinum token plaque in their office.
In the old model, performance and OKRs were anchored in disciplines, job titles, and role-specific expectations. In the AI era, those boundaries are starting to collapse. The deeper issue is psychological and organizational: people are constantly negotiating the line between “this is my job” and “this is not my responsibility.”
That creates a key adoption problem: what is the upside of being visibly recognized as an expert AI user? If people learn that I can do faster, better, and more cross-functional work, why would I reveal that unless the company also creates a clear system for recognition, compensation, or career growth?
We are definitely struggling with the same issues the author describes, but even worse, the leaders down at the Crowd level have some perverse need to achieve reuse across their teams rather than letting their Crowd experiment. One team does something interesting, and we must stop and push that thing out to all teams in the group so everyone “benefits”. This is a scarcity mindset, which made sense pre-AI, when code was costly and ideas were more valuable.
At the same time, everyone not only has to do their work, they also need to be 25% more efficient from AI (new KPIs), so their own learning slows to a halt, and the team with the cool idea has to give presentations instead of hacking.
The bias in the assumptions here is absolutely bonkers.
Problem: GenAI is not generating any visible return on investment.
"Solution": rearrange your entire development organization around the technology and start inventing new tooling.
What's entirely obvious is that the point of such articles is not the stuff they purportedly discuss, but the normalization of assumptions those discussions are based on.
But the internet was a simpler concept for businesses: basically, you can now sell to people from their computers. AI’s promise is what? That it can approximate reasoning about things? That is a much more challenging implementation puzzle to truly solve.
I don’t know that I’ve seen anything of real substance outside coding tasks yet.
While I do believe higher developer productivity can lead to faster reaction to market forces or more A/B testing, that won't necessarily lead to a successful business, because ultimately it is rarely the software that's the issue there.
I propose employees create self-training byproducts as a result of any AI interaction, and then also work with their human manager to make sure these self-training byproducts are part of their growth plan. This can guarantee growth without losing the opportunity to interact with the intelligent AI system (on topics relevant to the company's short-, mid-, and long-term strategic advantage).
It already has; ship has sailed.
https://blog.pragmaticengineer.com/the-pulse-tokenmaxxing-as...
I'm staunchly pro-AI as a technology, but I do think the bubble is going to pop in the next year or two just because the business value won't materialize for most companies fast enough.
AI content has a look and feel people sense immediately.
It’s amazing to see how quickly things shifted from “wow this is so cool, AI is going to change everything” to folks calling out “you lazy bum, this just looks like some slop you threw together with AI… let’s get some real thinking please.”
We are firmly heading into “trough of disillusionment” territory on the hype cycle.
> I do not want to make this a cost panic story, that would be the least interesting way to think about “rented intelligence”. The question is not how to minimize token spend in the abstract, any more than the question of software delivery was ever how to minimize keystrokes.
If tokens were as cheap as keystrokes (that is, effectively free), then "How do we minimize token spend?" wouldn't be a question anyone asks. It's because keystrokes are effectively free that you'd only ask "How do we minimize the number of keys pressed during the software development process?" if you were looking for an entertaining weekend project. If keystrokes cost as much per unit of work done as the (currently heavily subsidized) tokens from OpenAI and Anthropic, you'd see a lot of focus on golfing everything under the sun, all the damn time.
Our mental models of developments like the industrial revolution, literacy, printing or suchlike tend to be a lot more straightforward than how things play out in practice.
When a bottleneck is eliminated... you tend to shortly find the next bottleneck.
Meanwhile, there is an underlying assumption everyone seems to make that "more software, more value" is the basic reality. But... I'm skeptical.
To-do lists, wishlists, bug lists, and roadmaps may be full of stuff, but...
Companies like Visa or Salesforce have already exploited all their immediate "more software, more money" opportunities.
The ones in a position to easily leverage AI are upstarts. They're starting with nothing: no code, no features, no software. With AI, presumably, they can produce more software and create value.
Also... I think overextended market rationalism leads people to see everything as an industrial revolution... which in real life is much more the exception.
The networked personal computing revolution put a PC on every desk. It digitized everything. Do we have way better administration for less cost? Not really. Most administrations have grown.
Did law fundamentally change due to digital efficiency? No, not really.
If you work on a terrible enterprise codebase... it's very possible that software quality/quantity isn't actually that important to your organization.
While AI tools have been provided pretty quickly (over a year ago; I initially used Gemini CLI, then Copilot once it added Anthropic models), management is absolutely clueless about them.
The top wants agents. Every team is asked a few times a week "what autonomous agents will you build next", and answers that the current AI lacks the agency required not to mess up critical long-running tasks and generate even more work fall on deaf ears.
(Also, ideas such as "why don't we set up a wiki page where teams can post their repetitive tasks and we can use AI to script them" are considered "not fast enough" - just build it... but we are the automation team; we automated everything we do years ago :-)
Middle managers, on the other hand, suddenly started giving juniors seniors' work and asking seniors to "tell them (the juniors) how to prompt it".
Seriously? How about I prompt it myself instead? Oh, but it makes a shitload of architectural errors and booby-traps the junior will fail to find... So now, instead of a cursory glance, I have to spend an hour reviewing a small PR from them.
And any questions about "why are you creating a new X for this instead of extending the existing one?" are met with blank panicked stares...
The essence of this BS is contained in my description of the recent "Copilot Review" incident.
We sometimes merge the same GitHub workflow files (10-line files) into dozens of repos. We have to obtain approvals for the PRs from a bunch of teams working in different timezones, but the merge has to happen everywhere at once and has to be coordinated with other work.
On the day of one such task, some "helpful hand" enabled Copilot PR reviews for the whole org.
Copilot helpfully opened 7 or 8 discussions on each PR, giving us such precious advice as "your concurrency group uses the commit sha as a differentiating factor, this will allow multiple runs to proceed concurrently", to which one is tempted to answer "no shit, Sherlock".
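(For context, the groups look roughly like this; an illustrative snippet, not our actual file. Keying the group on the SHA is deliberate, so runs for different commits proceed in parallel instead of queuing, which is exactly the behavior Copilot was "warning" us about.)

```yaml
# Illustrative only: the commit SHA in the group key is intentional,
# so runs for different commits don't queue behind each other.
concurrency:
  group: deploy-${{ github.sha }}
  cancel-in-progress: false
```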
We suddenly had almost 200 conversations to "resolve" an hour before the merge, and a bunch of approvers didn't give their approvals because "there is a discussion".
Thankfully we had Copilot, which wrote us a script in 5 minutes to resolve the problem caused by itself...
Maybe our next overnight agent can go over all our open PRs and close Copilot Review conversations with appropriate messages?
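The core of such an agent is small; here's a minimal sketch against GitHub's GraphQL API, where the bot's login name, the token env var, and the single-page (first 100 threads) pagination are all assumptions to adapt:

```python
# Sketch: bulk-resolve bot-opened review threads on one PR via GitHub's
# GraphQL API. Verify BOT_LOGIN against what the reviewer bot actually
# appears as in your org; this value is an assumption.
import os
import requests

API = "https://api.github.com/graphql"
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
BOT_LOGIN = "copilot-pull-request-reviewer"  # assumption: check your org

THREADS = """
query($owner: String!, $name: String!, $number: Int!) {
  repository(owner: $owner, name: $name) {
    pullRequest(number: $number) {
      reviewThreads(first: 100) {
        nodes {
          id
          isResolved
          comments(first: 1) { nodes { author { login } } }
        }
      }
    }
  }
}
"""

RESOLVE = """
mutation($id: ID!) {
  resolveReviewThread(input: {threadId: $id}) { thread { isResolved } }
}
"""

def gql(query: str, variables: dict) -> dict:
    resp = requests.post(API, json={"query": query, "variables": variables},
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]

def resolve_bot_threads(owner: str, name: str, number: int) -> None:
    """Resolve every unresolved thread whose first comment is from the bot."""
    pr = gql(THREADS, {"owner": owner, "name": name, "number": number})
    threads = pr["repository"]["pullRequest"]["reviewThreads"]["nodes"]
    for t in threads:
        first_author = t["comments"]["nodes"][0]["author"]["login"]
        if not t["isResolved"] and first_author == BOT_LOGIN:
            gql(RESOLVE, {"id": t["id"]})  # marks the conversation resolved
```

Point it at each open PR and it closes the bot's conversations; posting "appropriate messages" first would be one more mutation per thread.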
This is just sales copy for various AI companies, laundered through an "influencer". It might as well be the CIA sending their article to be published in Daily Post Nigeria, so that the NYT can quote it as "sources".
The title is just clickbait. The rest of the content is fluffy bunnies and rainbows. It's all summed up as "continue to consume product, but remember to also do X". Sales copy + HBR MBA bait.
The closest thing to an honest, less-than-rosy example is the "junior person" who has no idea about the code they committed.
What about the "senior person" who has no idea about the code they committed? What about the CISO who doesn't understand that pasting proprietary documents willy nilly into the LLM's gaping maw might have legal/security/common sense implications, and that it is his job to set policy on such behavior? What about the middle manager who doesn't even try to retain the most experienced dev in the company because "we don't need the headcount anymore, now that Claude is so fast"? What about the company eating its own seed corn because every single junior position has been eliminated and there are no plans for the future anymore? What about the filesystem developer who fell in love with his chatbot girlfriend and is crashing out on Discord?
Oh wait, scratch that last one. He left the company and is crashing out on his own.
Carry on, then.
Not a problem if the hired "AI" now does that job. /i