There is just too much incentive to say... no, to BELIEVE... both that AI yields 10x productivity and that AI is useless.
I am swinging wildly between the two myself. The more time I spend with AI, the more I develop this split personality where one part of me says "I hope this thing blows up before I lose my job and my children never have the chance to have an office job again" and the other says "AI is actually not easy! You have to know how to use it well, develop tools, plan, curate your context... This means I am acquiring useful skills here, trying to port Flappy Bird to COBOL".
And obviously, depending on which side controls my cortex in the moment, I may err on the "AI is useless crap" side or the "AI all the things!" side.
The headline number (12% of CEOs generating measurable returns) gets cited a lot, but I think the more revealing finding is the 56% with zero financial impact.
These are companies with enterprise AI budgets, dedicated teams, and access to every tool on the market, and the majority are getting nothing back.
PwC calls it "Pilot Purgatory." The pattern: AI gets deployed in isolated, tactical projects that don't connect to revenue (internal tooling, content drafts, meeting summaries), while the 12% they call the "Vanguard" are using AI in the product and customer experience itself (44% of the Vanguard vs. 17% of everyone else).
What I found interesting from a solo founder angle: the structural barriers causing large companies to fail at this (bureaucracy, legacy systems, misaligned incentives, multi-department approval processes) don't exist at the one-person scale.
The bottleneck for small operators is different: not knowing which workflows are worth building, in what order to build them, and what "system-level" vs. "task-level" use actually means in practice.
Curious if others have a take on why the enterprise failure rate is this high despite the investment, and whether the Vanguard pattern (AI into the product, not just the back office) matches what people are seeing in practice.
AI adoption and Solow's productivity paradox
This is a lie. It can't be zero. It is negative.
The vast majority of people I'm coming across, both online and in person where I live, have absolutely no knowledge or understanding of how to work with AI.
From Perplexity/Sonar and GPT5 I've learned that most people do not treat it like an intelligence, they treat it like a search engine with better text output.
This article reminded me of that.
I find it extremely inaccurate to claim that the issue with big companies is structure, because that - as happens far too often - ignores the root cause:
The people in charge, who don't make the necessary smart and radical-seeming decisions.
I know it's rather unpopular nowadays to point at actual, real shortcomings of people, but that's how it is. Someone, at some point, made dumb decisions or failed to make smart decisions.
"Let's put humanity's greatest invention, a functional artificial intelligence, to the task of doing paperwork."
Why aren't they making smart decisions? Well... because they can't!
It's not about structure, it's about the failure to recognize potential and ability. When you're the boss, you make the decisions that make things happen.
They can make dumb decisions, like using AI solely for paperwork, or they can make smart decisions, like driving the changes in the company that unlock its gigantic potential.
Or, in other words:
Handing a monkey a book doesn't magically make the monkey grasp the power it's holding in its hands.
> Not because you have more resources. Because you have fewer barriers.
No. It's all about decisions, decision-making and the ability to make smart decisions. When you're the person who makes the decisions, then you can take down the barriers, work around them, or at least start figuring out how to do so. Everything else is just excuses.
Barriers don't make decisions. People do. The barriers exist in their heads more than anywhere else. When you're incapable of making smart decisions, then the problem is you.