Anyway, by cutting out AI for most of my work, I really improved my well-being. I found joy in manual programming again, since soon I'll be one of the few who actually understands things :-). I rediscovered the joy of writing with a fountain pen in a notebook, and since then I retain so much more information. That's also a great opportunity for the future, when the majority will be dumbed down even more. And for philosophical interaction: I joined an online university and just read the actual books of the great thinkers and discuss them with other people and knowledgeable teachers.
What I still use AI for is correcting my sentences (sometimes) :-).
It's kind of the same as when I cut out all(!) social media a while ago. It was such a great feeling to finally get rid of all those mind-screwing algorithms.
I don't blame anyone if they use AI. Do what you like.
Adoption rate = first derivative
Flattening adoption rate = the second derivative is negative
Starting to flatten = the third derivative is negative
I don't think anyone cares what the third derivative of something is when the first derivative could easily change by a macroscopic amount overnight.
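The derivative chain above can be sanity-checked numerically. Here is a minimal sketch, assuming a logistic S-curve for adoption (an illustrative model, not the study's actual data):

```python
# Finite-difference check of the derivative chain for an S-shaped
# (logistic) adoption curve A(t) = 1 / (1 + exp(-t)).
import math

def adoption(t):
    return 1.0 / (1.0 + math.exp(-t))

def deriv(f, t, h=1e-4):
    # Central finite-difference approximation of f'(t).
    return (f(t + h) - f(t - h)) / (2 * h)

t = 1.0  # a point past the inflection, where growth is slowing

rate = deriv(adoption, t)                                          # 1st derivative
accel = deriv(lambda x: deriv(adoption, x), t)                     # 2nd derivative
jerk = deriv(lambda x: deriv(lambda y: deriv(adoption, y), x), t)  # 3rd derivative

print(rate > 0)   # True: adoption is still rising
print(accel < 0)  # True: the adoption rate is flattening
print(jerk < 0)   # True: the curve is "starting to flatten"
```

Past the inflection point of a logistic curve, all three signs match the comment above: positive first derivative, negative second and third. Which is the parent's point: all three can hold while adoption itself keeps climbing.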
I had fun with that one getting GPT-5 and ChatGPT Code Interpreter to recreate it from a screenshot of the chart and some uploaded census data: https://simonwillison.net/2025/Sep/9/apollo-ai-adoption/
Then I repeated the same experiment with Claude Sonnet 4.5 after Anthropic released their own code interpreter style tool later on that same day: https://simonwillison.net/2025/Sep/9/claude-code-interpreter...
I plan on doing this comparison every time now, because ChatGPT constantly gets things wrong, apologizes, and changes its facts, while Gemini is cheerful and positive like a salesperson.
These tools have given me tremendous doubt after a year of usage.
It’s way too early to decide whether it’s flattening out.
A company that has implemented most current AI technologies across all their applicable areas, with known, functional capabilities? That is a vastly stricter definition of full adoption.
It's the difference between access and full utilization. The gulf is massive. And I'm not aware of any major company, or really any company at all, that has said, "yep, we're done, we're doing everything we think we can with AI and we're not going to try to improve upon it."
Implementation of acquired capabilities is still in very early days. And it appears this study's definition is more like user access, not completed implementations. Somewhat annoyingly, I receive three or four calls a day, sometimes on weekends, from contracting firms looking for leads, TPMs, and ML/data scientists with genAI/workflow experience. Three months ago, without having done anything more to put my name out than however it had been found before, I was only getting one call every day or two.
I don't think this study is using a useful definition for what they intend to measure. It is certainly not capturing more than a fraction of activity.
That's a massive deal, because the AI companies today are valued on the assumption that they'll 10x their revenue over the next couple of years. If their revenue growth starts to slow down, their valuations will change to reflect that.
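For scale, 10x over a couple of years implies a very steep compounding rate. A quick back-of-the-envelope (the 10x figure is from the comment above; the year counts are illustrative):

```python
# Implied annual growth multiple if revenue must grow 10x over N years:
# annual = 10 ** (1 / N). Illustrative arithmetic, not data from the thread.
for years in (2, 3):
    annual = 10 ** (1 / years)
    print(f"10x over {years} years requires {annual:.2f}x revenue per year")
```

Roughly 3.2x per year over two years, or 2.2x per year over three; any visible flattening in adoption makes those multiples hard to defend.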
None of the tools make the difference. The thinking is what matters.
Compare to databases. You could probably have plotted a chart of database adoption rates in the '90s as small companies started running e.g. Lotus Notes, FoxPro and SQL server everywhere to build in-house CRMs and back-office apps. Those companies still operate those functions, but now most small businesses do not run databases themselves. Why manage SQL Server when you can just pay for Salesforce and Notion with predictable monthly spend?
(All of this is more complex, but analogous at larger companies.)
My take is the big rise in AI adoption, if it arrives, will similarly be embedded inside application functions.
I think what will happen is that, in parallel, more products will be built that address the engineering challenges, and the models will keep getting better. I don't know, though, whether that will lead to another hockey stick or just slow and steady growth.
What happens to all the debt? Was all this just for chatbots that are finally barely good enough for satnav, and image generation that does slightly better Photoshop that the layperson can use?
1. No y axis label.
2. It supposedly plots a “rate”, but the time interval is unspecified. Per second? Per month? Per year? Intuitively, my best guess is that the rate is per year. However, that would imply the second plot believes we are very near 100% adoption, which we know is false. So what is this? Some esoteric time interval like bi-yearly?
3. More likely, it is not a rate at all, but a plot of total adoption. In that case, the title is chosen _very_ poorly. The author of the plot probably doesn’t know what they’re looking at.
4. Without grid lines, it’s very hard to read the data in the middle of the plot.
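The rate-versus-total confusion in point 3 is easy to illustrate. A minimal sketch with made-up numbers, not the study's data:

```python
# Cumulative adoption (share of firms that have adopted; invented figures)
# versus the per-period adoption *rate* (new adopters each period).
# Plotting the first while titling it a "rate" is the labeling problem above.
cumulative = [0.02, 0.05, 0.10, 0.20, 0.30, 0.36, 0.39, 0.40]

# Per-period rate = first difference of the cumulative series.
rate = [b - a for a, b in zip(cumulative, cumulative[1:])]

print(rate)  # peaks mid-series and then falls, even as the total keeps rising
```

The two series tell different stories: the cumulative curve is still rising at the end, while the rate has been falling for several periods, so which one the chart actually shows matters a great deal.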
It's the switch between two modes. Active: know which service to use, consider its capabilities, and try to get the AI to do a thing, if you even have a thing that needs doing that it can do. Passive: the AI just does a thing for you, requiring little to no thought. Use will go up in direct relation to that changeover. The super users are already at peak; they're fully engaged. A software developer wants a very active relationship with their AI; Joe Average does not.
The complexity has to vanish entirely. It's the difference between hiding the extraordinary engineering that is Google search behind a simple input box, and making users select a hundred settings before firing off a search. Imagine if the average search user needed to know something meaningful about the capabilities of Google search or search in general, before using it. Prime Google search (~1998-2016) obliterated the competition (including the portals) with that one simple search box, by shifting all the complexity to the back-end; they made it so simple the user really couldn't screw anything up. That's also why ChatGPT got so far so fast: input box, type something, complexity mostly hidden.