> Our inability to see opportunities and fulfillment in life as it is, leads to the inevitable conclusion that life is never enough, and we would always rather be doing something else.
I agree with the article completely, as it effectively names an uneasy feeling of hesitation I’ve had all along with how I use LLMs. I have found them tremendously valuable as sounding boards when I’m going in circles in my own well-worn cognitive (and sometimes even emotional) ruts. I have also found them valuable as research assistants, and I feel grateful that they arrived right around the time that search engines began to feel all but useless. I haven’t yet found them valuable in writing on my behalf, whether it’s prose or code.
During my formal education, I was very much a math and science person. I enjoyed those subjects. They came easily to me, which I also enjoyed. I did two years of liberal arts in undergrad, and they kicked my butt academically in a way that I didn’t realize was possible. I did not enjoy having to learn how to think and articulate those thoughts in seminars and essays. I did not enjoy the vulnerability of sharing myself that way, or of receiving feedback. If LLMs had existed, I’m certain I would have leaned hard on them to get some relief from the constant feeling of struggle and inadequacy. But then I wouldn’t have learned how to think or how to articulate myself, and my life and career would have been significantly less meaningful, interesting, and satisfying.
In my own work, I found the real failure mode wasn't using AI; it was automating the wrong parts. When I let AI generate summaries or reflections for me, I lost the value of the task. Not because thinking disappeared, but because the meaning-making did.
The distinction that’s helped me is:

- If a task’s value comes from doing the thinking (reflection, synthesis, judgment), design AI as a collaborator, asking questions, prompting, pushing back.
- If the task is execution or recall, automate it aggressively.
So the problem isn’t that we outsource thinking, it’s that we sometimes bypass the cognitive loops that actually matter. The design choice is whether AI replaces those loops or helps surface them.
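For what it's worth, here is a toy sketch of that routing choice. The mode names, prompt strings, and the `value_is_in_the_thinking` flag are my own illustrative assumptions, not anything from the linked post or any particular API:

```python
# A minimal sketch of the collaborator-vs-executor distinction: route a task to a
# questioning prompt when its value lies in the thinking, and to a plain
# "just do it" prompt when it is execution or recall.

COLLABORATOR_PROMPT = (
    "Do not write the answer for me. Ask clarifying questions, point out gaps "
    "in my reasoning, and push back on weak arguments so I do the synthesis."
)

EXECUTOR_PROMPT = (
    "Complete the task directly and concisely. No questions unless the request "
    "is ambiguous."
)

def build_prompt(task: str, value_is_in_the_thinking: bool) -> str:
    """Pick the interaction mode before the model ever sees the task."""
    system = COLLABORATOR_PROMPT if value_is_in_the_thinking else EXECUTOR_PROMPT
    return f"{system}\n\nTask: {task}"

# Reflection/synthesis -> keep the human in the loop.
print(build_prompt("Summarize what I learned from this project retrospective.", True))

# Recall/execution -> automate aggressively.
print(build_prompt("Convert this CSV of dates to ISO 8601.", False))
```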
I wrote more about that here if useful: https://jonmagic.com/posts/designing-collaborations-not-just...
We already saw a softer version of this with web search and GPS: people didn’t suddenly forget how to read maps, but schools and orgs stopped teaching it, and now almost nobody plans a route without a blue dot. I suspect we’ll see the same with writing and judgment: the danger isn’t that nobody thinks, it’s that fewer people remember how.
My fundamental argument: The way the average person is using AI today is as "Thinking as a Service", and this is going to have absolutely devastating long-term consequences, training an entire generation not to think for themselves.
Google AI describes it as:
This is the most common form, often used in debates about technology, immigration, or retirement.
Definition: The belief that there is a set, finite amount of work to be done in an economy.
The Fallacy: Assuming that if one person works more, or if a machine does a job, there is less work left for others.
Reality: An increase in labor or technology (like AI or automation) can increase productivity, lower costs, and boost economic activity, which actually creates more demand for labor.
Examples:
"If immigrants come to this country, they will take all our jobs" (ignoring that immigrants also consume goods and create demand for more jobs).
"AI will destroy all employment" (ignoring that technology typically shifts the nature of work rather than eliminating it).https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
A criticism of large language models (LLMs) is that using them can deprive us of cognitive skills. Are some kinds of use better than others? Andy Masley's blog says "thinking often leads to more things to think about", so we shouldn't worry about letting machines do the thinking for us; we will be able to think about other things.
My aim is not to refute all his arguments, but to highlight issues with "outsourcing thinking".
Masley writes that it's "bad to outsource your cognition when it:"
- Builds tacit knowledge you'll need in future.
- Is an expression of care for someone else.
- Is a valuable experience on its own.
- Is deceptive to fake.
- Is focused on a problem that is deathly important to get right, and where you don't totally trust who you're outsourcing it to.
How we choose to use chatbots is about how we want our lives and society to be.
That's what he has to say. Plus some examples, which help make the message concrete. It's a useful article if edited properly.
Notice something subtle.
Early inventions extend coordination. Middle inventions extend memory. Later inventions extend reasoning. The latest inventions extend agency.
This suggests that human history is less about tools and more about outsourcing parts of the mind into the world.
Applying this analogy to human relationships: the neocortex allowed us to be more social. Social communication with the limbic system was mostly "you smell like a member of our species and I want to have sex with you". Having a neocortex expanded our social skills to things like having friends.
I think AI will have a similar effect. It will allow us to individually communicate with a large number of other people (millions). But it will be a different kind of relationship from what we today call "personal communication": face to face, driven by our neocortex. It will be as incomprehensible to our neocortex as our language is to the limbic system.
To his point: personally, I find it shifts 'where and when' I have to deal with the 'cognitive load'. I've noticed (at times) that I feel more impatient, that I tend to skim the results more often, and that it takes a bit more mental energy to maintain my attention.
The things that are actually dangerous in our lives? Not informing ourselves enough about science, politics, economics, history, and letting angry people lead us astray. Nobody writes about that. Instead they write about spooky things that can't be predicted and shudder. It's easier to wonder about future uncertainty than deal with current certainty.
This ties with how I sometimes describe current generation AI as a form of mechanized intelligence: like Babbage’s calculating machine, but scaled up to be able to represent all kinds of classes of things.
And where I'm currently coming down, in the perspective I'm circling these days, is that maybe the effect of this realization will be something like the dichotomy outlined in the Dune series: namely, that between mechanized intelligence, embodied by the Mentats, and the more intuitive and prescient aspects of cognition, embodied by the Bene Gesserit and Paul's lineage.
A simple but direct way to describe this transition in perspective may be that we come to see what we formerly thought of as intelligence in the Western/reductive tradition as a form of mechanized calculation that is possible to outsource to automatic, non-biological processes, and we start to lean more deeply into the more intuitive and prescient aspects of cognition.
One thing I’m reminded of is how Indian yogic texts describe various aspects of mind.
I'm not sure it's a one-to-one mapping, because I'm not across that material, but the mere idea of distinguishing between different aspects of mind is something with precedent; and central to that is the idea of removing the association between self-identity and those aspects of mind.
And so maybe one of the effects for us as a society will be something akin to that.
Apparently he thinks of planning a vacation as some kind of artistic expression.
The article he references gives this example:
“Is it lazy to watch a movie instead of making up a story in your head?”
Yes, yes it is. This was a worry when we transitioned from oral culture to written culture, and I think it was probably prescient.
For many, if not most, people, cultural or technological expectations around what skills you _have_ to learn probably have an impact on total capability. We probably lost something when Google Maps came out and the average person no longer had to learn to read a map.
When we transitioned from paper and the evening news to 24-hour partisan cable news, I think more people outsourced their political opinions to those channels.
I hate this take; computer code is just as rich in personality as writing. I can tell a tremendous amount about what kind of person someone is based solely on their code. Code is an incredibly personal expression of one's mental state, even if you might not realize it. LLMs have dehumanized this, and the functional outcomes become FAR more unpredictable.
If outsourcing thought is beneficial, those who practice it will thrive; if not, they will eventually cease to practice it, one way or another.
Thought, like any other tool, is useful when it solves more problems than it creates. For instance, an ability to move very fast may be beneficial if it gets you where you want to be, and detrimental if it misses the destination often enough, and badly enough. Similarly, if outsourced intellectual activities miss the mark often enough, and badly enough, the increased speed is not very helpful.
I suspect that the best results would be achieved by outsourcing relatively small intellectual acts in a way that guarantees very rare, very small errors. That is, AI will become useful when AI becomes dependable, comparable to our other tools.