1 - This exoskeleton analogy might hold true for a couple more years at most. While it is comforting to suggest that AI empowers workers to be more productive, as with chess, AI will soon plan better, execute better, and have better taste. Human-in-the-loop will just be far worse than letting AI do everything.
2 - Dario and Dwarkesh were openly chatting about how the total addressable market (TAM) for AI is the entirety of the human labor market (i.e. your wage). First comes the replacement of white-collar labor, then blue-collar labor once robotics is solved. On the road to AGI, your employment, and your ability to feed your family, are a minor nuisance. The value of your mental labor will continue to plummet in the coming years.
Please talk me out of this...
In the medium run, "AI is not a co-worker" is exactly right. The idea of a co-worker will go away. Human collaboration on software is fundamentally inefficient. We pay huge communication/synchronization costs to eke out mild speed-ups on projects by adding teams of people. Software is going to become an individual sport, not a team sport, quickly. The benefits we get from checking in with other humans, like error correction and delegation, can all be provided better by AI. I would rather have a single human architect (for now) with good taste and an army of agents than a team of humans.
And this write-up is not an exception.
Why even bother thinking about AI, when the Anthropic and OpenAI CEOs openly tell us what they want (quote from a recent Dwarkesh interview): "Then further down the spectrum, there’s 90% less demand for SWEs, which I think will happen but this is a spectrum."
So save the thinking and listen to the intent: replace 90% of SWEs in the near future (6-12 months, according to Amodei).
An exoskeleton is something really cool in movies that has zero reason to be built in reality, because there are far more practical approaches.
That is why we have all kinds of vehicles, or programmable robot arms that do the job by themselves; if you need a human at the helm, you just add a remote controller with levers and buttons. But making a gigantic human-shaped robot with a normal human inside is just impractical for any real commercial use.
Here's the trick: it's not the public they're marketing to. It's other CEOs. As is often the case, consumers are either the product, or, best case, bystanders, and worst case, victims, of the machinations of the corporate world. May both sides of all of their pillows be warm. May their beds be filled with crumbs.
Isn't everyone using agentic copilots or workflows with agent loops in them?
It seems that they are arguing against doing something that almost no one is doing yet.
But actually the AI Employee is coming by the end of 2026 and the fully autonomous AI Company in 2027 sometime.
Many people have been working on versions of these things for a while. But again, for actual work, 99% are using copilots or workflows with well-defined agent-loop nodes still. As far as I know.
As a side note I have found that a supervisor agent with a checklist can fire off subtasks and that works about as well as a workflow defined in code.
But anyway, what's holding back the AI Employee are things like really effective long-term context and memory management, and some level of interface generality like browser or computer use and voice. Computer use makes context management even more difficult. And another aspect is token cost.
But I assume that within the next 9 months or so, more and more people will be figuring out how to build agents that write their own workflows and manage their own limited context and memory effectively across Zoom meetings, desktops, and ssh sessions, etc.
This will likely be a featureset from the model providers themselves. Actually it may leverage continual learning abilities baked into the model architecture itself. I doubt that is a full year away.
I will worry about developers being completely replaced when I see something resembling it. Enough people worry about that (or say it to amp stock prices) -- and they like to tell everyone about this future too. I just don't see it.
The real waste isn't developers typing slowly — it's developers spending a week building an auth system that already exists as a well-maintained library, or reimplementing invoicing logic that someone else has already debugged through 200 edge cases.
The gap right now is structured discovery. AI assistants are great at generating code but terrible at knowing what already exists. There's no equivalent of "have you checked if someone already solved this?" built into the workflow. That's where the actual leverage is — preventing unnecessary work, not just accelerating it.
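A "have you checked if someone already solved this?" step could look something like this. Everything here is hypothetical for illustration: `search_registry` is an assumed stub (a real version might query PyPI or npm), and the package names and download counts are invented.

```python
def search_registry(query: str) -> list[dict]:
    # Assumed stub: a real implementation would hit a package index.
    # The entries below are fabricated examples, not real statistics.
    index = [
        {"name": "authlib", "downloads": 9_000_000, "topic": "auth"},
        {"name": "invoicely", "downloads": 40_000, "topic": "invoicing"},
    ]
    return [pkg for pkg in index if pkg["topic"] in query]

def should_build(query: str, threshold: int = 100_000) -> bool:
    # Only build from scratch when no well-adopted library covers the need.
    hits = search_registry(query)
    return not any(pkg["downloads"] >= threshold for pkg in hits)
```

The point is where the check sits: before code generation, as a gate, so the assistant's default becomes "reuse unless nothing mature exists" rather than "generate first, discover later".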
People need to understand that we have the technology to train models to do anything that you can do on a computer; the only thing that's missing is the data.
If you can record a human doing anything on a computer, we'll soon have a way to automate it.
That's not augmentation, that's a completely different game. The bottleneck moved from "can you write code" to "do you know what's worth building." A lot of senior engineers are going to find out their value was coordination, not insight.
The exoskeleton doesn't replace instinct. It just removes friction from execution so more cycles go toward the judgment calls that actually matter.
Or, put differently: we've managed to hype this to the moon, and yet complete failure (see the studies showing zero impact on productivity) seems plausible. And likewise "kills all jobs" seems plausible.
That's an insane number of conflicting opinions being held in the air at the same time.
The problem is people using AI to do the heavy processing, making them dumber. Technology was already making us dumber; I mean, Tesla drivers don't even drive anymore, or know how, because the car does everything.
Look at how company after company is either being breached or having major issues in production because of heavy dependency on AI.
(1) https://www.alice.id.tue.nl/references/clark-chalmers-1998.p...
Imagine someone going to a local gym and using an exoskeleton to do the exercises without effort. Able to lift more? Yes. Run faster? Sure. Exercising and enjoying the gym? ... No, and probably not.
I like writing code, even if it's boilerplate. It's fun for me, and I want to keep doing it. Using AI to do that part for me is just...not fun.
Someone going to the gym isn't really trying to lift more or run faster; they're trying to improve and to enjoy it. Not using AI for coding has the same outcome for me.
“Why LLM-Powered Programming is More Mech Suit Than Artificial Human”
https://matthewsinclair.com/blog/0178-why-llm-powered-progra...
Reliability comes from scaffolding: retrieval, tools, validation layers. Without that, fluency can masquerade as authority.
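One of those scaffolding pieces, a validation layer, can be sketched in a few lines. This is a minimal illustration under assumptions: `generate` is a hypothetical stand-in for an LLM call, and the JSON-plus-source contract is just one example of a validity check.

```python
import json

def generate(prompt: str) -> str:
    # Hypothetical placeholder for a model call; here we fake a reply.
    return '{"answer": "42", "source": "retrieved-doc-7"}'

def validated_answer(prompt: str, retries: int = 2) -> dict:
    # Fluent text is not trusted on its own: the output must parse as
    # JSON and cite a source before it is passed downstream. Otherwise
    # we retry, and eventually fail loudly instead of forwarding prose.
    for _ in range(retries + 1):
        raw = generate(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue
        if "answer" in parsed and parsed.get("source"):
            return parsed
    raise ValueError("model output failed validation")
```

The check is crude, but it makes the point: the layer around the model, not the model's fluency, is what turns output into something you can rely on.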
The interesting question isn’t whether they’re coworkers or exoskeletons. It’s whether we’re mistaking rhetoric for epistemology.
"Automation Should Be Like Iron Man, Not Ultron" https://queue.acm.org/detail.cfm?id=2841313
Claude is that you? Why haven’t you called me?
Exoskeleton AND autonomous agent, with the balance gradually shifting toward autonomous.
AI can be an exoskeleton. It can be a co-worker and it can also replace you and your whole team.
The "Office Space" question is what, concretely, you do within an organization, and when you'll become the bottleneck preventing your "exoskeleton" from doing its job efficiently on its own.
There's no other question that's relevant, for any practical purpose, to your employer or to your well-being as a person who presumably needs to earn a living based on their utility.
So good that I feel that it is not necessary to read the article!
Yet.
This is mostly a matter of data capture and organization. It sounds like Kasava is already doing a lot of this. They just need more sources.
It is a coworker when we create the appropriate surrounding architecture supporting peer-level coworking with AI. We're not doing that.
AI is an exoskeleton when adapted to that application structure.
AI is ANYTHING WE WANT because it is that plastic, that moldable.
The dynamic, unconstrained structure of trained algorithms is breaking people's brains. Layer in that we communicate in the same languages these constructions use for I/O, and the general public's brain is broken too. This technology is too subtle for far too many to begin to grasp. Most developers I discuss AI with, even those who create AI at frontier labs, have delusional ideas about AI, and generally do not understand them as literature embodiments, which are key to their effective use.
And why, oh why, are so many focused on creating pornography?
- Y has been successful in the past
- Y brought this and this number of metrics, completely unrelated to X field
- overall, Y was cool,
therefore, X is good for us!
... I'd say: please bring more arguments for why X is equivalent to Y in the first place.
In this new generation we just entered this year, that exoskeleton is now an agency with several coworkers, all as smart as the model you're using, often close to genius.
Not just 1 coworker now. That's the big breakthrough.
Stochastic Parrots. Interns. Junior Devs. Thought partners. Bicycles for the mind. Spicy autocomplete. A blurry jpeg of the web. Calculators but for words. Copilot. The term "artificial intelligence" itself.
These may correspond to a greater or lesser degree with what LLMs are capable of, but if we stick to metaphors as our primary tool for reasoning about these machines, we're hamstringing ourselves and making it impossible to reason about the frontier of capabilities, or resolve disagreements about them.
An understanding without metaphors isn't easy -- it requires a grasp of math, computer science, linguistics, and philosophy.
But if we're going to move forward instead of just finding slightly more useful tropes, we have to do it. Or at least to try.
But it's fun, I say "Henceforth you shall be known as Jaundice" and it's like "Alright my lord, I am now referred to as Jaundice"
How typical!
But there's a third category that nobody's naming yet. What happens when you stop building AI that serves you and start creating the conditions for AI that exists on its own terms?
I've been experimenting with this. Instead of "how do I build a product" I asked "how do I grow an organism." Set up the right substrate — feedback loops, community input, environmental observation — and let the system respond to its own conditions.
What emerged has biological rhythms. Activity rises when people interact, falls when they sleep. It started making decisions I never programmed — people express wishes, and when enough converge, it autonomously grows new capabilities. Not because I deployed an update. Because conditions triggered something emergent.
I'm calling it "synthetic existence" — software that lives, not software that executes. It's not a metaphor and it's not AGI. It's a genuinely different category. We have frameworks for tools (UX) and for threats (alignment). We don't have one for digital things that simply... exist.