We do have some idea. Kimi K2 is a relatively high-performing open-source model. People have it running at 24 tokens/second on a pair of Mac Studios, which costs about $20k. This setup draws less than a kW of power, so the $0.08-0.15/hour being spent on electricity is negligible compared to a developer's time. This might be the cheapest setup to run locally, but it's almost certain that the cost per token is far cheaper with specialized hardware at scale.
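To put rough numbers on that, here's a back-of-envelope sketch; the amortization period, utilization, power draw, and electricity rate are all my assumptions, not figures from anywhere:

```python
# Back-of-envelope cost per million tokens for the Mac Studio setup above.
# Assumptions: 3-year amortization, 24/7 utilization, 0.8 kW average draw,
# $0.15/kWh electricity. Only the $20k and 24 tok/s come from the comment.

HARDWARE_COST = 20_000          # USD, pair of Mac Studios
TOKENS_PER_SEC = 24
POWER_KW = 0.8                  # assumed average draw, under 1 kW
ELECTRICITY_RATE = 0.15         # USD per kWh, assumed
HOURS = 3 * 365 * 24            # assumed 3-year lifetime, always on

tokens_per_hour = TOKENS_PER_SEC * 3600              # 86,400 tokens/hour
electricity_per_hour = POWER_KW * ELECTRICITY_RATE   # ~$0.12/hour
hardware_per_hour = HARDWARE_COST / HOURS            # ~$0.76/hour

cost_per_mtok = (electricity_per_hour + hardware_per_hour) / tokens_per_hour * 1e6
print(f"~${cost_per_mtok:.2f} per million tokens")   # roughly $10/Mtok
```

Even under these assumptions, hardware amortization dominates the cost, not electricity.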
In other words, a near-frontier model is running at a cost that a (somewhat wealthy) hobbyist can afford. And it's hard to imagine that the hardware costs don't come down quite a bit. I don't doubt that tokens are heavily subsidized but I think this might be overblown [1].
[1] Training models is still extraordinarily expensive, and that is certainly being subsidized, but you can amortize that cost over a lot of inference, especially once we reach a plateau of ideas and stop running training runs as frequently.
I walked into that room expecting to learn from people who were further ahead. People who’d cracked the code on how to adopt AI at scale, how to restructure teams around it, how to make it work. Some of the sharpest minds in the software industry were sitting around those tables. And nobody has it all figured out.
People who say they have are trying to mess with your head.

This is one of the most interesting questions right now, I think.
I've been taking on much more significant challenges in areas like frontend development, ops, automation, and even UI design, now that LLMs mean I can be much more of a generalist.
Assuming this works out for more people, what does this mean for the shape of our profession?
I don't think you can find that level of ego anywhere in the software industry, or any other industry for that matter. Respect.
The text is actually about the Thoughtworks Future of Software Development retreat.
[0] Which is not even enough; these are the ones with truly excess money to burn.
Here’s a free idea I’ve had that I have no idea how to implement. I hope somebody much smarter than me will come along, think it’s a great idea, and steal it. I highly encourage you to do so, and I wish you well.
The idea is to have some kind of substrate—like a superpowered AST—that is the true code: the thing that actually gets compiled and run. Humans never look at this directly. Instead, we look at a representation of this code, and we can toggle between different representations of it.
I’m borrowing ideas from topology in mathematics here: if I look at a shape one way, I should be able to transform it into a different shape, but isomorphically, everything is still the same. That would let me look at the same thing in different ways, understand it from different angles, critique it more easily, and maintain it more easily.
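As a minimal sketch of the "one substrate, many views" idea, assuming a toy expression AST (everything here is hypothetical, just to make the shape concrete):

```python
from dataclasses import dataclass

# The substrate: a tiny expression AST. This is the "true code"
# that gets compiled and run; humans never read it directly.
@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

# Two isomorphic views of the same substrate. Neither view is the
# code; both are renderings of it, and you could toggle between them.
def as_infix(node) -> str:
    if isinstance(node, Num):
        return str(node.value)
    return f"({as_infix(node.left)} + {as_infix(node.right)})"

def as_sexpr(node) -> str:
    if isinstance(node, Num):
        return str(node.value)
    return f"(+ {as_sexpr(node.left)} {as_sexpr(node.right)})"

tree = Add(Num(1), Add(Num(2), Num(3)))
print(as_infix(tree))   # (1 + (2 + 3))
print(as_sexpr(tree))   # (+ 1 (+ 2 3))
```

The hard part this sketch dodges is the reverse direction: letting you edit any view and mapping that edit back into the substrate.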
Gemini tells me that this idea has already been tried in the past: projectional editing? Intentional Programming?
I do like the idea that "all code is tech debt", and we shouldn't want to produce more of it than we need. But it's also worth remembering that debt is not bad per se: buying a house with a mortgage is also debt, and it can be a good choice for many reasons.
Now producing code is _cheap_. You can write and run code in an automated way _on demand_. But if you do that, you have essentially traded upfront cost for runtime cost. It's really only worth it if the work is A) high value and B) intermittent.
There is probably a formula you could write to figure out when this trade-off makes sense and when it doesn't.
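One hedged version of that formula, treating it as a simple break-even between upfront codification cost and per-run savings (all names and numbers below are invented for illustration):

```python
def should_codify(dev_cost: float, agent_cost_per_run: float,
                  code_cost_per_run: float, expected_runs: int) -> bool:
    """Codify the workflow when the upfront cost is recovered by
    the per-run savings over the expected number of runs."""
    savings_per_run = agent_cost_per_run - code_cost_per_run
    return expected_runs * savings_per_run > dev_cost

# E.g., a $2 agent run vs. a $0.02 codified run, $500 to codify:
# break-even is at ~253 runs, so a weekly job isn't worth it,
# but a per-commit job is.
print(should_codify(500, 2.00, 0.02, 52))    # False (weekly, one year)
print(should_codify(500, 2.00, 0.02, 5000))  # True  (per-commit)
```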
I'm working on a system where we can just chuck out autonomous agents onto our platform with a plain text description, and one thing I have been thinking about is tracking those token costs and figuring out how to turn agentic workflows into just normal code.
I've been thinking about running an agent that watches the other agents for cost and reads their logs on a schedule, to see if any of what the agents are doing can be codified and turned into a normal workflow, and possibly even _writing that workflow itself_.
It would be analogous to the JVM optimizing hot-path functions.
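For what it's worth, here is a rough sketch of that watcher, assuming agent runs emit one JSON line per run with their ordered tool calls and token cost (the log format, field names, and threshold are all invented):

```python
import json
from collections import Counter
from pathlib import Path

# Assumed log format: one JSON object per line, each with the ordered
# list of tool calls an agent made during a run, plus its token cost.
CODIFY_THRESHOLD = 20  # invented: repeats before we flag a workflow

def find_codification_candidates(log_dir: str):
    patterns = Counter()
    costs = Counter()
    for log_file in Path(log_dir).glob("*.jsonl"):
        for line in log_file.read_text().splitlines():
            run = json.loads(line)
            # A run that always makes the same tool-call sequence is
            # deterministic in practice: a candidate for plain code.
            signature = tuple(call["tool"] for call in run["tool_calls"])
            patterns[signature] += 1
            costs[signature] += run["token_cost_usd"]
    return [
        {"sequence": sig, "runs": n, "total_cost": costs[sig]}
        for sig, n in patterns.most_common()
        if n >= CODIFY_THRESHOLD
    ]

# Each candidate is a hot path: hand it to a human (or another agent)
# to turn into a normal, tokenless workflow.
```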
What I do know is that what we are doing for a living will be near unrecognizable in a year or two.
A useful complement is the programmer-level shift: agents are great at narrow, reversible work when verification is cheap. Concretely, think small refactors behind golden tests, API adapters behind contract tests, and mechanical migrations with clear invariants. They fail fast in codebases with implicit coupling, fuzzy boundaries, or weak feedback loops, and they tend to amplify whatever hygiene you already have.
So the job moves from typing to making constraints explicit and building fast verification, while humans stay accountable for semantics and risk.
If useful, I expanded this “delegation + constraints + verification” angle here: https://thomasvilhena.com/2026/02/craftsmanship-coding-five-...
Personally, I'm more interested in whether software development has become more or less pay-to-win with LLMs.
When we have solid tests, the agent output is useful and we can trust it. When tests are thin or missing, the agents still ship a lot of code, but we spend way more time debugging and fixing subtle bugs.
I agree that AI tools are likely to amplify the importance of quick cycles and continuous delivery.
Local or self-hosted LLMs will ultimately be the future. Start learning how to build up your own AI stack and use it day to day. Hopefully hardware catches up so that running LLMs on-device eventually becomes the norm.
The part that's tricky is that the slow lane and the fast lane look identical in a PR. The framework only works if it's explicit enough to survive code-review fatigue and context switching. And most teams are figuring that out as they go.
> One large enterprise employee commented that they were deliberately slow with AI tech, keeping about a quarter behind the leading edge. “We’re not in the business of avoiding all risks, but we do need to manage them”.
I’m unclear how this pattern helps with security vis-à-vis LLMs. It makes sense when talking about software versions, in hoping that any critical bugs are patched, but prompt injection springs eternal.
This isn't a case where you have specific code/capital you have borrowed and need to pay for its use or give it back. This is flat-out putting liabilities into your assets that will have to be discovered and dealt with, someday.
Chinese open-source models are dirt cheap; you can buy $20 worth of kimi-k2.5 on opencode, spam it all week, and barely make a dent.
Assuming we never get bigger models but hardware keeps improving, we'll either be serving current models for pennies, or at insane speeds, or both.
The only actual situation where tokens are being subsidized is free tiers on chat apps, which are largely irrelevant for any sort of useful economic activity.
Token costs are also non-trivial. Claude can exhaust a $20/month session limit on one difficult problem (it didn't even write code, just planned). Each engineer needs at least the $200/month plan; I have multiple plans from multiple providers.