I genuinely challenge someone spending $5-$10k a month to demonstrate how that turns into $50-$100k in value. At a corporate level, I'd much rather hire a junior engineer who spends $100-$200/month and becomes productive than try to rationalize $100k/year in token spend.
> which means figuring out if the company can afford this level of productivity at scale.
If it were actually productive, then revenue would increase and affordability wouldn't be a question.
Well, that’s to be expected when using AI tools becomes relevant in your performance evaluation.
This is the thing that boggles my mind. They spent their budget. They have 4 months of data. What do they have to show for it?
I'm not a hater; I'm not a luddite. I have a $200 Max plan and I use it.
But are you saying that Uber made this tool available, urged everybody to use it, and is confused about what happens when it worked? It's one thing if they decide AI isn't productive enough to be worth the cost.
Are they out of ideas on what to build next, or something?
Yes, productivity implies revenue (or cost reduction), and revenue is measurable.
However:
1. You spend money today to build features that drive revenue in the future, so when expenses go up rapidly today, you don’t yet have the revenue to measure.
2. It’s inherently a counterfactual consideration: you have these features completed today, using AI. You’re profitable/unprofitable. So AI is productive/unproductive, right? No. You have to estimate what you would’ve gotten done without AI, and how much revenue you would’ve had then.
3. Business is often a Red Queen’s race. If you don’t make improvements, it’s often the case that you’ll lose revenue, as competitors take advantage.
4. Most likely, AI use is a mixture of working on things that matter and people throwing shit against the wall “because it’s easy now.” Actually measuring the potential productivity improvements means figuring out how to keep the first category and avoid the second.
This isn’t me arguing for or against AI. It’s just me telling you not to be lazy and say “if it were productive you’d be able to measure it.”
The AI spend does not appear to be a significant chunk of R&D spending (0.3% in 4 months or 1% annualized). If they didn't plan for it, sure, it's not peanuts in the budget, but in context not that much.
The real question is, what did they get for that amount? The article claims that 70% of committed code is now AI-generated, so presumably the code passed review and tests. Did it accelerate feature delivery? Did it reduce quality problems? Did it lead to other benefits?
Sadly the article is silent on the outcomes, besides the higher spend.
Maybe 4 months is too soon to assess the benefits. On the other hand, in an agile world ...
If I were an engineer at Uber, why wouldn't I select gpt 5.5 pro @ very high thinking + fast mode for a prompt? There's no incentive not to use the most powerful (and thus most expensive) model for even the smallest of changes.
I tried one of these prompts for some tests I'm doing for image->html conversion, and a single prompt cost me $40. Paying for it myself, I'd pretty much never use this configuration. At a large company where someone else is footing the bill, I'd spin these up regularly (the output was significantly better, fwiw). Engineers are rated on what they deliver, not on the expenditure it took to get there.
There are ways to do this cheaply, but there are no incentives for engineers to do so.
1. You get out of it what you put into it. A savvy CTO might be incredibly excited by everything they can do with agents and wrongly assume that all their software engineers can do the same, when in reality the org's average engineer might not have the creativity to even think of the cases where it could save them work. So by mandating agent usage, you might find that productivity hasn't improved while AI costs have increased.
2. When using AI, two gaps become more obvious. First: who tells the agent what to do? In many orgs, product isn't technically savvy enough to come up with a detailed spec/plan that an LLM can use, and many cog-in-the-machine developers aren't positioned to come up with the spec; they just want to implement it. By expecting work to be implemented by agent-using developers, you might instead find a lot of idle workers waiting for work to show up. Second is the QA/review cycle. You've introduced a big change to the org, but are you really saving cost or just shifting it?
I'm all for introducing LLM as optional to help existing developers increase velocity and quality, but I think the "let's restructure the org" movement is really dicey, especially for mid-size or smaller employers.
They gave up on self-driving, so that's not it.
At the same time the subscription will allow the same usage for hundreds of dollars a month.
Either Anthropic is absolutely hosing API users, massively subsidizing subscriptions, or a little bit of both.
I've been able to get by with the $20pm Pro subscription and reap great value out of Claude Code.
I feel like it really is about:
- Don't feed the works of Shakespeare into the context window if all it's working on is a few files. I actually don't have a Claude.md file in my projects.
- I write the prompt as if I were giving instructions to another developer, or to myself, on how I want a specific coding task approached, with a numbered step plan (something like the example after this list). I've actually been able to take the details written into a Jira ticket on a work project, feed them into Claude Code, and get really good results.
- If you are responsible for the output, then you need to review the output - that does put a natural constraint on the tool's usage, but ultimately it is you who uses the tool, not the other way around.
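As a purely hypothetical illustration of that kind of prompt (the endpoint, file names, and limits are all invented), it looks roughly like this:

```
Add rate limiting to the /api/quotes endpoint.
1. Read src/middleware/ratelimit.ts and src/routes/quotes.ts first.
2. Reuse the existing TokenBucket helper; don't add a new dependency.
3. Limit: 60 requests/minute per API key; return 429 with a Retry-After header.
4. Add tests mirroring tests/middleware/ratelimit.test.ts.
5. Don't touch anything outside those files.
```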
I feel like that's the thing: you have to find the right cadence, just like with running or driving a car. You need to find the level at which you're in control, maintain a consistent pace, and get code that does what you need it to do and meets the quality threshold you want.
Here's a much better article: https://aimagazine.com/news/why-uber-has-already-burned-thro...
Years ago I did work for a company that was spending over a million on Oracle product licenses, and I was part of the consultant team they hired to rip it all out and go with simple, maintainable code based on open-source products. Not only did it turn into a codebase that the average newly hired developer could maintain, there were also the savings of no longer paying Oracle a significant portion of your revenue.
I feel like that will repeat itself in a few years time with the current cloud and AI train everyone is on.
I haven't been in a professional setting for a while, I just code for fun nowadays so perhaps I'm somewhat out of the loop.
This infers value from spend, which makes no sense. Burning the budget tells us engineers like the tool, not that it's producing value.
Show me how to make two dollars whilst spending one, and budget isn't a problem.
That's...not exactly a lot per engineer. It sounds like they just didn't budget correctly. Especially if the net of that work is more features that would have otherwise required hiring more engineers, which would cost a lot more than $500 to $2000 a month.
Tokenmaxxing seems more and more like a way to encourage experimentation and learning, and incidents like this are a part of learning. Like, today devs simply use the most expensive model by default, even to do extremely simple things. This is obviously wasteful and costly, and budgets will soon be imposed, but this is how they're figuring out the economics.
For instance, just as we estimate story points, we may estimate token budgets. At that point, why waste time and money invoking a model for a simple refactor when you could do it with a few keystrokes in an IDE? And why use a frontier model when an open-source local model could spit out that throwaway script? Local models can be tokenmaxxed, but frontier models will still be needed and will be used judiciously. Those are essentially trade-offs, and will eventually be empirically driven, which is what engineering is largely about.
So economics will soon push engineers back to do what they're paid to do: engineering. Just that it will look very different compared to what we're used to.
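To make that concrete, here's a minimal sketch of what budget-aware model routing could look like; the model names, per-token prices, and capability tiers are all invented for illustration:

```python
# Hypothetical budget-aware model router: pick the cheapest model
# whose capability tier meets the task, and treat the per-task
# token budget the way we'd treat a story-point estimate.

MODELS = {
    # name: (capability tier, assumed $ per 1M tokens)
    "local-oss-14b": (1, 0.00),   # runs on a dev box, effectively free
    "mid-tier-api":  (2, 1.00),
    "frontier":      (3, 15.00),
}

def pick_model(task_tier: int, token_budget: int) -> str:
    """Cheapest model that meets the task's required capability tier."""
    candidates = [(price, name) for name, (tier, price) in MODELS.items()
                  if tier >= task_tier]
    if not candidates:
        raise ValueError("no model meets the required tier")
    price, name = min(candidates)
    est_cost = price * token_budget / 1_000_000
    print(f"{name}: est. ${est_cost:.2f} for a {token_budget:,}-token budget")
    return name

pick_model(task_tier=1, token_budget=50_000)    # throwaway script -> local model
pick_model(task_tier=3, token_budget=500_000)   # gnarly refactor -> frontier
```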
[0]: https://finance.yahoo.com/sectors/technology/articles/ubers-...
> what started as an experiment in productivity became a runaway success
and
> figuring out if the company can afford this level of productivity at scale
It seems like they're equating "developers are spending a ton of money on this" with "this is creating a ton of value".
I'm not saying that AI tools aren't valuable, but the article doesn't question this equivalence at all.
It's been like this for months. I finally got my explanation.
That's a bit of a logical leap with no demonstrable increase in productivity.
All this shows is that they're spending a lot more on AI than they budgeted for. Nothing else.
I'm considering rolling out something similar but am not sure if it would exceed the expenses of Claude Code Review at an estimated $20 per PR.
Exactly how Anthropic, OpenAI and co are selling it.
or did the engineers just chill and let claude take over daily duties? (this is also a benefit for employees in my opinion)
Surprised Pikachu moment.
And it's going to become even more expensive when AI companies start charging to actually make a profit.
And it works because it won't stop until the Rust compiles. But the code is garbage and makes bad decisions that no junior would. Unmaintainable junk, and sometimes I spend more time refactoring than if I had just built it myself.
People here are talking about generating 100k+ LoC a month, and I'm wondering if it's a skill issue with me or with Codex, or if I should pull all my investments out of companies heavily invested in AI, like Uber.
I wonder how this will end as AI becomes more expensive to use. If you can't quantify ROI then I guess you're cooked.
Also wonder if there is some perverse incentive for models to be verbose to juice tokens.
Successfully burning through cash and tokens, alright, but what have they gotten out of it?
As a founder, the question I always have is "what is the marginal value per token relative to engineer-hours saved." More of a gut feel at the moment, but would be great to calculate.
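A rough sketch of how that calculation might look, where every input is a guess (hours saved, loaded rate, blended token price):

```python
# Rough marginal-value-per-token estimate (all inputs are invented).

hours_saved     = 6           # engineer-hours an agent session saves
review_hours    = 1.5         # time spent reviewing/fixing the output
loaded_rate     = 110         # assumed loaded $/hour for the engineer
tokens_consumed = 1_500_000   # tokens burned by the session
token_cost      = tokens_consumed * 15 / 1_000_000  # assumed $15/M blended

value = (hours_saved - review_hours) * loaded_rate
print(f"value ${value:.0f} vs token cost ${token_cost:.2f} "
      f"-> ${value / tokens_consumed * 1_000_000:.0f} per 1M tokens")
# -> value $495 vs token cost $22.50 -> $330 per 1M tokens
```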
1 - Company mandate, start using AI
2 - You're afraid? Here's a mandate!
3 - (Devs and others discover Claude Code features where the coolest burn mad tokens)
4 - Um, yeah we're going to have to take a look at the spend here
5 _
What's 5?
We know steps 3 and 4 will cycle a bit more, and we know it's going to cost more - these were startup teaser costs.
When you enter a single inquiry like "find and fix the memory leak in the billing service," you are not submitting just one request. The tool searches the entire code repository for relevant code, pulls 15 related files into context (easily 200k+ tokens), proposes a fix, runs the test suite, fails, takes the entire stack trace of errors into context, and loops to keep iterating toward a solution. That process can loop many times (10+) in a very short period of time (within 5 minutes). While you grab a cup of coffee, you will have consumed $20 in token usage. At the enterprise level (like Uber), multiply that out by thousands of software developers using it as a personal shell tool and your budget disappears very, very quickly.
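A minimal sketch of why that loop compounds, with invented token counts and a hypothetical $15-per-million blended rate:

```python
# Hypothetical agentic fix-the-bug loop. Every iteration re-sends the
# accumulated context, so cost grows roughly quadratically with retries.

PRICE_PER_TOKEN = 15 / 1_000_000   # assumed blended $/token

context_tokens = 200_000           # 15 files pulled into context up front
total_tokens = 0

for attempt in range(1, 11):       # 10 fix-and-retest iterations
    total_tokens += context_tokens # full context re-sent each turn
    context_tokens += 8_000        # stack trace + diff appended per failed run

print(f"tokens: {total_tokens:,}, cost: ${total_tokens * PRICE_PER_TOKEN:.2f}")
# -> tokens: 2,360,000, cost: $35.40 -- for one coffee-break bug fix
```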
And on your point about the junior developer: comparing $100,000/year in tokens to hiring a junior developer is such a ridiculous false equivalence that it makes you question whether they understand how to make such a comparison.
The cost to a business of one junior engineer with a $100,000 salary is not just the $100,000 in salary but also an additional $40,000+ in benefits, taxes, and hardware.
You're also disregarding another cost of hiring junior engineers: mentorship. Each week, your senior and staff engineers spend hours mentoring juniors by reviewing their code, pairing with them, and unblocking their progress. That takes substantial time and is expensive for the business.
The ROI on the $10,000 monthly token expenditure is not so much about replacing a junior engineer with AI. It's that your senior engineers can use that huge amount of compute to create boilerplate and tests and refactor their code 3x faster than if they had to mentor junior engineers. In addition, LLMs don't sleep, don't require one-on-ones, and don't leave for another company for 20% more pay in 18 months, just when their familiarity with the codebase has made them an asset to your business.
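A back-of-envelope comparison under the assumptions above (the mentorship hours and senior rate are invented for illustration):

```python
# Fully-loaded junior engineer vs. token spend, per year (illustrative).

junior_salary  = 100_000
benefits_taxes =  40_000          # ~40% load, per the estimate above
mentorship_hrs = 5 * 48           # assumed 5 senior hours/week, 48 weeks
senior_rate    = 120              # assumed loaded $/hour for a senior
junior_total   = junior_salary + benefits_taxes + mentorship_hrs * senior_rate

token_spend    = 10_000 * 12      # $10k/month in tokens

print(f"junior, fully loaded: ${junior_total:,}")   # $168,800
print(f"tokens:               ${token_spend:,}")    # $120,000
```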
Lastly, the main reason Uber has a problem with its AI bill is the UX of these agentic tools: developers think of the API calls to the AI as free and, as a result, treat them like a basic grep command.
They are using it to mean a mechanism that produces prodigious amounts of toxic waste. That does not conform to the historical understanding of the word.
... but the key fact about "$500-$2000" per engineer does not appear there, and seems to be fabricated.
Where oh where can I find clients like these??
I’ve been using all these tools since they started popping up around 2021, personally and professionally. I've probably built four or five products at this point with assistance, not to mention the thousands and thousands of back-and-forth conversations for research or search or rubber ducking or whatever.
I have never spent more than whatever the professional max plan is that is consistently $20 a month.
I asked a friend of mine who spent a couple hundred dollars in a few hours how they did it. The answer was that they were basically getting groups of agents stuck in a loop, constantly generating verbose bullshit that is never interrogated and doesn't produce any inspectable artifact, no matter how expert you are.
The couple of stories I have heard of these massive crazy spends are people literally just assuming these things can complete an entire human task in one shot, so they continue to hit the “spin the wheel” button until they get something closer to what they want
But I’ve yet to see that actually work
and it actually flies in the face of every instruction guide or documentation or prompt engineering process that has been described over the last almost 5 years
How are they calculating that? They could be using my tool, Buildermark, but I don't think they are: https://buildermark.dev