This article also doesn't make a convincing case for this being a huge mistake. Companies like Uber change their architectural decisions all the time while they scale. Provided it didn't kill the company, stuff like this becomes part of the story of how they got to where they are.
Related: the classic line commonly attributed to IBM's first CEO, Thomas J. Watson Sr.:
“Recently, I was asked if I was going to fire an employee who made a mistake that cost the company $600,000. No, I replied, I just spent $600,000 training him. Why would I want somebody else to hire his experience?”
> Somebody Should Have Been Fired For This
This person is not a good resource. Uber was a very fast-growing company, in terms of both its product and its staff. Turnover in architecture happens. Calling this a catastrophe and clickbaiting about firing engineers over a rounding error in Uber’s overall finances is gross.
I understand this person is trying to grow their Substack with these inflammatory claims, but I hope HN readers aren’t falling for it. This person’s takes are bad, and they’re making them to get you to subscribe. This is hindsight engineering from someone who wasn’t there.
People forget how quickly Uber scaled, and the user impact of not being able to track your trips could have been catastrophic for retention. There's a class of tech influencer who thinks they can dissect past decisions in a blog post without being in the room when the technical constraints were being laid out. This is Monday-morning quarterbacking at its most grotesque.
The financial criticism ('napkin math') appears to estimate DynamoDB costs of US$8 million for 2017 to 2020. Uber's revenue for the same period was roughly US$42.5 billion, so this cost weighs in at about 0.02%, or 1/50th of one percent. That is a rounding error for a high-growth company, not something that warrants a witch-hunt and firings. It's easy to blow more than $2 million per year on software engineers in pursuit of an alternative high-scalability solution.
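For what it's worth, the math checks out; both inputs below are rough estimates (the article's cost figure and public revenue reports), not audited numbers:

```python
# Sanity check of the napkin math above; both inputs are rough estimates.
dynamodb_cost = 8_000_000          # article's DynamoDB estimate, 2017-2020
uber_revenue = 42_500_000_000      # rough public revenue for the same period
print(f"{dynamodb_cost / uber_revenue:.4%}")  # ~0.0188%, about 1/50th of 1%
```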
I'm also not on board with the 'resume-driven development' criticism as the explanation for the solution churn. Perhaps that is actually what happened; I wasn't there and don't know. But if that is being asserted, I expect to see evidence presented to support it.
I wiped out VAT on all orders, and for the next month the paper invoices went out without VAT. So if an invoice was $100 and VAT was $20, it should have gone out as $120, but it was sent as $100.
100s of invoices every day would be my guess.
Nobody noticed.
For a month.
Millions of dollars of revenue and IIRC millions of dollars of VAT.
Until a customer complained to the CEO.
We had a firefight to fix it, not just technical but legal and managerial. We can send a new invoice just for the tax. We can redo the invoice. We can send a debit memo. Which is the right decision? But what if the customer does not pay? What about returns? How will we track them? Of course we were doing the technical work while the client company was front-ending how to handle it business-wise.
And the managerial firefight - who did it, what are the safeguards in future? We had a company exec visit the client site to manage the issue.
I was in the hot seat, but I was protected by my managers from any fallout. Just do the work. Do not screw up again. (Test every row and every column, even if you did not change it.)
A month later the sales director at the client company got fired.
The grapevine is that this was just the tipping point, but you never know. BTW these were paper invoices printed onsite and mailed out, but I do not know if someone had the job to scrutinize them.
PS: True story, going by old memory, although such legends remain fresh in your mind forever. Not sure it belongs here, but the mention of firing for a multi-million-dollar mistake pulled this into cache memory.
At least when I worked at Uber, that wasn't really how it worked. The eng org was so big that it was nearly impossible to track all the projects people worked on, and you'd get micro-ecosystems of tools because of it.
Some grew large, others stayed quite "local".
Hindsight is 20/20. Not saying they did the right thing, but they may have had specific performance reasons for originally going with DynamoDB.
> But nobody was optimizing for cost. They were optimizing for their next promotion. Each rewrite was a new proposal, a new design doc, a new system to put on a resume. The incentive was never to pick the boring, correct choice — it was to pick the complex, impressive one.
...I guess it could be possible nobody thought about cost at all, and this was all misaligned incentives and resume-driven development, but I find that kind of hard to believe? As someone who has made cost mistakes in the cloud, this claim seems a bit silly.
Not to detract from his experience, but I didn't actually see much payments experience at all on his resume, so I'm curious why he's branding himself as a payments guru. Kind of tech-content-creator fluff, I guess.
If you don’t have price controls, it’s easy to run up a bill.
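For concreteness, here's a minimal sketch of one such price control, an AWS Budgets alert created via boto3. The account ID, dollar cap, filter, and email address are all placeholders, not anything from the article:

```python
import boto3

# Hypothetical guardrail: alert at 80% of a monthly DynamoDB spend cap.
client = boto3.client("budgets")
client.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "dynamodb-monthly-cap",
        "BudgetLimit": {"Amount": "50000", "Unit": "USD"},  # placeholder cap
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        "CostFilters": {"Service": ["Amazon DynamoDB"]},
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "eng@example.com"}
            ],
        }
    ],
)
```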
If no single person had the responsibility to check the cost, then no one actually failed at their assigned job. So you either fix the system or fire everyone involved in the decision.
What you’re doing now is looking for a scapegoat to beat up. You’re angry, and you’re going to make someone pay for pissing you off.
> Somebody Should Have Been Fired For This
> And nobody got fired for this?
Not once do they explain what firing someone would actually improve?!
Oh, it was $8m over several years? That's a couple of projects that didn't pan out, or a small team that wasn't firing on all cylinders for a stretch.
Nobody got fired because there was nothing unusual to fire anyone for.
> At Uber’s scale, DynamoDB became expensive. Hence, we started keeping only 12 weeks of data (i.e., hot data) in DynamoDB and started using Uber’s blobstore, TerraBlob, for older data (i.e., cold data). TerraBlob is similar to AWS S3. For a long-term solution, we wanted to use LSG.
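That hot/cold split is easy to picture. Here's a hypothetical sketch with S3 standing in for TerraBlob; the table, bucket, key, and attribute names are all invented, and a real system would drive this from a time-keyed index rather than a full table scan:

```python
# Hypothetical sketch of the 12-week hot/cold tiering described above.
# S3 stands in for TerraBlob; all names here are invented, not Uber's.
import json
import time

import boto3

HOT_WINDOW_SECS = 12 * 7 * 24 * 3600   # keep ~12 weeks of data "hot"

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")
table = dynamodb.Table("trip-ledger")   # hypothetical hot store

def archive_cold_entries() -> None:
    """Move ledger entries older than the hot window to blob storage."""
    cutoff = time.time() - HOT_WINDOW_SECS
    # A real system would query a time-keyed index, not scan the table.
    for item in table.scan()["Items"]:
        if float(item["created_at"]) < cutoff:
            s3.put_object(
                Bucket="trip-ledger-archive",           # cold store
                Key=f"ledger/{item['entry_id']}.json",
                Body=json.dumps(item, default=str),
            )
            table.delete_item(Key={"entry_id": item["entry_id"]})
```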
Honest question: why do people go for this kind of complicated solution? Wouldn't Postgres work? Say Uber does roughly 15 million trips a day and each trip creates 10 ledger entries, each its own transaction. That's 150 million transactions a day, or about 2,000 TPS. Postgres can handle that, can't it?
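A quick sketch of that arithmetic; the trip count is my assumption, not a figure from the article:

```python
# Back-of-envelope ledger throughput; all inputs are assumptions.
trips_per_day = 15_000_000       # assumed ~15M trips/day
entries_per_trip = 10            # assumption from the comment above
tx_per_day = trips_per_day * entries_per_trip   # 150M transactions/day
avg_tps = tx_per_day / 86_400    # seconds in a day
print(f"~{avg_tps:,.0f} TPS average")           # ~1,736 TPS; call it 2,000
# Peaks run several times the average, but that's still within reach of a
# well-tuned single-primary Postgres with partitioned ledger tables.
```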
If regional replication or global availability is the problem, I have to ask: why does it matter? For something as critical as a ledger, does it hurt to make the user wait a few hundred milliseconds if that means you can have a simple and robust ledger service?
I honestly want to know what others think about this.
Outside of that, it sounds like the system worked perfectly. They launched, they paid DB costs (the $8M was not a ledger mistake), and then they rebuilt when they wanted more cost savings. Also, a bunch of folks got promoted.
The $8M came from VCs lighting money on fire. Honestly, this seems to me like the system working as planned, not a case study in how not to do things.
I mean, given how quickly things can change, I don't think the language and sentiment here are quite right. This is just how businesses change, and we can't necessarily control that.