You could cut your MongoDB costs by 100% by not using it ;)
> without sacrificing performance or reliability.
You're using a single server in a single datacenter. MongoDB Atlas is deployed to VMs on 2-3 AZs. You don't have close to the same reliability. (I'm also curious why their M40 instance costs $1000, when the Pricing Calculator (https://www.mongodb.com/pricing) says M40 is $760/month? Was it the extra storage?)
> We're building Prosopo to be resilient to outages, such as the recent massive AWS outage, so we use many different cloud providers
This means you're going to have multiple outages, AND incur more cross-internet costs. How does going to Hetzner make you more resilient to outages? You have one server in one datacenter. Intelligent, robust design at one provider (like AWS) is far more resilient, and intra-zone transfer is cheaper than sending traffic out over the internet ($0.02/GB vs $0.08/GB). AWS is not inherently a centralized, single-point-of-failure design. They're not dummies; plenty of their services are operated independently per region. But they do expect you to use their infrastructure intelligently to avoid creating a single point of failure yourself. (For example, during the AWS outage, my company was in us-east-1, and we never had any issues, because we didn't depend on calling AWS APIs to keep operating. Things already running continue to run.)
I get it; these "we cut bare costs by moving away from the cloud" posts are catnip for HN. But they usually don't make sense. There are only a few circumstances where you really have to transfer out a lot of traffic, or need very large storage, where cloud pricing is just too much of a premium. The whole point of using the cloud is to use it as a competitive advantage. Giving yourself an extra role (sysadmin) on top of your day job (developer, data scientist, etc.) and more maintenance tasks (installing, upgrading, patching, troubleshooting, going on-call, etc.) with lower reliability and fewer services isn't an advantage.
Their bare-metal servers don't have storage encryption by default, and I don't know for sure about the VM hosts, since I don't have access, but Hetzner never claims that data is encrypted at rest. There is also no mention of storage encryption in their data protection agreement. https://www.hetzner.com/AV/DPA_en.pdf
Also, their data privacy FAQ mentions "you as the customer are responsible for both the data that is stored on your rented server and for the encryption of that data." https://docs.hetzner.com/general/general-terms-and-condition...
I would recommend setting up LUKS on your server, just in case. You will find many guides for doing this on Hetzner.
If you don't, seeing your data in the wild is a real scenario. A few years ago, a YouTuber bought some used hard drives hoping to recover data from them, to illustrate the risks of not erasing a drive correctly. He eventually bought a drive containing unencrypted VM disks from Scaleway, a Hetzner competitor. My guess is that some drives disappeared after decommissioning, before destruction. Some customers got their shitty source code exposed in a video with 1.4M views. Here is the first one: https://www.youtube.com/watch?v=vt8PyQ2PGxI
So, use LUKS.
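For reference, a minimal sketch of what that looks like for a dedicated data disk. The device name, mapper name, and mount point below are placeholders, and `luksFormat` is destructive, so triple-check the target before running anything like this:

```shell
# WARNING: luksFormat destroys everything on the target device.
# /dev/sdb, "data_crypt", and the mount point are placeholders.
cryptsetup luksFormat --type luks2 /dev/sdb     # prompts for a passphrase
cryptsetup open /dev/sdb data_crypt            # unlock -> /dev/mapper/data_crypt
mkfs.ext4 /dev/mapper/data_crypt               # filesystem inside the container
mount /dev/mapper/data_crypt /var/lib/mongodb  # point your DB's dbPath here
```

Note that encrypting the root disk of a remote box additionally requires remote unlocking at boot (e.g. dropbear in the initramfs), which the Hetzner-specific guides cover.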
I love Hetzner for what they offer, but you will run into huge outages pretty soon. At a minimum you need two different network zones on Hetzner and three servers.
It's not hard to set up, but you need to do it.
I feel like some of these articles miss a few points, even this one. The monthly cost of the MongoDB hosting was around $2k... that's less than a full-time employee's salary, and if it spares you the cost of an employee, that's not a bad thing.
On the flip side, if you have employee talent that is already orchestrating Kubernetes across multiple clouds, then sure it makes sense to internalize services that would otherwise be external if it doesn't add too much work/overhead to your team(s).
In either case, I don't think the primary driver in this is cost at all. Because that 90% quoted reduction in hosting costs is balanced by the ongoing salary of the person or people who maintain those systems.
Just fixed a bug on my MongoDB instance last night: a config error with self-signed certs (the hostname in the replica set config has to match the CN on the cert) caused MongoDB to rocket to 400% CPU utilization (3x 8GB/4vCPU dedicated boxes on DO) due to a weird election loop in the replica set process. Fixing that and adding a few missing indexes brought it down to ~12% on average. Simple mistakes, sure, but the real-world cost of those mistakes is brutal.
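That mismatch is cheap to catch before it bites. A sketch with openssl — the hostname, file names, and the idea that `RS_HOST` came out of your `rs.conf()` members list are all made up for illustration:

```shell
# Hypothetical host string as it would appear in rs.conf() members[n].host
RS_HOST="mongo1.example.internal"

# Generate a throwaway self-signed cert with that CN (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=${RS_HOST}" 2>/dev/null

# Extract the CN from the cert and compare it to the replica-set host
# (strip any :port suffix from the host string first)
CERT_CN=$(openssl x509 -in cert.pem -noout -subject -nameopt multiline \
  | awk -F'= ' '/commonName/ {print $2}')

if [ "$CERT_CN" = "${RS_HOST%%:*}" ]; then
  echo "OK: cert CN matches replica-set hostname"
else
  echo "MISMATCH: cert CN '$CERT_CN' vs rs host '$RS_HOST'"
fi
```

Running a check like this against each member's cert before (re)configuring the replica set would have flagged the election loop's root cause immediately.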
They are known to just cancel accounts and cut access.
I moved to another employer that was using Atlas, and the bill rivaled AWS. Unfortunately it was too complex to untangle.
But, there's the time to set all of this up (which admittedly is a one-time investment and would amortize).
And there's the risk of having made a mistake in your backups or recovery system (Will you exercise it? Will you continue to regularly exercise it?).
And they're a 3-person team... is it really worth your limited time/capacity to do this, rather than do something that's likely to attract $3k/mo of new business?
If the folks who wrote the blog see this, please share how much time (how many devs, how many weeks) this took to set up, and how the ongoing maintenance burden shapes up.
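On the "will you exercise your backups" point: the drill itself can be small. A sketch with made-up URIs, ports, and collection names, assuming the standard MongoDB tooling (mongodump/mongorestore/mongosh):

```shell
# Dump the primary (URI and archive path are placeholders)
mongodump --uri="mongodb://db1.internal:27017" \
  --archive=/backups/nightly.archive

# Restore into a scratch instance on a different port -- the step most
# teams never actually rehearse
mongorestore --uri="mongodb://localhost:27018" --drop \
  --archive=/backups/nightly.archive

# Sanity-check a collection whose rough size you know
mongosh --port 27018 --quiet \
  --eval 'db.getSiblingDB("app").orders.countDocuments()'
```

Even run monthly by hand, something like this answers the "would restore actually work?" question before an outage does.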
[1] https://docs.hetzner.com/robot/dedicated-server/general-info...
They also leaned too heavily on sharding as a universal solution to scaling as opposed to leveraging the minimal cost of terabytes of RAM. The p99 latency increase, risk of major re-sharding downtime, increased restore times, and increased operational complexity weren't worth it for ~1 TB datasets.
Then again, Next.js + Vercel + GitHub are awfully convenient.
One of the biggest issues was cost, but we were treated like first-class citizens: the support was good, and we saw constant updates and features. Using Atlas Search was fantastic because we didn't have to replicate the data to another system for fast searching.
Before Atlas we were on Compose.io, and Mongo there just withered; we were plagued by performance issues.
I mean, you're connecting to your primary database potentially on another continent? I imagine your costs will be high, but even worse, your performance will be abysmal.
> When you migrate to a self-hosted solution, you're taking on more responsibility for managing your database. You need to make sure it is secure, backed up, monitored, and can be recreated in case of failure or the need for extra servers arises.
> ...for a small amount of pain you can save a lot of money!
I wouldn't call any of that "a small amount of pain." To save $3,000/month you've now required yourselves to become experts in a domain that may be out of your depth. So whatever cost you saved is now tech debt, plus potentially having to hire someone else to manage your homemade solution for you.
That said, I self-host, and applaud other self-hosters. But it really has to make business sense for your team.
The number one thing people poo-pooing these "We saved $XXX by getting off public cloud" posts miss is that each business has a different calculus for its risk tolerances, business needs, and opportunity costs. Once a function reaches some form of stability or homeostasis, hosting it in the public cloud can become a net liability rather than a net asset. Being able to make those decisions impartially is what separates the genuinely good talent from those who conflate TC with wisdom.
Even when public cloud is the right decision, using managed services increasingly isn't. MongoDB Atlas is a managed service with a price tag to match. Running it on a provider like Hetzner may shift some of the maintenance and support onto your team, but let's be real: modern databases are designed to be bulletproof, and huge companies operated just fine with a single database instance on bare metal for decades, even with the odd downtime along the way. We ran a MongoDB CE database at a prior company on a single VM in a single datacenter for nearly a decade, and it underpinned a substantial chunk of our operations - operations we could do by hand, if needed, during downtime or outages (which never happened). We eventually moved it to AWS DocumentDB not for cost savings or out of necessity, but because a higher-up demanded it.
If anything, the visceral rebuke of anyone daring to move off public cloud feels very reminiscent of my own collegial douchebagginess in the 2000s, loudly mocking Linux stans and proclaiming closed source (Microsoft) would run the planet. Past-me was a douchebag then, and the same applies to the AWS-stans of today.
Docker/TypeScript/Node: Is there a provider similar to Hetzner but US-based?
You set up your server. Harden it. Follow all the best practices for your firewall with ufw. Then you run a Docker container. Accidentally, or simply because you don’t know any better, you bind it to 0.0.0.0 by doing 5432:5432. Oops. Docker just walked right past your firewall rules, ignored ufw, and now port 5432 is exposed with default Postgres credentials. Congratulations. Say hello to Kinsing.
And this is just one of many possible scenarios like that. I’m not trying to spread FUD, but this really needs to be stressed much more clearly.
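To make the failure mode concrete (the image and port here are just examples):

```shell
# DANGEROUS: -p 5432:5432 binds the published port to 0.0.0.0. Docker
# programs iptables directly (its own DOCKER chain), which is evaluated
# before ufw's rules, so "ufw deny 5432" does not protect this port.
docker run -d -p 5432:5432 postgres

# Safer: bind the published port to loopback, so it's reachable from the
# host (or via an SSH tunnel) but not from the internet.
docker run -d -p 127.0.0.1:5432:5432 postgres
```

If the container only needs to talk to other containers, a user-defined Docker network with no published port at all is safer still.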
EDIT. as always - thank you HN for downvoting instead of actually addressing the argument.
Thing is, Linode was great 10-15 years ago, then enshittification ensued (starting with Akamai buying them).
So what does enshittification for Hetzner look like? I've already got migration scripts pointed at their servers but can't wait for the eventual letdown.
In all seriousness, this is a recurring pattern on HN and it sends the wrong message. It's almost as bad as vibecoding a paid service and losing private customer data.
There was a thread here a while ago, 'How We Saved $500,000 Per Year by Rolling Our Own "S3"' [1]. Then they promptly got hacked. [2]
[1] https://engineering.nanit.com/how-we-saved-500-000-per-year-...
[2] https://www.cbsnews.com/colorado/news/colorado-mom-stranger-...