For client projects, however, I always try to sell them on paying the AWS fees, simply because it shifts the responsibility for the hardware being "up" to someone else. It does not inherently solve the downtime problem, but it allows me to say, "we'll have to wait until they've sorted this out; IKEA and Disney are down, too."
Doesn't always work like that and isn't always a tried-and-true excuse, but generally lets me sleep much better at night.
With limited budgets, however, it's hard to accept the cost of RDS (and that's with at least one staging environment) when comparing it to a very tight 3-node Galera cluster running on Hetzner at barely a couple of bucks a month.
Or Cloudflare, titan at the front, being down again today and the past two days (intermittently), after also being down a few weeks ago and earlier this year as well. Also had SQS queues time out several times this week; they picked up again shortly, but it's not like those things ...never happen on managed environments. They happen quite a bit.
> If you're just starting out in software & want to get something working quickly with vibe coding, it's easier to treat Postgres as just another remote API that you can call from your single deployed app
> If you're a really big company and are reaching the scale where you need trained database engineers to just work on your stack, you might get economies of scale by just outsourcing that work to a cloud company that has guaranteed talent in that area. The second full freight salaries come into play, outsourcing looks a bit cheaper.
This is funny. I'd argue the exact opposite. I would self host only:
* if I were on a tight budget, where trading an hour or two of my time for a cost saving of a hundred dollars or so is a good deal; or
* at a company that has reached the scale where employing engineers to manage self-hosted databases is more cost effective than outsourcing.
I have nothing against self-hosting PostgreSQL. Do whatever you prefer. But to me outsourcing this to cloud providers seems entirely reasonable for small and medium-sized businesses. According to the author's article, self hosting costs you between 30 and 120 minutes per month (after setup, and if you already know what to do). It's easy to do the math...
- Backups: the provider will push a full generic disaster-recovery backup of my database to an off-provider location at least daily, without the need for a maintenance window
- Optimization: index maintenance and storage optimization are performed automatically and transparently
- Multi-datacenter failover: my database will remain available even if part(s) of my provider are down, with a minimal data loss window (like, 30 seconds, 5 minutes, 15 minutes, depending on SLA and thus plan expenditure)
- Point-in-time backups are performed at an SLA-defined granularity and with a similar retention window, allowing me to access snapshots via a custom DSN, not affecting production access or performance in any way
- Slow-query analysis: notifying me of relevant performance bottlenecks before they bring down production
- Storage analysis: my plan allows for #GB of fast storage, #TB of slow storage: let me know when I'm forecast to run out of either in the next 3 billing cycles or so
Because, well, if anyone provides all of that for a monthly fee, the whole "self-hosting" argument goes out of the window quickly, right? And I say that as someone who absolutely adores self-hosting...
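For what it's worth, the point-in-time item above is the one you can approximate yourself fairly cheaply on vanilla Postgres. A minimal PITR sketch (paths and the target timestamp are made up; assumes an existing base backup and WAL archive):

    # restore the base backup into a fresh data directory, then:
    touch /var/lib/postgresql/17/main/recovery.signal
    cat >> /var/lib/postgresql/17/main/postgresql.auto.conf <<'EOF'
    restore_command = 'cp /mnt/wal-archive/%f %p'
    recovery_target_time = '2025-01-15 09:30:00+00'
    EOF
    # on startup, Postgres replays WAL up to the target time, then pauses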
Now with PgBouncer (or whatever other flavor of SQL-aware proxy you fancy) you can greatly reduce the complexity involved in managing conventionally complex read/write routing and sharding to various replicas, enabling resilient, scalable, production-grade database setups on your own infra. Throw in the fact that copy-on-write and snapshotting are baked into most storage today and it becomes, at least compared to 20 years ago, trivial to set up DRS as well. Others have mentioned pgBackRest, and that further reinforces the ease with which you can set up these traditionally complex setups.
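A minimal sketch of the PgBouncer side of a read/write split (hosts and names are made up; note PgBouncer itself is a pooler rather than a SQL parser, so the app or a router chooses the DSN per query type):

    cat > /etc/pgbouncer/pgbouncer.ini <<'EOF'
    [databases]
    ; writes go to the primary
    app = host=10.0.0.10 port=5432 dbname=app
    ; reads go to a streaming replica
    app_ro = host=10.0.0.11 port=5432 dbname=app

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    EOF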
Beyond those two significant features there aren't many other reasons you'd need to go with hosted/managed pgsql. I've yet to find a managed/hosted database solution that doesn't have some level of downtime to apply updates and patches, so even if you go fully hosted/managed it's not a silver bullet. The cost of a managed DB is also several times that of the actual hardware it's running on, so there is a cost factor involved as well.
I guess all this is to say it's never been a better time to self-host your database, and the learning curve is as shallow as it's ever been. Add to all of this that any garden-variety LLM can hand-hold you through the setup and management, including any issues you might encounter on the way.
I would expect to pay a little more for the convenience, but in my experience it's generally multiple times the expense. It's wild.
This has kept me away from managed databases in all but my largest projects.
In reality, most database issues are slow queries or connection pool exhaustion - things that happen during business hours when you're actively developing. The actual database process itself just runs. I've had more AWS outages wake me up than Postgres crashes.
The cost savings are real, but the bigger win for me is having complete visibility. When something does go wrong, I can SSH in and see exactly what's happening. With RDS you're often stuck waiting for support while your users are affected.
That said, you do need solid backups and monitoring from day one. pgBackRest and pgBouncer are your friends.
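Since pgBackRest keeps coming up: a sketch of what "solid backups from day one" can look like with it (stanza, bucket, and region are made up; S3 credentials are assumed to come from the environment):

    cat > /etc/pgbackrest.conf <<'EOF'
    [main]
    pg1-path=/var/lib/postgresql/17/main

    [global]
    repo1-type=s3
    repo1-s3-bucket=my-db-backups
    repo1-s3-region=eu-central-1
    repo1-s3-endpoint=s3.eu-central-1.amazonaws.com
    repo1-retention-full=2
    EOF
    # assumes postgresql.conf has:
    #   archive_command = 'pgbackrest --stanza=main archive-push %p'
    pgbackrest --stanza=main stanza-create
    pgbackrest --stanza=main --type=full backup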
In case you want to self host but also have something that takes care of all that extra work for you
Later I ran a v2 of that service on k8s. The architecture also changed a lot, hosting many smaller services sharing the same psql server (not really microservice-related; think more "collective of smaller services run by different people"). I have hit some issues with maxing out max_connections, but that's about it.
This is something I do in my free time, so SLA isn't an issue, meaning I've had the ability to learn the ropes of running PSQL without many bad consequences. I'm really happy I have had this opportunity.
My conclusion is that running PSQL is totally fine if you just set up proper monitoring. If you are an engineer that works with infrastructure, even just because nobody else can/wants to, hosting PSQL is probably fine for you. Just RTFM.
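"Proper monitoring" can start very small. A sketch using the community postgres_exporter image so Prometheus (or anything that scrapes it) can watch connections, locks, and replication; the DSN and names are made up:

    # exposes metrics on :9187/metrics
    docker run -d --name pg-exporter -p 9187:9187 \
      -e DATA_SOURCE_NAME="postgresql://monitor:secret@db:5432/postgres?sslmode=disable" \
      quay.io/prometheuscommunity/postgres-exporter

An alert on connection counts from pg_stat_activity would flag a max_connections problem like the one above long before it bites.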
There is a whole raft of reasons why you might be a candidate for self-hosting, and a whole raft of reasons why not. This article is deeply reductive, and so are many of the comments.
And I really recommend starting with *default* k3s; do not look at any alternatives for CNI, CSI, or networked storage. Treat your first cluster as something that can spontaneously fail, don't bother keeping it clean, and learn as much as you can.
Once you have that, you can use great open-source k8s-native controllers which take care of the vast majority of requirements when it comes to self-hosting, and save more time in the long run than it took to set up and learn these things.
Honorable mentions: k9s, Lens (I do not suggest using it long-term, but the UI is really good as a starting point), Rancher web UI.
PostgreSQL specifically: https://github.com/cloudnative-pg/cloudnative-pg If you really want networked storage: https://github.com/longhorn/longhorn
I do not recommend Ceph unless you are okay with not using shared filesystems (they have a bunch of gotchas), or unless you want S3 without having to install a dedicated deployment for it.
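For the PostgreSQL piece, a minimal CloudNativePG cluster really is short (assumes the operator is already installed; the name and size are made up):

    kubectl apply -f - <<'EOF'
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: pg-main
    spec:
      instances: 3
      storage:
        size: 20Gi
    EOF
    # the operator handles replication, failover, and service endpoints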
1. Access to any extension you want and, importantly, the ability to create your own extensions.
2. Being able to run any version you want, including being able to adopt patches ahead of releases.
3. Ability to tune for maximum performance based on the kind of workload you have. If it's massively parallel you can fill the box with huge amounts of memory and screaming fast SSDs, if it's very compute heavy you can spec the box with really tall cores etc.
Self-hosting is rarely about cost for me; it's usually about control. Being able to replace complex application logic/types with a nice custom pgrx extension can save massive amounts of time. Similarly, using a custom index access method can unlock a step change in performance unachievable without some non-PG solution that would compromise on simplicity by forcing a second data store.
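To make point 3 above concrete, a sketch of workload-specific tuning via ALTER SYSTEM (the values are made-up examples for a memory-heavy parallel workload, not recommendations):

    psql -c "ALTER SYSTEM SET shared_buffers = '64GB';"            # needs a restart
    psql -c "ALTER SYSTEM SET effective_io_concurrency = 256;"     # fast NVMe
    psql -c "ALTER SYSTEM SET max_parallel_workers_per_gather = 8;"
    psql -c "SELECT pg_reload_conf();"  # applies the reloadable settings only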
I did this for just under two years, and I've lost count of how many times one or more of the nodes went down and I had to manually deregister it from the cluster with repmgr, clone a new VM, and promote a healthy node to primary. I ended up writing an internal wiki page with the steps. I never got it: if one of the purposes of clusters is higher availability, why did repmgr not handle zombie primaries?
Again, I'm probably just an idiot out of my depth with this. And I probably didn't need a cluster anyway, although with the nodes failing like they did, I didn't feel comfortable moving to a single-node setup either.
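For context, the manual recovery dance was roughly this (a sketch; exact flags vary by repmgr version, and the host names are made up):

    # on the surviving standby: promote it to primary
    repmgr -f /etc/repmgr.conf standby promote
    # on the freshly cloned VM: clone from the new primary, then register
    repmgr -h new-primary -U repmgr -d repmgr -f /etc/repmgr.conf standby clone
    repmgr -f /etc/repmgr.conf standby register --force
    # verify cluster state
    repmgr -f /etc/repmgr.conf cluster show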
I eventually switched to managed postgres, and it's amazing being able to file a sev1 for someone else to handle when things go down, instead of the responsibility being on me.
Overall, a good experience. Very stable service and when performance issues did periodically arise, I like that we had full access to all details to understand the root cause and tune details.
Nobody was employed as a full-time DBA. We had plenty of other things going on in addition to running PostgreSQL.
https://github.com/vitabaks/autobase
It automates the deployment and management of highly available PostgreSQL clusters in production environments. The solution is tailored for use on dedicated physical servers, virtual machines, and both on-premises and cloud-based infrastructure.
I had a single API endpoint performing ~178 SQL queries against Postgres.
    Setup                Latency/query   Total time
    ------------------------------------------------
    Same geo area        35ms            6.2s
    Same local network   4ms             712ms
    Same server          ~0ms            170ms
This is with zero code changes; these time savings come purely from network latency. A lot of devs lately are not even aware of the latency costs coming from their service locations. It's crazy! What went so wrong during the past 25 years?
I have a cron sh script to backup to S3 (used to be ftp).
It's not "business grade" but it has also actually NEVER failed. Well once, but I think it was more the container or a swarm thing. I just destroyed and recreated it and it picked up the same volume fine.
The biggest pain point is upgrading as Postgresql can't upgrade the data without the previous version installed or something. It's VERY annoying.
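That kind of cron backup can be a genuinely small script. A sketch of the shape (bucket and database names are made up; for anything beyond toy sizes you'd want WAL archiving on top):

    #!/bin/sh
    # nightly logical backup, pushed to S3
    set -e
    STAMP=$(date +%F)
    pg_dump -Fc -d app -f "/tmp/app-$STAMP.dump"
    aws s3 cp "/tmp/app-$STAMP.dump" "s3://my-backups/pg/app-$STAMP.dump"
    rm -f "/tmp/app-$STAMP.dump"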
> Standard Postgres compiled with some AWS-specific monitoring hooks
> A custom backup system using EBS snapshots
> Automated configuration management via Chef/Puppet/Ansible
> Load balancers and connection pooling (PgBouncer)
> Monitoring integration with CloudWatch
> Automated failover scripting
I didn't know RDS had PgBouncer under the hood; is this really accurate?
The problem I find with RDS (and most other managed Postgres) is that they limit your options for how you want to design your database architecture. For instance, if write consistency is important to you and you want synchronous replication, there is no way to do this in RDS without either Aurora or having the readers in another AZ. The other issue is that you only have access to logical replication, because you don't have access to your WAL archive, which makes moving off RDS much more difficult.
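On self-hosted Postgres, the synchronous replication that RDS won't give you comes down to a couple of settings on the primary (the standby name is made up; the standby's application_name must match it):

    psql -c "ALTER SYSTEM SET synchronous_standby_names = 'FIRST 1 (replica1)';"
    psql -c "ALTER SYSTEM SET synchronous_commit = 'on';"
    psql -c "SELECT pg_reload_conf();"
    # commits now wait for replica1 to confirm before returning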
I also self-host my webapp, 4+ years now, and never have any trouble with databases.
pg_basebackup and WAL archiving work wonders. And since I always pull the database (the backup version) for local development, the backup is constantly verified, too.
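For anyone wanting to copy this setup, a sketch of the two halves (the archive path is made up; the wal_level/archive_mode changes need a restart):

    # 1) continuous WAL archiving
    psql -c "ALTER SYSTEM SET wal_level = 'replica';"
    psql -c "ALTER SYSTEM SET archive_mode = 'on';"
    psql -c "ALTER SYSTEM SET archive_command = 'test ! -f /mnt/wal-archive/%f && cp %p /mnt/wal-archive/%f';"
    # 2) periodic base backup (compressed tar, WAL streamed alongside)
    pg_basebackup -D "/backups/base-$(date +%F)" -Ft -z -X stream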
Managed database services mostly automate a subset of routine operational work, things like backups, some configuration management, and software upgrades. But they don't remove the need for real database operations. You still have to validate restores, build and rehearse a disaster recovery plan, design and review schemas, review and optimize queries, tune indexes, and fine-tune configuration, among other essentials.
In one incident, AWS support couldn't determine what was wrong with an RDS cluster and advised us to "try restarting it".
Bottom line: even with managed databases, you still need people on the team who are strong in DBOps. You need standard operating procedures and automation, built by your team. Without that expertise, you're taking on serious risk, including potentially catastrophic failure modes.
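"Validate restores" is the piece most teams skip, and it can be automated cheaply. A sketch of a periodic restore drill (the table and paths are made up):

    # restore the latest dump into a scratch database and sanity-check it
    createdb restore_check
    pg_restore -d restore_check /backups/latest.dump
    psql -d restore_check -c "SELECT count(*) FROM orders;"  # hypothetical table
    dropdb restore_check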
Here are my gripes:
1. Backups are super important. Losing production data just is not an option. Postgres offers pg_dump, which is not the appropriate tool here, so you should set up WAL archiving or something like that. This is complicated to do right.
2. Horizontal scalability with read replicas is hard to implement.
3. Tuning various postgres parameters is not a trivial task.
4. Upgrading major version is complicated.
5. You probably need to use something like pgbouncer.
6. Database usually is the most important piece of infrastructure. So it's especially painful when it fails.
I guess it's not that hard once you've done it before and have all the scripts and memories to look back on. But otherwise it's hard. Clicking a few buttons in a hoster's panel is much easier.
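On gripe 4: major-version upgrades are mostly about having both sets of binaries around. A sketch with pg_upgrade (Debian-style paths, made up for illustration):

    # both old and new binaries must be installed
    /usr/lib/postgresql/17/bin/pg_upgrade \
      --old-bindir=/usr/lib/postgresql/16/bin \
      --new-bindir=/usr/lib/postgresql/17/bin \
      --old-datadir=/var/lib/postgresql/16/main \
      --new-datadir=/var/lib/postgresql/17/main \
      --link   # hard-links files instead of copying; fast, but no going back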
Is this actually the "common" view (in this context)?
I've got decades with databases, so I cannot even begin to fathom where such an attitude would develop, but is it?
Boggling.
A popular choice for smaller workloads has always been the Hetzner cloud which I finally poured into a ready-to-use Terraform module https://pellepelster.github.io/solidblocks/hetzner/rds/index....
Main focus here is a tested solution with automated backup and recovery, leaving out the complicated parts like clustering, prioritizing MTTR over MTBF.
The naming of RDS is a little bit presumptuous I know, but it works quite well :-)
Hardware failures and automated failovers. That's a thing AWS and other managed hosting solutions do. Hardware will eventually fail, of course. In AWS this would be a non-event: it fails over, a replacement spins up, etc. Same with upgrades and other stuff.
Configuration complexity. The author casually outlines a lot of fairly complex design involving all sorts of configuration tweaks, load balancing, etc. That implies skills most teams don't have. I know enough to know that I have quite a bit of reading up to do if I ever were to decide to self host postgresql. Many people would make bad assumptions about things being fine out of the box because they are not experienced postgresql DBAs.
Vacations/holidays/sick days. Databases may go down when it's not convenient for you. To mitigate that, you need several colleagues who are equally qualified to fix things when they go down while you are away from keyboard. In a normal company, 3-4 such people is a good minimum. If you are just measuring your own time, you are not being honest, or not being as diligent as you should be. Either it's a risk you are covering at a cost, or a risk you are ignoring.
With managed hosting, covering all of that is what you pay for. You are right that there are still failure modes beyond that that need covering. But an honest assessment of the time you, and your team, put in for this adds up really quickly.
Whatever the reasons you are self hosting, cost is probably a poor one.
> Standard Postgres compiled with some AWS-specific monitoring hooks
> A custom backup system using EBS snapshots
> Automated configuration management via Chef/Puppet/Ansible
> Load balancers and connection pooling (PgBouncer)
> Monitoring integration with CloudWatch
> Automated failover scripting
Every company I've ever onboarded at that hosted their own database had number one, and a lot of TODOs around the rest. It's really hard! Honestly, it could be a full-time job for a team. And that's more expensive than RDS.
Of all the places I've worked that had the attitude "If this goes down at 3AM, we need to fix it immediately", there was only one where that was actually justifiable from a business perspective. I've worked at plenty of places that had this attitude despite the fact that overnight traffic was minimal and nothing bad actually happened if a few clients had to wait until business hours for a fix.
I wonder if some of the preference for big-name cloud infrastructure comes from the fact that during an outage, employees can just say "AWS (or whatever) is having an outage, there's nothing we can do" vs. being expected to actually fix it
From this perspective, the ability to fix problems more quickly when self-hosting could be considered an antifeature by the employee getting woken up at 3am.
With MongoDB you simply create a replica set and you are done.
When planning a Postgres cluster, you need to understand replication options and potentially deal with Patroni. Zalando's Spilo Docker image is not really maintained; the way to go seems to be CloudNativePG, but that requires k8s.
I still don’t understand why there is no easy built-in Postgres cluster solution.
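For comparison, the MongoDB side really is one call once the mongod processes are running with --replSet (the hosts are made up):

    mongosh --eval 'rs.initiate({_id: "rs0", members: [
      {_id: 0, host: "db1:27017"},
      {_id: 1, host: "db2:27017"},
      {_id: 2, host: "db3:27017"}]})'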
The real downside wasn't technical. It was the constant background anxiety you had to learn to live with, since the hosted news sites were hammered by users. The dreaded SMS alerts saying the server was inaccessible (often due to ISP issues), or going abroad and having to persuade one of your mates to keep an eye on things just in case, created a lot of unnecessary stress.
AWS is quite good. It has everything you need and removes most of that operational burden, so the angst is much lower, but the pricing is problematic.
This is when you use SQLite, not Postgres. Easy enough to turn into Postgres later, nothing to set up. It already works. And backups are literally just "it's a file, incremental backup by your daily backups already covers this".
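One nuance worth adding: copying a live SQLite file can catch it mid-write, so it's worth pointing the daily job at SQLite's built-in online backup instead (the paths are made up):

    # consistent snapshot even while the app is writing
    sqlite3 /srv/app/app.db ".backup '/backups/app-$(date +%F).db'"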
And for small projects, SQLite, rqlite, or etcd.
My logic is either the project is important enough that data durability matters to you and sees enough scale that loss of data durability would be a major pain in the ass to fix, or the project is not very big and you can tolerate some lost committed transactions.
A non-embedded database without consensus replication has no place in 2025.
This is assuming you have relational needs. For non-relational just use the native NoSQL in your cloud, e.g. DynamoDB in AWS.
A blog post that went into the details would be awesome. I know Postgres has some docs for this (https://www.postgresql.org/docs/current/backup.html), but it's too theoretical. I want to see a one-stop-shop with everything you'd reasonably need to know to self host: like monitoring uptime, backups, stuff like that.
Everything you do that isn't "normal" is another conversation you need to have with an auditor plus each customer. Those eat up a bunch of time and deals take longer to close.
Right or wrong, these decisions make you less "serious" and therefore less credible in the eyes of many enterprise customers. You can get around that perception, but it takes work. Not hosting on one of the big 3 needs to be decided with that cost in mind
Scripts could kick off health reports and trigger operations. Upgrades and recovery runbooks would be clearly defined and integration tested.
It would empower personal sovereignty.
Someone should make this in the open. Maybe it already exists, there are a lot of interesting agentops projects.
If that worked 60% of the time and I had to figure out the rest, I’d self host that. I’d pay for 80%+.
A lot of this comes down to devs not understanding infrastructure and infrastructure components, and the insane interplay and complexity. And they don't care! Apps, apps, apps; developers, developers, developers!
On the managerial side, it's often about deflection of responsibility for the Big Boss.
Since it's not part of the app itself, it can be HARD, and if you're not familiar with things, then it's also scary! What if you mess up?
(Most apps don't need the elasticity, or the bells and whistles, but you're paying for them even if you don't use them, indirectly.)
Glad my employer is still one of the sane ones.
Interesting. Whoever wrote
https://news.ycombinator.com/item?id=46334990
didn't seem to be aware of that.
https://blog.notmyhostna.me/posts/what-i-wish-existed-for-se...
Is this really the state of our industry? Lol. Bunch of babies scared of the terminal.
To the author - on Android Chrome I seem to inevitably load the page scrolled to the bottom, footnotes area. Scrolling up, back button, click link again has the same results - I start out seeing footnotes. Might be worth a look.
Sometimes it is nice to simplify the conversation with non-tech management. Oh, you want HA / DR / etc? We click a button and you get it (multi-AZ). Clicking the button doubles your DB costs from x to y. Please choose.
Then you have one less repeating conversation and someone to blame.
Things will go wrong. And it's all your fault. You can't just blame AWS.
Also, are we changing the definition of self-hosting? Self-hosting on DigitalOcean?!
... but you can do a lot with just "a single VM and robust backup". PostgreSQL restore is pretty fast, and if you automated deployment you can start with it in minutes, so if your service can survive 30 minutes of downtime once every 3 years while the DB reloads, "downgrading" to "a single cloud VM" or "a single VM on your own hardware" might not be a big deal.
Disclaimer: there's no silver bullet, yadda yadda. But SQLite in WAL mode and backups using Litestream have worked perfectly for me.
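For reference, the Litestream setup is about as small as backup configs get (bucket and paths are made up; AWS credentials come from the environment):

    cat > /etc/litestream.yml <<'EOF'
    dbs:
      - path: /srv/app/app.db
        replicas:
          - url: s3://my-backups/app
    EOF
    litestream replicate -config /etc/litestream.yml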
I was disappointed Alloy doesn't support TimescaleDB as a metrics endpoint. Considering switching to Telegraf just so I can store the metrics in Postgres.