At least then your pain is their pain, and they are incentivized to prevent problems and fix them quickly.
However, they ALWAYS pick up the phone on the 3rd ring with a capable, on-call Linux sysadmin with good general DB, services, networking, DNS, and email knowledge.
But reliability at the holy grails of 4 and 5 nines (99.99%, 99.999% uptime) means ever-greater investment: geographically dispersing your service, distributed systems, dealing with clock drift, multi-master, eventual consistency, replication, sharding... it's a long list.
Questions to ask: could you do better yourself, with the resources you have? Is it worth the investment of a migration to get there? What's the payoff period for that extra sliver of uptime? Will it cost you in focus over the longer term? Is the extra uptime worth all those costs?
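For a rough sense of what that extra sliver actually buys, here's a quick back-of-the-envelope (plain Python, assuming a 365-day year with no maintenance carve-outs):

    # Back-of-the-envelope downtime budget for each "nines" tier
    # (assumes a 365-day year, no scheduled-maintenance carve-outs).
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60

    for label, availability in [("3 nines", 0.999), ("4 nines", 0.9999), ("5 nines", 0.99999)]:
        downtime_min = SECONDS_PER_YEAR * (1 - availability) / 60
        print(f"{label} ({availability:.3%} uptime): ~{downtime_min:.1f} minutes of downtime/year")

That's roughly 8.8 hours, 53 minutes, and 5 minutes of allowed downtime per year, respectively, which is why each extra nine costs so much more to win back.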
I ended up downloading the entire volume, setting up my own Docker container locally, exporting the data, and creating a new cluster (on the latest major patch).
Lost most of my day yesterday.
This happens with managed services and I understand the frustration, but vendors are just as fallible as the rest of us and are going to have wonky behaviour and outages, regardless of the stability they advertise. This is always part of build vs. buy; buying doesn't always guarantee a friction-free result.
It happens with the big cloud providers as well: I've spent hours with AWS chasing why some VMs were missing routing table entries inside the VPC, and on GCP we had to outright ban a class of VMs because the packet processing was so bad we couldn't even get a file copy to complete between VMs.
> I chose managed services specifically to avoid ops emergencies
You may not be spending enough time on HN reading all the horror stories =P

The benefit of a managed service isn't that it doesn't go down; though it probably goes down less than something you self-manage, unless you're a full-time SRE with the experience to back it.
The benefit of a managed service is you say: "It's not my problem, I opened a ticket, now I'm going to get lunch, hope it's back up soon."
- Redundancy across failure domains: we now run critical stateful workloads with connection pooling that can fail over between private and public endpoints. Yes, it's more complexity, but it's complexity we control.
- Synthetic monitoring for managed services: we probe not just our app, but also the managed service endpoints, from multiple network paths. This catches these "infrastructure layer" failures faster.
- Backup connectivity paths: for managed DBs, we keep both private VPC and public (firewalled) endpoints configured. If one breaks, we can switch in minutes via config.
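A minimal sketch of the failover idea, assuming a managed DB exposed on both a private VPC endpoint and a public firewalled one (the hostnames and ports below are made up for illustration):

    import socket

    # Two paths to the same managed database. Hostnames/ports are hypothetical;
    # the private VPC endpoint is preferred, the public (firewalled) one is the fallback.
    ENDPOINTS = [
        ("db-private.internal.example.com", 5432),   # private VPC path
        ("db-public.example.com", 25060),            # public, firewalled path
    ]

    def pick_reachable_endpoint(timeout_s: float = 2.0) -> tuple[str, int]:
        """Return the first endpoint that accepts a TCP connection."""
        for host, port in ENDPOINTS:
            try:
                with socket.create_connection((host, port), timeout=timeout_s):
                    return host, port
            except OSError:
                continue  # this path is down, try the next one
        raise RuntimeError("no database endpoint reachable on any path")

The same probe, run on a schedule from a couple of different networks, doubles as the synthetic monitoring: you find out the private path is broken before your app does.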
The DaemonSet workaround is... alarming. It's essentially asking you to run production-critical infrastructure code from an untrusted source because their managed platform has a known bug with no ETA. Your point about trading failure modes is spot on. Managed services are still worth it for small teams, but the value prop is "fewer incidents" not "no incidents," and when they do happen, your MTTR is now bounded by vendor response time instead of your team's skills. Did DO at least provide the DaemonSet from an official source, or was it literally "here's a random GitHub link"?
Same thought as you... I just didn't want to figure out and manage MySQL-with-failover myself, so I switched to their managed solution a year or two ago, and my bill went up like 300% or more (it was running fine on a ~$12 or maybe $24 droplet + a $5 volume, but now costs, I don't even remember, $150 or so).
As far as DBs go, I believe Amazon RDS is quite reliable. I think Render uses it under the hood.
You could also consider AWS ECS directly with RDS.
I find fewer things that can go wrong with VMs. I can log and monitor them better, and increase resources as I see what's going on per machine.
Docker was smearing all the machines together. For early testing, it's great due to the speed of redeploying and cleaning state. But once you want to start tuning, Docker is pretty hard to get right.
Maybe I'm not a great systems engineer. But I do like my lower-complexity systems. One service per machine is, in my opinion, easier to get right.
If the word "production" is suppose to really mean something to you, move your workload to Google Cloud, or move it to AWS, or on https://cast.ai
Disclaimer: I have no commercial affiliation with Cast AI.
“There is no cloud, it’s just somebody else’s computer”
etc etc…