- DNS is something you rarely change, but mistakes are costly: a bad record can take down an entire domain and keep it down until the TTL expires.
If you set your TTL to an hour, that raises the cost of DNS mistakes considerably: a problem you fix immediately still turns into an hour of downtime, and a problem you don't fix on the first attempt, where you have to iterate through several fixes, turns into an hour of downtime per iteration (back-of-envelope sketch below).
Setting a low TTL is an extra packet and round-trip per connection; that's too cheap to meter [1].
When I first started administering servers I set TTL high to try to be a good netizen. Then after several instances of having to wait a long time for DNS to update, I started setting TTL low. Theoretically it causes more friction and resource usage but in practice it really hasn't been noticeable to me.
[1] For the vast majority of companies / applications. I wouldn't be surprised to learn someone somewhere has some "weird" application where high TTL is critical to their functionality or unit economics but I would be very surprised if such applications were relevant to more than 5% of websites.
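A back-of-envelope sketch of the downtime arithmetic above (purely illustrative; the TTL and the number of fix attempts are made-up values):

    # Worst case: each fix only becomes visible to cached resolvers once the
    # previous TTL has run out, so the outage can stretch to roughly TTL * attempts.
    # Both numbers are illustrative, not measurements.
    ttl_seconds = 3600        # record published with a 1-hour TTL
    fix_attempts = 3          # it takes three tries to get the record right

    worst_case_downtime = ttl_seconds * fix_attempts
    print(f"Worst case: about {worst_case_downtime / 3600:.0f} hours of downtime")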
- The irony here is that news.ycombinator.com has a 1-second TTL. One DNS query per page load and they don't care, yay! (Easy to verify; see the snippet below.)
by compumike
1 subcomment
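A quick way to check that claim, assuming the third-party dnspython (2.x) package is installed; note that a recursive resolver may report the remaining cached TTL rather than the authoritative value, so the number printed can vary from run to run:

    import dns.resolver  # third-party "dnspython" package, assumed installed

    # Look up the A record and print whatever TTL the resolver returned.
    answer = dns.resolver.resolve("news.ycombinator.com", "A")
    print("TTL:", answer.rrset.ttl, "seconds")
    for record in answer:
        print("A:", record.address)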
- The big thing that articles like this miss completely is that we are no longer in the brief HTTP/1.0 era (1996) where every request is a new TCP connection (and therefore possibly a new DNS query).
In the HTTP/1.1 (1997) or HTTP/2 era, the TCP connection is made once and then stays open (Connection: Keep-Alive) for multiple requests. This greatly reduces the number of DNS lookups per HTTP request (see the connection-reuse sketch after this comment).
If the web server is configured for a sufficiently long Keep-Alive idle period, then this period is far more relevant than a short DNS TTL.
If the server dies or disconnects in the middle of a Keep-Alive, the client/browser will open a new connection, and at this point, a short DNS TTL can make sense.
(I have not investigated how this works with QUIC HTTP/3 over UDP: how often does the client/browser do a DNS lookup? But my suspicion is that it also does a DNS query only on the initial connection and then sends UDP packets to the same resolved IP address for the life of that connection, and so it behaves exactly like the TCP Keep-Alive case.)
by GuinansEyebrows
0 subcomments
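A minimal sketch of the point above, using Python's requests library (assumed installed): a Session keeps the TCP connection open across requests (HTTP keep-alive), so the hostname is resolved when the connection is opened rather than once per request.

    import requests

    # The Session's connection pool keeps the TCP/TLS connection alive, so DNS is
    # consulted when the connection is first opened, not on every request.
    session = requests.Session()
    for path in ("/", "/news", "/newest"):
        resp = session.get("https://news.ycombinator.com" + path, timeout=10)
        print(path, resp.status_code)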
- I was taught this as a matter of professional courtesy in my first job, working for an ISP that did DNS hosting and ran its own DNS servers (15+ years ago). If you have a cutover scheduled, lower the TTL at $cutover_time - $current_ttl, then bring the TTL back up within a day or two to minimize DNS chatter. Simple! (See the timeline sketch below.)
Of course, as internet speeds increase and resources become cheaper to abuse, people lose sight of the downstream impacts of impatience and poor planning.
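A small sketch of the timeline arithmetic in that procedure; the current TTL and cutover time below are made-up example values:

    from datetime import datetime, timedelta

    current_ttl = timedelta(hours=24)      # TTL currently published on the record (example)
    cutover = datetime(2019, 11, 1, 3, 0)  # scheduled cutover time (example)

    # Lower the TTL at least one full current-TTL before the cutover, so that by
    # cutover time no resolver can still be caching the record under the old TTL.
    lower_no_later_than = cutover - current_ttl
    print("Lower the TTL no later than:", lower_no_later_than)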
- I usually set mine to between an hour and a day, unless I'm planning to update/change them "soon" ... though I've been meaning to go from a /29 to a /28 on my main server for a while; I've just been putting off switching all the domains/addresses over.
Maybe this weekend I'll finally get the energy up to just do it.
- I guess I'm not sure I understand the solution. I use a low value (15 minutes, maybe?) because I don't have a static IP and I don't want that to cause issues. It's just me connecting to my home server, so I'm not adding noticeable traffic like a real company would, but what am I supposed to do? Is there a way for me to push an update so that all online caches get refreshed without having to wait for them to time out?
- I used to get more excited about this, but even when browsers don't do a DNS prefetch (or a complete preload), lookup latency is usually still so low on the list of performance-impacting design decisions that it is unlikely to ever outweigh even the slightest advantages (or be worth correcting misperceived advantages) until we all switch to writing really, really, REALLY optimized web solutions.
- Could it be that folks set it low for initial propagation and then never change it back after setup?
by deceptionatd
0 subcomments
- I have mine set low on some records because I want to be able to change the IP associated with specific RTMP endpoints if a provider goes down. The client software doesn't use multiple A records even if I provide them, so I can't use that approach; and I don't always have remote admin access to the systems in question so I can't just use straight IPs or a hostfile.
by nubinetwork
1 subcomment
- > However, no one moving to a new infrastructure is going to expect clients to use the new DNS records within 1 minute, 5 minutes or 15 minutes
When you run a website that receives new POSTed information every 60 seconds, you sure do. ;)
by jurschreuder
1 subcomment
- It's because updating DNS doesn't work reliably, so it's always a lot of trial and error, which you can only see after the cache updates.
- I don't understand why the author doesn't consider load balancing and failover legitimate use cases for a low TTL. Because it wrecks their argument?
- I've never changed the default; Squarespace, GoDaddy, Cloudflare, and Porkbun have all been an hour or so.
by effnorwood
0 subcomments
- Sometimes they need to be low if you use the values to send messages to people.
by 1970-01-01
1 subcomment
- (2019)
- DNS TTLs are a terrible method of directing traffic because you can't rely on clients to honor them.