One thing to bear in mind is that, by default, DNS lookups use UDP. If the response is larger than will fit in a single datagram, the server returns as many records as will fit and sets the TC (truncated) bit in the header.
The requester can choose to work with what was returned, or re-attempt the query using TCP.
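As a minimal sketch of that fallback (using the dnspython library; the resolver address and query name are just placeholders, not anything from the original question), a requester might do something like:

```python
import dns.flags
import dns.message
import dns.query

RESOLVER = "192.0.2.53"   # placeholder: any recursive resolver you can reach
query = dns.message.make_query("example.com", "A")

# First try UDP, the default transport for DNS lookups.
response = dns.query.udp(query, RESOLVER, timeout=2)

if response.flags & dns.flags.TC:
    # The server couldn't fit the full answer in one datagram and set the
    # TC bit, so repeat the same question over TCP to get the complete RRset.
    response = dns.query.tcp(query, RESOLVER, timeout=2)

for rrset in response.answer:
    print(rrset)
```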
Caching DNS servers are not supposed to cache truncated responses, as they don't know how complete the set of records returned is (the response doesn't say "I am giving you 12 of 28 records").
So the maximum number of records is a function of how much you can fit in a UDP datagram. Remember that the response also needs to include the authority section, which will vary in size based on the SOA record for the zone.
If you are using CNAME records, that will also increase the size of the response, as you get back both the CNAME and the A record it points to.
Your best bet is to play around with various numbers of A records using dig or "host -v" to see when the response crosses the maximum size of a UDP datagram.
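If you want a rough feel for the numbers without poking a live server, here's a sketch (dnspython again; the name and addresses are made up) that builds an answer-only response and reports when it would exceed the classic 512-byte UDP payload. Keep in mind a real response also carries the authority/additional sections mentioned above, so the true limit is lower:

```python
import dns.message
import dns.name
import dns.rrset

QNAME = dns.name.from_text("www.example.com.")   # placeholder zone/name
query = dns.message.make_query(QNAME, "A")

for count in range(1, 40):
    addresses = ["192.0.2.%d" % i for i in range(1, count + 1)]
    response = dns.message.make_response(query)
    response.answer.append(
        dns.rrset.from_text_list(QNAME, 300, "IN", "A", addresses)
    )
    size = len(response.to_wire())
    if size > 512:
        print("%d A records -> %d bytes: would be truncated over plain UDP"
              % (count, size))
        break
```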
When I use the term "DNS Round Robin" I generally mean it in the sense of the "cheap load balancing technique" as OP describes it.
But that's not the only way DNS can be used for global high availability. Most of the time, it's just hard for people with different (technology) backgrounds to communicate well.
The best load balancing technique (if money is not a problem) is generally considered to be:
1. an Anycast'ed global network of 'intelligent' DNS servers,
2. and a set of globally spread out datacenters,
3. where each DNS node implements split horizon DNS,
4. and availability and traffic-flow monitoring is available to the 'intelligent' DNS nodes in some fashion,
5. so that the user's DNS request flows to the nearest DNS server via IP Anycast,
6. and this DNS server hands out a low-TTL A Record / set of A Records for the nearest / best datacenter for this end user via 'intelligent' split horizon DNS.
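As a very rough sketch of the decision that steps 5-6 push into the DNS node (plain Python; the region mapping, VIP addresses, and health feed are all invented for illustration, and real GSLB products are considerably smarter than this):

```python
import random

# Hypothetical mapping from the requesting resolver's region (e.g. from a
# GeoIP lookup on the query's source address) to that region's datacenter VIPs.
DATACENTER_VIPS = {
    "eu": ["192.0.2.10", "192.0.2.11"],
    "us": ["198.51.100.10", "198.51.100.11"],
    "ap": ["203.0.113.10", "203.0.113.11"],
}
LOW_TTL = 30  # seconds, so clients re-resolve quickly after a failover


def answer_for(client_region, healthy_vips):
    """Return (ttl, list of A records) for the nearest healthy datacenter.

    `healthy_vips` is whatever the monitoring feed (step 4) currently reports.
    If nothing in the client's own region is healthy, fall back to any other
    healthy VIP rather than returning an empty answer.
    """
    vips = [ip for ip in DATACENTER_VIPS.get(client_region, ()) if ip in healthy_vips]
    if not vips:
        vips = [ip for region in DATACENTER_VIPS.values()
                for ip in region if ip in healthy_vips]
    random.shuffle(vips)  # plain round robin across that DC's load balancers
    return LOW_TTL, vips


# Example: an EU user while one EU load balancer is down.
print(answer_for("eu", {"192.0.2.11", "198.51.100.10", "203.0.113.10"}))
```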
Using anycast for DNS is generally fine, because DNS responses are stateless and almost always extremely short. So a change in the BGP routes is highly unlikely to interrupt a DNS query.
Anycast is less suited to the longer and stateful HTTP conversations, which is why this system uses split horizon DNS. An HTTP session between a client and server is kept to one datacenter; it generally cannot fail over to another datacenter without breaking the session.
As I indicated with "set of A Records" what I would call 'DNS Round Robin' can be used together with the setup above. It is typically used to spread the traffic load over multiple highly available load balancers in each datacenter (so that you can get better redundancy, use smaller/cheaper load balancers, not overwhelm the Unix network buffers of a single host server, etc).
"So, is it true that, with multiple data centers and HTTP traffic, the use of DNS RR is the ONLY way to assure high availability?"
No it's not true, not if by 'DNS Round Robin' we simply mean handing out multiple A records for a domain. But it's true that clever use of DNS is a critical component in any global high availability system. The above illustrates one common (often best) way to go.
Edit: The Google paper "Moving Beyond End-to-End Path Information to Optimize CDN Performance" seems to me to be state-of-the-art in global load distribution for best end-user performance.
Edit 2: I read the article "Why DNS Based .. GSLB .. Doesn't Work" that OP linked to, and it is a good overview -- I recommend looking at it. Read it from the top.
In the section "The solution to the browser caching issue" it advocates DNS responses with multiple A Records pointing to multiple datacenters as the only possible solution for instantaneous fail over.
In the section "Watering it down" near the bottom, it expands on the obvious, that sending multiple A Records is uncool if they point to datacenters on multiple continents, because the client will connect at random and thus quite often get a 'slow' DC on another continent. Thus for this to work really well, multiple datacenters on each continent are needed.
This is a different solution than my steps 1-6. I can't provide a perfect answer on this; I think a DNS specialist from the likes of Akamai or Google is needed, because much of this boils down to practical know-how about the limitations of deployed DNS caches and browsers today. AFAIK, my steps 1-6 are what Akamai does with their DNS (can anyone confirm this?).
My feeling -- coming from having worked as a PM on mobile browser portals (cell phones) -- is that the diversity and level of total brokenness of the browsers out there is incredible. I personally would not trust an HA solution that requires the end user terminal to 'do the right thing'; thus I believe that global instantaneous fail over without breaking a session isn't feasible today.
I think my steps 1-6 above are the best that are available with commodity technology. This solution does not have instantaneous fail over.
I'd love for one of those DNS specialists from Akamai, Google etc to come around and prove me wrong. :-)
Best Answer
Here's an implementation of weighted round robin DNS: http://www.mccartney.ie/wordpress/2008/08/19/wrr-dns-with-powerdns/
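The linked post does the weighting inside a PowerDNS backend; as a rough illustration of the core idea only (the addresses and weights below are invented, and this is not the linked code):

```python
import random

# Hypothetical weights: the bigger box gets twice the share of new clients.
WEIGHTED_A_RECORDS = {"192.0.2.1": 2, "192.0.2.2": 1, "192.0.2.3": 1}


def pick_a_record():
    """Return one A record with probability proportional to its weight."""
    ips = list(WEIGHTED_A_RECORDS)
    weights = [WEIGHTED_A_RECORDS[ip] for ip in ips]
    return random.choices(ips, weights=weights, k=1)[0]


# Over many queries the traffic split approaches 2:1:1.
print(pick_a_record())
```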
Returning multiple answers can backfire thanks to client implementations of RFC 3484 address selection, which reorder the returned addresses: http://drplokta.livejournal.com/109267.html
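A quick way to see the problem on your own machine (plain Python stdlib; substitute a name that really has several A records): if the order below comes back the same on every call -- and on every client -- then RFC 3484-style address sorting has pinned everyone to the same 'first' address and the round robin is largely defeated.

```python
import socket

HOSTNAME = "example.com"   # placeholder: use a name with several A records

for attempt in range(3):
    infos = socket.getaddrinfo(HOSTNAME, 80, proto=socket.IPPROTO_TCP)
    # getaddrinfo applies the OS's address-selection rules before returning,
    # so this is the order applications will actually try the addresses in.
    print([sockaddr[0] for _, _, _, _, sockaddr in infos])
```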