I can help with Question 1 only.
There are several approaches to load balancing and failover, listed simplest-first:
- DNS round robin (load balancing and failover)
- Dynamic DNS (failover)
- Proxies (load balancing and failover)
- Local IP failover (failover)
- BGP anycast (load balancing and failover)
DNS load balancing is simple: Say you have two (or more) servers with IPs 1.1.1.1 and 2.2.2.2.
To set up DNS load balancing, you create DNS records for your hostname, say www.example.com:
```
www.example.com.  A  1.1.1.1
www.example.com.  A  2.2.2.2
```
(The DNS server should also be configured to serve this name in round-robin mode, but that's usually the default anyway.)
Now each DNS request for www.example.com will be answered with both addresses, in a pseudo-random order, so your clients are likely to spread roughly equally between the servers.
There's no need to update records frequently; once it's set up, it works indefinitely. It also provides some degree of failover: if one of the hosts is down, browsers will time out and then try the second host, BUT there may be a considerable delay, and users won't like it.
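As a rough illustration (not tied to any particular DNS server), here's a minimal Python sketch of the round-robin behavior: the server rotates the order of the A records one position on every response, so successive clients see a different address first.

```python
def make_responder(addresses):
    """Return a function that yields the address list, rotated one
    position further on each call (round-robin answer ordering)."""
    state = {"offset": 0}

    def respond():
        off = state["offset"]
        # Rotate the list so a different address comes first each time.
        answer = addresses[off:] + addresses[:off]
        state["offset"] = (off + 1) % len(addresses)
        return answer

    return respond

respond = make_responder(["1.1.1.1", "2.2.2.2"])
print(respond())  # ['1.1.1.1', '2.2.2.2']
print(respond())  # ['2.2.2.2', '1.1.1.1']
```

Real resolvers and intermediate caches may reorder or cache these answers, which is why the spread is only approximately equal.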
Dynamic DNS. A possible addition to DNS round robin: once a given host fails, dynamically update the DNS records to remove the referral to the failed host. However, the heavy caching in the DNS system means there will still be a period of the degraded behavior mentioned above. Using a very low TTL improves the situation, but there's caching inside the client OS/browser that won't respect the TTL, and some ISPs disregard low TTLs too. Bottom line: it's a very easy and affordable way to achieve balancing and basic failover.
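One common way to do the dynamic update is RFC 2136 dynamic DNS with BIND's nsupdate utility. A hypothetical sketch (the key file, nameserver name, and addresses are assumptions) of what a health-check script might run when 1.1.1.1 fails:

```shell
# Hypothetical: remove the failed host's A record via dynamic update.
# Assumes a TSIG key at /etc/bind/ddns.key authorized for the zone.
nsupdate -k /etc/bind/ddns.key <<'EOF'
server ns1.example.com
update delete www.example.com. A 1.1.1.1
send
EOF
```

A matching `update add` would restore the record once the host passes its health check again.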
Proxies. Simple and popular for load balancing. To eliminate the single point of failure, you need to combine a proxy with the other approach(es).
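For example, a minimal nginx reverse-proxy sketch that balances requests across the two backends from the DNS example (round-robin is nginx's default upstream behavior; names and addresses are assumptions):

```nginx
http {
    upstream backends {
        server 1.1.1.1;
        server 2.2.2.2;
    }
    server {
        listen 80;
        location / {
            # Forward each request to the next backend in round-robin order.
            proxy_pass http://backends;
        }
    }
}
```

nginx also stops sending traffic to a backend after repeated connection failures, which gives you backend failover "for free" — but the proxy itself remains a single point of failure.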
IP failover. As an addition to the proxy setup, to cope with failure of the proxy itself, TWO proxies are used in an "IP failover" setup. The basic idea is to have one IP address that normally comes up on host1; once host1 fails, host2 detects this and brings the IP up on itself. Look at the Linux "Heartbeat" project. (You may also fail over the servers themselves without proxies, but then you won't have balancing.) Normally both machines have to be on the same subnet (same datacenter).
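With the classic Heartbeat v1 configuration, the floating IP is declared in /etc/ha.d/haresources. A hypothetical sketch (hostname, address, and interface are assumptions):

```
# host1 normally owns the virtual IP; if Heartbeat detects host1 is
# down, it brings the same address up on the standby node.
host1 IPaddr::192.168.1.100/24/eth0
```

Clients only ever talk to 192.168.1.100, so the failover is transparent to them apart from a brief interruption.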
Anycast. The idea is to advertise routes to a single IP address (actually a single subnet) from a couple of physical locations. You need your own /24 subnet and the ability to configure BGP. Anycast is often used for DNS servers. There are difficulties with persistent TCP connections, so it more easily fits UDP and DNS, but it's still sometimes used for web traffic too.
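A hypothetical Quagga/FRR-style sketch of what each site would announce (AS numbers, neighbor address, and prefix are assumptions; real deployments need proper filters):

```
! Advertise the same /24 from every site; BGP routing then delivers
! each client to the topologically nearest instance.
router bgp 64512
 neighbor 192.0.2.1 remote-as 64511
 network 198.51.100.0/24
```

Because every site announces the identical prefix, a site that stops announcing (or goes down) simply drops out of the routing table and traffic shifts to the remaining sites.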
Those are the basic ideas. As you can see, every method has limitations and complications. And if that's not complicated enough, you can build any imaginable combination of the above approaches :)
Best Answer
In a 3-server setup I would personally consolidate the DB server and any other backend services onto one machine and use the two lesser VMs as frontend nodes. Heartbeat could be used to fail over a "primary" IP between your nodes, and DRBD could be used to replicate shared storage between the systems. Nginx would be used to proxy web traffic between the two web frontends.
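A hypothetical DRBD resource sketch for the storage replication part (node names, devices, and addresses are assumptions):

```
# /etc/drbd.d/r0.res — mirror a block device between the two frontends.
resource r0 {
    protocol C;                  # synchronous replication
    on web1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on web2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```

Whichever node currently holds the primary IP mounts /dev/drbd0 as primary; on failover the standby is promoted and sees an up-to-date copy of the data.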