Microsoft has a lot of documentation about their somewhat new load balancing and request routing module for IIS7 located here: http://blogs.iis.net/bills/archive/2009/02/16/iis7-request-routing-and-load-balancing-module-released.aspx, but I'd like to know if anyone has experience using it in production. What are a few pros / cons to using this module instead of another solution such as HAProxy?
IIS7 Load Balancing – Using Request Routing and Load Balancing Module
iis-7, load-balancing, proxy
Related Solutions
There are two main downsides:
Your load isn't evenly distributed. Sticky sessions stick, hence the name. While initial requests will be distributed evenly, some users will hold their sessions much longer than others. If many of those long-lived sessions happen to land on a single server, that server will carry much more load. Typically this doesn't have a huge impact, and it can be mitigated by adding more servers to your cluster.
Proxies funnel many users through a single IP address, all of which would get sent to a single server. While that typically does no harm beyond increasing individual server load, proxies can also operate as a cluster. A request arriving at your F5 from such a system would not necessarily be sent back to the same server if it comes out of a different proxy server in their proxy cluster.
AOL was at one point using proxy clusters, which really screwed with load balancers and sticky sessions. Most load balancers will now offer sticky sessions based on class C network ranges or, in the case of F5, cookie-based sticky sessions that store the end node in a cookie on the web request.
While cookie-based sessions should work, I've had some problems with them, and I typically choose IP-based sessions. BIG HOWEVER: I'm mostly working on internal apps - DMZ mileage may vary.
All that being stated, we've had some great success with sites running behind an F5 with sticky sessions and in-proc sessions.
You also might want to take a look at one of the in-memory distributed caching systems like Memcached or Velocity as an alternative to storing session state in SQL or the out-of-proc state service. You get close to the speed of in-proc memory with the ability to run it across several servers.
One useful benefit of an "L7" balancer like HAProxy is being able to use cookies to keep the same browser hitting the same backend server. This makes debugging client hits much easier.
L4 balancing may bounce a single user around among several backend servers. (In certain cases that can be advantageous, but for debugging or profiling, "L7" is much more valuable.)
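As a rough illustration of the cookie-based stickiness described above, here's a minimal HAProxy backend sketch; the backend name, server names, and addresses are placeholders, not anything from the original posts:

```
# Sketch: cookie-based sticky sessions in HAProxy.
# HAProxy inserts a SERVERID cookie on the first response, then routes
# subsequent requests from that browser to the matching server.
backend app_servers
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server web1 10.0.0.11:80 check cookie web1
    server web2 10.0.0.12:80 check cookie web2
```

Because the server's identity rides in the cookie rather than being inferred from the client IP, this sidesteps the proxy-cluster problem mentioned earlier.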
EDIT: There's also a potential speed advantage to HTTP balancing. With keep-alives, clients can establish a single TCP session to your balancer and then send many requests without re-establishing new TCP sessions (3-way handshakes). Similarly, many LBs maintain keep-alive sessions to the back-end systems, removing the need for the same handshake on the back end.
Strict TCP load balancing may not accomplish both of these as easily.
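For what it's worth, in HAProxy the keep-alive behavior described above can be sketched roughly like this (a minimal fragment, assuming a reasonably recent HAProxy that supports these options):

```
# Sketch: connection reuse on both sides of the balancer.
defaults
    mode    http
    option  http-keep-alive        # keep client-side connections open between requests
    option  prefer-last-server     # try to reuse the same server-side connection
    timeout http-keep-alive 10s    # how long an idle keep-alive connection is held
```

A pure TCP-mode balancer just pipes bytes, so it has no per-request view and can't manage keep-alives on each side independently like this.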
/* FWIW: I wouldn't say "L7" or "L4", I would say HTTP or TCP. But I'm a stickler for avoiding using the OSI to describe things it doesn't match up with well. */
I think fundamentally, if you're not sure what to deploy, go with what feels simple and natural to you. Test it (with Apache Bench, perhaps?) and make sure it works for your needs. To me, HTTP LB is more natural.
Best Answer
We are using it in production for a company info website, and we haven't had any problems with it at all yet. It works smoothly for taking servers down, and the load balancing also works nicely. We're using least response time, so one of the servers gets somewhat more requests. We're also going to move one of our e-commerce stores, which has a lot more traffic than the company info site, so we'll see how it works under heavier load. But our tests have shown that it should handle it without problems.
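For reference, the "least response time" setup described above is configured in ARR as a server farm in applicationHost.config; this is a hedged sketch (farm name and server addresses are made up, and the exact algorithm value should be checked against your ARR version):

```xml
<!-- Sketch: an ARR server farm using the least-response-time algorithm.
     Farm name and addresses are placeholders. -->
<webFarms>
  <webFarm name="myFarm" enabled="true">
    <server address="web1.internal" enabled="true" />
    <server address="web2.internal" enabled="true" />
    <applicationRequestRouting>
      <loadBalancing algorithm="LeastResponseTime" />
    </applicationRequestRouting>
  </webFarm>
</webFarms>
```

Taking a server down for maintenance then amounts to flipping its `enabled` attribute (or using the IIS Manager UI), which matches the smooth drain behavior described above.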