Nginx reverse proxy, SSL offloading, caching and PageSpeed all in one.

Tags: mod-pagespeed, mod-proxy, nginx, reverse-proxy

We currently host everything on Windows IIS 7 servers. We just moved to Azure, and as on many clouds, hosting Windows is more expensive than hosting Linux. Azure also has its own limitations for virtual machines (compared to AWS), since you can really only bind one public IP address to a web service. To overcome this limitation, and to add some failover and caching functionality, we are looking at Nginx as a load balancer and reverse proxy to put in front of our IIS servers.

Recently we learned that Google PageSpeed, originally for Apache, is also available for Nginx (ngx_pagespeed). Based on online tutorials and our limited experience, we figure we can get all the benefits of Nginx by using three separate layers of Nginx servers (see below). My question is: how can this be done in just one virtual host?

Here is an example of what we can probably set up now (a rough sketch of these three layers follows the list):

  • Nginx1 – This server sits in front of Nginx2, does SSL offloading, and caches static content on the Linux server it runs on. It would cache static resources to both disk and RAM.

  • Nginx2 – This Nginx server sits in front of Nginx3 and includes Google PageSpeed for Nginx (ngx_pagespeed). The goal for this server is to minify and combine scripts and optimize images. Since these operations can be expensive, we have placed Nginx1 in front of this server.

  • Nginx3 – (HAProxy would also work if we are stuck with three Nginx instances running at once.) This is the last Nginx server running on the Linux box. Its job is just to act as a reverse proxy for our IIS farm. The idea is that if we bring up another IIS server (currently we only have two), we just add it to this Nginx's configuration and Nginx does the load balancing. We still use sticky sessions (because of some sloppy ASP / ASP.NET code), but at least we can divide the workload between different virtual machines and scale out by adding a machine to our "IIS farm" (not using the IIS farm feature).

  • 2x IIS servers – These are just IIS servers (all identical) with multiple sites configured; sites are deployed to them via FTP or Git.
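
For reference, here is a rough, untested sketch of what we imagine those three layers looking like as separate Nginx configurations (the IPs, ports and cache paths are placeholders):

    # Nginx1 - SSL offloading + static caching
    server {
        listen 443 ssl;
        server_name example.com;
        # ssl_certificate / ssl_certificate_key here
        location / {
            proxy_cache static_cache;           # cache zone defined via proxy_cache_path in the http block
            proxy_pass  http://127.0.0.1:8081;  # hand off to the pagespeed layer
        }
    }

    # Nginx2 - ngx_pagespeed optimization
    server {
        listen 8081;
        pagespeed on;
        pagespeed FileCachePath /var/ngx_pagespeed_cache;
        location / {
            proxy_pass http://127.0.0.1:8082;   # hand off to the load-balancing layer
        }
    }

    # Nginx3 - reverse proxy / load balancer for the IIS farm
    upstream iis_farm {
        server 192.168.0.100:80;
        server 192.168.0.101:80;
    }
    server {
        listen 8082;
        location / {
            proxy_pass http://iis_farm;
        }
    }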

Questions:

  1. Would we be able to forgo having three separate Nginx configurations (virtual hosts) and just combine everything into one? If so, an example of how to configure Nginx for this in a single configuration would be AWESOME!
  2. How can we keep the goals above and still pass the X-Forwarded-For IP address to the back-end IIS machines?
  3. What suggestions, if any, are there for a fallback in case our Nginx server fails? With the setup above it is our single point of failure. Would having two Linux servers with the setup above, combined with DNS failover or just dual DNS records, fix the issue of our Nginx server going down?
  4. How resource-hungry is Nginx? Is it safe to assume that an Azure Extra Small ("micro") instance with only 768 MB of RAM could handle 50-100 concurrent visitors?

Thanks in advance!!!!!


References:

NginX Pagespeed:
http://ngxpagespeed.com/ngx_pagespeed_example/

NginX Reverse Proxy:
http://www.andrewparisio.com/2011/02/how-to-create-reverse-https-failover.html

Best Answer

  1. Yes, it is absolutely possible to combine these Nginx servers into one. Just use proxy_pass.
  2. See proxy_set_header in the http block below.
  3. Run two Nginx machines (same config) in combination with DNS round-robin (or put something like an AWS load balancer in front; I don't know if Azure has an equivalent).
  4. At least. You should think in thousands of connections with Nginx; your IIS servers would probably be the bottleneck. (This doesn't take into account the extra load PageSpeed processing will put on this setup; you'll have to give it a try and find out.)

First, in your http block put:

    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
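
If your back-end sites also need the original host name and protocol (an assumption; it depends on what your ASP / ASP.NET code expects), you could additionally set:

    proxy_set_header Host $host;                 # pass the original Host header to IIS
    proxy_set_header X-Forwarded-Proto $scheme;  # tell IIS whether the client used http or https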

    upstream pagespeed {
          server 127.0.0.1:8081;    # weight=10 max_fails=3 fail_timeout=30s;
    }
    upstream iis {
          server 192.168.0.100:80 weight=10 max_fails=3 fail_timeout=10s;
          server 192.168.0.101:80 weight=10 max_fails=3 fail_timeout=10s;
    }

Then you will only need the following server blocks (stripped down)

    server {
        listen       443;            # SSL offloader
        server_name  example.com;

        #do some ssl things here

        location / {
            proxy_pass   http://pagespeed;   # proxy to pagespeed on same nginx
        }
    }

    server {
        listen       8081;
        server_name  example.com;

        #do some pagespeed things here
        #pagespeed on

        location / {
            proxy_pass   http://iis;         # proxy to 2 backend IIS servers
        }
    }
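
For completeness, the "#do some ssl things here" placeholder would normally hold at least the certificate directives, for example (the paths are just examples; you also need SSL enabled on the listen, i.e. "listen 443 ssl;", or "ssl on;" on older versions):

    ssl_certificate      /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key  /etc/nginx/ssl/example.com.key;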

Use a Unix socket in your upstream config for even better performance. You can also put the PageSpeed cache in memcached; for example:
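
A minimal sketch of both ideas, assuming a local memcached on its default port and a socket path of your choosing (untested):

    upstream pagespeed {
        server unix:/var/run/nginx-pagespeed.sock;        # unix socket instead of 127.0.0.1:8081
    }

    server {
        listen unix:/var/run/nginx-pagespeed.sock;        # pagespeed server block listens on the same socket
        pagespeed on;
        pagespeed FileCachePath /var/ngx_pagespeed_cache; # still required even when memcached is used
        pagespeed MemcachedServers "127.0.0.1:11211";     # keep the pagespeed cache in memcached
        location / {
            proxy_pass http://iis;
        }
    }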