Nginx – cloudflare + nginx with limit_req and limit_conn


For the sake of simplicity, let's say I have a web server running nginx that serves a single PHP file (with a "hello world" message) via php5-fpm.

Let's say that the server is behind Cloudflare and that all requests to my server come in via Cloudflare.

Under a nearly default configuration, all IPs reported by nginx are Cloudflare IPs, therefore we use the realip module and follow this link to have nginx report the real client IP passed along by Cloudflare.
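For reference, the realip part of my configuration is roughly the sketch below (the set_real_ip_from ranges here are just examples; the full, current list comes from Cloudflare's published IP ranges):

# inside the http { } block
set_real_ip_from 103.21.244.0/22;   # example Cloudflare range
set_real_ip_from 173.245.48.0/20;   # example Cloudflare range
set_real_ip_from 2400:cb00::/32;    # example Cloudflare IPv6 range
# ...one set_real_ip_from line per published Cloudflare range...
real_ip_header CF-Connecting-IP;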

My next step is to limit both the connections and the requests to this PHP file to 1 request per second per user IP (but allow unlimited requests from Cloudflare themselves).

According to this answer it's safe to use $binary_remote_addr to enforce limits, because after the realip module runs, nginx rewrites the IP to whatever was provided in the CF-Connecting-IP header.

So, I start with this configuration:

limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=1r/s;
limit_conn conn_limit_per_ip 1;
limit_req zone=req_limit_per_ip burst=1 nodelay;

Meaning that I allow only 1 concurrent HTTP connection and 1 req/sec per IP, with 1 extra burst request within that second allowed (I'm feeling generous here).
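To put those directives in context, here is a stripped-down sketch of how I understand they fit together (the docroot and the php5-fpm socket path are just assumptions for this example):

http {
    limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;
    limit_req_zone  $binary_remote_addr zone=req_limit_per_ip:10m rate=1r/s;

    server {
        listen 80;
        root /var/www;                       # assumed docroot

        location = /index.php {
            limit_conn conn_limit_per_ip 1;
            limit_req  zone=req_limit_per_ip burst=1 nodelay;

            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass  unix:/var/run/php5-fpm.sock;   # assumed php5-fpm socket
        }
    }
}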

I know that $binary_remote_addr contains the user IP as reported by Cloudflare, but the connections and requests are still all arriving at the server via Cloudflare (hence I cannot use iptables for this).

My understanding of limit_conn and limit_req is that nginx will show a 503 status page to any user who exceeds these limits.
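(As a side note, I know newer nginx versions let me change that rejection code via limit_req_status / limit_conn_status, for example:

limit_req_status  429;
limit_conn_status 429;

but for this question I'm assuming the default 503.)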

Case 1:

To make things simple, let's say 300 users in the same region and using the same Cloudflare edge location access the site within the same second.

Because Cloudflare reports a different IP address for each of them and everyone is making a single request, no one will ever see that 503 status page, and all 300 requests will be allowed through Cloudflare.

Case 2:

Let's say 3 people access my page within that second. Of those, 2 users make a single request each, while the 3rd user uses ab (Apache Benchmark) to make 10 simultaneous requests.

Because Cloudflare reports a different IP address for each of them, I can assume the first 2 users will see my page without problems, while the 3rd person will successfully request my page twice (because I'm using burst + nodelay) and then get a 503 status page for the rest of his requests within that second. The next second the same thing repeats, effectively limiting his requests to 2 req/sec.

Concerns:

Because all those 503 responses are still being served via Cloudflare, will other users start seeing Cloudflare's messages saying that the origin is not available (the "click here to retry a live version" button), because previous requests failed?

I know that if the server is down (no network), Cloudflare's messages will be shown to everyone, and they take a few seconds to disappear even after the server is back up.

Will those "server offline" messages from Cloudflare be shown to other users when my nginx starts replying with error 503 to that particular bad user?

Best Answer

HTML (which is the output of PHP) is not cached by default, so each user will see the response to their own request unless you're caching on nginx. To be extra sure, set up a page rule that specifies "standard caching", or if you want to be paranoid you can set it to no caching - but then you have to be careful with your matching pattern, otherwise CSS/JS won't be cached.
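If you prefer to make that explicit from the origin instead of relying on a page rule, a minimal sketch on the nginx side would be something like this (the header value is just one sensible choice):

# inside the location serving the PHP file – tells Cloudflare and browsers not to cache the HTML
add_header Cache-Control "private, no-store";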

I don't believe 503 will be shown to other users. You can ask their support - they're reasonably responsive even on the free plans.