Nginx – Why does nginx reverse proxy incoming bandwidth increase and surpass outgoing by far when proxy_buffering is enabled

bandwidth, nginx, proxy, reverse-proxy, web-server

This is my current configuration…

proxy_buffering on;
proxy_buffer_size 32k;                         # buffer for the first part (headers) of the upstream response
proxy_buffers 128 32k;                         # up to 128 x 32k buffers per connection (4 MB)
proxy_send_timeout 20;
proxy_read_timeout 20;
#proxy_max_temp_file_size 1m;
proxy_temp_path /dev/shm/nginx_proxy_buffer;   # temp files for buffering go to tmpfs
proxy_pass $url;                               # upstream address comes from the $url variable

I used to not have proxy_buffering enabled; however, my new servers show a very high percentage of software interrupts (%si), so the CPU becomes the bottleneck once the reverse proxy handles about 300 Mbit/s.

With proxy buffering enabled, the software-interrupt load drops and I get transfer rates close to the full gigabit the servers are connected with.

However, the incoming bandwidth is almost double the outgoing bandwidth! The rates fluctuate, of course, but on average the incoming rate is nearly twice the outgoing, which I don't understand. This is very bad because my 95th-percentile billing takes the maximum of in/out…

My understanding is that if a user cancels a download, the data that has already been transferred from the source server into the buffer is lost, which would result in this behaviour. But it seems absurd that this happens often enough to cause a 100% overage…
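If that is what is happening, I suppose I could limit how far nginx reads ahead of the client by re-enabling the proxy_max_temp_file_size line with a small value, something like this (untested sketch, 1m is just the value I already had commented out):

proxy_buffering on;
proxy_buffers 128 32k;
proxy_max_temp_file_size 1m;   # stop pulling from the upstream once ~1 MB is queued on disk
# beyond this limit nginx waits for the client to drain the buffers before
# fetching more from the upstream, so less data is wasted on aborted downloads

Though I suspect that would bring back some of the software-interrupt load, since nginx would again be paced by slow clients.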

Any input is appreciated!

Best Answer

Do you have gzip enabled for clients? That could account for the difference, since nginx <-> backend connections aren't compressed by default (I can't remember whether the recent HTTP/1.1 backend support and the gunzip filter module let you safely enable gzip between nginx and the backend server or not).

EDIT: This doesn't explain why you don't see this behavior with proxy_buffering disabled, though. Maybe more clients disconnect if they have to wait?
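For reference, a client-facing gzip setup typically looks something like the block below (these are standard ngx_http_gzip_module directives; the values are only examples). If the backend responses arrive uncompressed, nginx pulls the full-size body in and sends the compressed body out, which would make incoming exceed outgoing the way you describe:

gzip on;                        # compress responses sent to clients
gzip_comp_level 5;              # example level
gzip_proxied any;               # also compress responses fetched via proxy_pass
gzip_types text/plain text/css application/javascript application/json;

A quick way to check is to compare the Content-Encoding header of a response fetched directly from the backend with one fetched through the proxy.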