NGINX SSL termination slow

Tags: nginx, proxy, ssl

I am using NGINX as a reverse proxy in front of an upstream IIS web server. I am proxying (proxy_pass) to an HTTPS binding on the IIS server, so I understand SSL is being encrypted/decrypted twice — but it shouldn't be as slow as it is.

NGINX version 1.4.4

OpenSSL version 1.0.1f

Things I've tried:

  • Tweaking SSL cipher list
  • Recompiling NGINX with debugging enabled and upgrading to the latest OpenSSL
  • Inspecting access/error/debug logging output
  • Playing w/ various NGINX directives
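For reference, one direction the cipher-list tweaking could take is dropping SSLv3 and the RC4-first ordering in favor of ECDHE suites plus session reuse, so that repeat connections skip the full handshake. This is only a sketch — the directive values below are illustrative, not something verified against this environment:

```nginx
# Sketch only: illustrative TLS tuning, not tested on this setup.
ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;   # drop SSLv3
ssl_ciphers         ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4;
ssl_prefer_server_ciphers on;
ssl_session_cache   shared:SSL:10m;           # resume sessions instead of full handshakes
ssl_session_timeout 10m;
```

Session resumption mainly helps repeat visitors; for a cold benchmark like ab without keep-alive, every request still pays a full handshake.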

Using ApacheBench for testing:

Requests per second:

  • HTTP (through nginx) = 3312 RPS
  • HTTPS (through nginx) = 273 RPS <-- ???
  • HTTP (direct to backend) = 4237 RPS
  • HTTPS (direct to backend) = 1349 RPS

Example ApacheBench output:

abs -c 100 -n 1000 https://nginxtest1.mydomain.com/
Benchmarking nginxtest1.mydomain.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:        nginx/1.4.3
Server Hostname:        nginxtest1.mydomain.com
Server Port:            443
SSL/TLS Protocol:       TLSv1,ECDHE-RSA-AES256-SHA,2048,256
Document Path:          /
Document Length:        659 bytes
Concurrency Level:      100
Time taken for tests:   3.493 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      917000 bytes
HTML transferred:       659000 bytes
Requests per second:    286.27 [#/sec] (mean) <-----?????
Time per request:       349.320 [ms] (mean)
Time per request:       3.493 [ms] (mean, across all concurrent requests)
Transfer rate:          256.36 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       39  149  48.4    148     340
Processing:    47  180 179.2    163    3156
Waiting:       25  133 182.8    107    3130
Total:        145  330 181.7    320    3213

Percentage of the requests served within a certain time (ms)
  50%    320
  66%    331
  75%    339
  80%    355
  90%    431
  95%    502
  98%    529
  99%    533
 100%   3213 (longest request)

nginx.conf:

worker_processes  auto;

events {
    worker_connections  1024;
    debug_connection    192.168.2.98;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile      on;

    keepalive_timeout    65;
    lingering_time       240;
    client_max_body_size 100m;

    ssl_session_cache    shared:SSL:10m;

    include /usr/local/nginx/conf/srvnj04.conf;

    gzip on;
    gzip_disable "msie6";
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}

srvnj04.conf:

server {

    listen      80;
    server_name nginxtest1.mydomain.com;

    log_format custom4 '$remote_addr '
                       'Conn: $connection '                 # connection serial number
                       'Conn reqs: $connection_requests '   # requests made through this connection
                       '$status '
                       '$http_referer '
                       #'$body_bytes_sent '
                       '$request '
                       #'"$http_user_agent" '
                       'Processing: $request_time '
                       'Response: $upstream_response_time ';
                       #'$bytes_sent '
                       #'$request_length';

    access_log /var/log/nginx/srvnj04_access.log custom4;
    error_log  /var/log/nginx/srvnj04_error.log debug;

    ssl off;

    location / {
        #empty_gif;
        proxy_pass http://nginxtest2.appsrv008.mydomain.com;
    }
}

upstream https_backend {

    server 192.168.2.4:444;

    #keepalive 32;
    keepalive 128;
}


server {

    listen      443 ssl so_keepalive=0h:5m:0;
    server_name nginxtest1.mydomain.com;

    keepalive_timeout  54;
    keepalive_requests 128;

    error_log /var/log/nginx/nginx_debug_srvnj04.log debug;

    ssl on;

    ssl_certificate     ssl/star_mydomain_net_sol_CA_srvnj04b.cer;
    ssl_certificate_key ssl/star_mydomain_net_sol.key;
    ssl_dhparam         ssl/dhparam.pem;

    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;

    ssl_ciphers RC4:HIGH:!aNULL:!MD5:!kEDH;

    ssl_prefer_server_ciphers on;

    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    location / {
        #proxy_pass https://nginxtest1.mydomain.com;
        proxy_pass https://https_backend;
    }
}

I have been looking into this for about a week now — any insight as to what might be wrong would be greatly appreciated.

As requested, full ab output:

HTTPS

# abs -c 100 -n 1000 https://nginxtest1.mydomain.com/
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking nginxtest1.mydomain.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/1.4.3
Server Hostname:        nginxtest1.mydomain.com
Server Port:            443
SSL/TLS Protocol:       TLSv1,ECDHE-RSA-AES256-SHA,2048,256

Document Path:          /
Document Length:        334 bytes

Concurrency Level:      100
Time taken for tests:   3.458 seconds
Complete requests:      1000
Failed requests:        0
Non-2xx responses:      1000
Total transferred:      503000 bytes
HTML transferred:       334000 bytes
Requests per second:    289.17 [#/sec] (mean)
Time per request:       345.820 [ms] (mean)
Time per request:       3.458 [ms] (mean, across all concurrent requests)
Transfer rate:          142.04 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       54  186  73.7    174     509
Processing:    25  143  35.8    143     227
Waiting:       22   79  25.7     80     164
Total:         91  329  80.5    316     663

Percentage of the requests served within a certain time (ms)
  50%    316
  66%    318
  75%    323
  80%    334
  90%    404
  95%    542
  98%    568
  99%    595
 100%    663 (longest request)

HTTP

# abs -c 100 -n 1000 http://nginxtest1.mydomain.com/
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking nginxtest1.mydomain.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/1.4.3
Server Hostname:        nginxtest1.mydomain.com
Server Port:            80

Document Path:          /
Document Length:        659 bytes

Concurrency Level:      100
Time taken for tests:   0.772 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      917000 bytes
HTML transferred:       659000 bytes
Requests per second:    1295.26 [#/sec] (mean)
Time per request:       77.204 [ms] (mean)
Time per request:       0.772 [ms] (mean, across all concurrent requests)
Transfer rate:          1159.92 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0       2
Processing:    33   68  27.7     53     296
Waiting:       12   65  28.4     53     145
Total:         34   69  27.7     54     297

Percentage of the requests served within a certain time (ms)
  50%     54
  66%     86
  75%     94
  80%     97
  90%    103
  95%    108
  98%    113
  99%    122
 100%    297 (longest request)

Best Answer

I understand SSL is being encrypted/decrypted twice- but it shouldn't be as slow as it is.

Why not? Note that it's not only double encryption/decryption — it also adds one more TCP + TLS handshake per request. To minimize handshake overhead you should use keepalive connections to the upstream: http://nginx.org/r/keepalive
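Concretely, the keepalive directive in the upstream block only takes effect when the proxied connection uses HTTP/1.1 and the Connection header is cleared — otherwise nginx closes each backend connection after one request. A minimal sketch against the existing https_backend upstream (your other directives unchanged):

```nginx
upstream https_backend {
    server    192.168.2.4:444;
    keepalive 128;                      # pool of idle connections kept open to the backend
}

server {
    # ... existing listen/ssl directives ...
    location / {
        proxy_pass https://https_backend;
        proxy_http_version 1.1;         # upstream keepalive requires HTTP/1.1
        proxy_set_header Connection ""; # strip "Connection: close" so connections are reused
    }
}
```

With the pool reused, the nginx-to-IIS TLS handshake is paid once per pooled connection instead of once per request; the client-side handshake cost remains.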

But in general, what you're doing is bad practice — you should use an IPsec tunnel instead.
