My website started taking 4-5 seconds to load during peak hours; it is almost unusable. Traffic: 40 MB/s out, 6 MB/s in (almost 95% of the outbound traffic is downloads of files 0.5-2 GB in size). Over 100 simultaneous connections. The machine itself responds without a problem, but uploading through the website crawls at about 50 KB/s and downloading at 50 KB/s, while over FTP everything is fine and reaches hundreds of KB/s both ways. So I think the problem is somewhere in the configuration of nginx, php-fpm or MySQL, but I actually have no idea how to debug it. I've googled and increased values to handle thousands of simultaneous clients, but the problem is still the same.
netstat -na | grep :80 | wc -l
250   # when this is around 150, and...
netstat -an | grep 80 | grep ESTA | wc -l
150   # ...this is below 100, the site is fine; otherwise it loads about 3 times longer than usual
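The two netstat pipelines above only give totals; counting connections per TCP state is often more telling (a pile-up of TIME_WAIT vs. ESTABLISHED points at different problems). A minimal sketch in Python, assuming Linux-style `netstat -an` output (the sample lines and addresses are made up for illustration):

```python
from collections import Counter

def count_states(netstat_output, port=80):
    """Count TCP connection states for a given local port in `netstat -an` output."""
    counts = Counter()
    for line in netstat_output.splitlines():
        fields = line.split()
        # Expected columns: proto, recv-q, send-q, local addr, foreign addr, state
        if len(fields) >= 6 and fields[0].startswith("tcp"):
            if fields[3].endswith(":%d" % port):
                counts[fields[5]] += 1
    return counts

sample = """\
tcp        0      0 10.0.0.1:80   203.0.113.5:51000   ESTABLISHED
tcp        0      0 10.0.0.1:80   203.0.113.6:51001   TIME_WAIT
tcp        0      0 10.0.0.1:80   203.0.113.7:51002   ESTABLISHED
tcp        0      0 10.0.0.1:22   203.0.113.8:51003   ESTABLISHED
"""
print(dict(count_states(sample)))  # {'ESTABLISHED': 2, 'TIME_WAIT': 1}
```

In practice you would feed it the real output, e.g. `count_states(subprocess.check_output(["netstat", "-an"], text=True))`.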
nginx.conf:
user www-data;
worker_processes 8;
pid /var/run/nginx.pid;
worker_rlimit_nofile 200000;

events {
    worker_connections 32768;
    multi_accept on;
    use epoll;
}

http {
    access_log off;
    limit_conn_zone $binary_remote_addr zone=conn:10m;
    #limit_req_zone $binary_remote_addr zone=req:10m rate=250r/s;
    #limit_req zone=req burst=20 nodelay;
    upload_progress uploads 5m;
    upload_progress_json_output;
    sendfile on;
    send_timeout 60s;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 20;
    client_max_body_size 10G;
    client_body_buffer_size 256k;
    types_hash_max_size 2048;
    server_tokens off;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    #access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log crit;
    gzip on;
    gzip_disable "msie6";
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
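For scale: with the values in the config above, the theoretical connection capacity is far beyond the ~250 connections observed, so raw nginx limits are unlikely to be the bottleneck. A quick back-of-the-envelope check (the "two connections per proxied client" figure is the usual rough rule for proxy/fastcgi setups, since each request holds both a client-side and an upstream connection):

```python
worker_processes = 8        # from nginx.conf above
worker_connections = 32768  # from the events block above

total = worker_processes * worker_connections
print(total)        # 262144 theoretical simultaneous connections
# Each proxied/fastcgi request uses roughly 2 connections (client + upstream):
print(total // 2)   # 131072
```

Either number dwarfs the ~150-250 connections seen in netstat, which is a hint to look below nginx (PHP, MySQL, disk) for the slowdown.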
mysite-virtual.conf:
location ~ \.php$ {
    #limit_req zone=req;
    fastcgi_buffer_size 128k;
    fastcgi_busy_buffers_size 256k;
    fastcgi_buffers 256 16k;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_temp_file_write_size 256k;
    include fastcgi_params;
}
/etc/php5/fpm/pool.d/www.conf:
pm = dynamic
pm.max_children = 50
pm.start_servers = 25
pm.min_spare_servers = 25
pm.max_spare_servers = 50
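A common sanity check for the pool settings above is whether `pm.max_children` fits in RAM: free memory for PHP divided by the average resident size of one php-fpm child. A sketch with hypothetical numbers (the reserved headroom and per-child RSS below are assumptions, not measurements; measure your own with `ps` under load):

```python
# Hypothetical sizing numbers for an 8 GB box -- replace with measured values.
total_ram_mb = 8192      # the machine has 8 GB RAM
reserved_mb = 2048       # assumed headroom for nginx, MySQL and OS page cache
avg_child_rss_mb = 64    # assumed average RSS of one php-fpm child

max_children = (total_ram_mb - reserved_mb) // avg_child_rss_mb
print(max_children)  # 96
```

By this rough estimate, `pm.max_children = 50` leaves comfortable headroom, another hint that the pool size itself is not the bottleneck here.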
Sysctl tuning:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_congestion_control = cubic
nofile limit
root hard nofile 40000
root soft nofile 40000
www-data hard nofile 40000
www-data soft nofile 40000
MySQL status - aborted connections?

Connections                          ø per hour    %
max. concurrent connections  25      ---           ---
Failed attempts              0       0.00          0.00%
Aborted                      21      4.19          0.08%
Total                        25 k    5,040.80      100.00%
During peak hours, when the page took several seconds to load, Mytop and phpMyAdmin showed many queries stuck in the "Copying to tmp table" state, so I increased tmp_table_size and max_heap_table_size.
Please give me some advice on where the bottleneck could be, because I am lost. This is my first server in this configuration and I may have forgotten to tune something.
Nginx 1.2.1, php5-fpm
Debian 7.1 Wheezy
2x L5420 @ 2.50GHz
8GB RAM
Best Answer
Solved! Everything was in MySQL: slow queries caused by missing indexes (some queries ran about 20 times slower than they should have).
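The effect of a missing index is easy to see in a query plan. A self-contained sketch using SQLite (the answer's server was MySQL, where you would use `EXPLAIN` against the real tables instead; the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, owner INTEGER, name TEXT)")
cur.executemany("INSERT INTO files (owner, name) VALUES (?, ?)",
                [(i % 100, "file%d" % i) for i in range(10000)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the human-readable detail.
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT name FROM files WHERE owner = 42"
before = plan(query)   # without an index: a full table scan ("SCAN ...")
cur.execute("CREATE INDEX idx_files_owner ON files (owner)")
after = plan(query)    # with the index: an index search ("SEARCH ... USING INDEX ...")
print(before)
print(after)
```

On MySQL, enabling the slow query log and running `EXPLAIN` on the offenders is the usual way to find which indexes are missing.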
UPDATE:
I also had to tune read-ahead in Linux to increase throughput: from 256 (the default) to 16384.
After this change, sequential read speed increased from 40 MB/s to 260 MB/s, and MRTG showed outbound traffic almost doubling. So before the change the requested traffic simply could not be served by the HDD; it was an I/O bottleneck, and that is why the website took seconds to load!
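For reference, the read-ahead value is set with `blockdev --setra`, and the number is counted in 512-byte sectors, not bytes (the device name below is just an example). The 256 → 16384 change above therefore moves the read-ahead window from 128 KiB to 8 MiB:

```python
sectors = 16384          # as in e.g. `blockdev --setra 16384 /dev/sda` (example device)
bytes_per_sector = 512   # --setra counts 512-byte sectors
readahead_mib = sectors * bytes_per_sector // (1024 * 1024)
print(readahead_mib)     # 8
```

A large window like this helps large sequential downloads (the 0.5-2 GB files here) at the cost of wasted reads on random-access workloads, so it is worth benchmarking both before committing to it.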