Nginx: Setting server_names_hash_max_size and server_names_hash_bucket_size

nginx

We are using Nginx as a reverse proxy in front of Apache in a service that gives anyone their own website. At account creation, the system writes a new nginx conf file for the domain with two entries, one for port 80 and one for 443. Roughly every 30 domains, we see this error when restarting:

Restarting nginx: nginx: [emerg] could not build the server_names_hash, 
you should increase either server_names_hash_max_size: 256 
or server_names_hash_bucket_size: 64.

With around 200 domains and growing, we have had to raise server_names_hash_max_size to 4112, and we are concerned this will not scale well. I'm looking to understand how these directives work and what the optimal settings would be so that we can grow to thousands of domains this way.

Also, at that hash size nginx is starting to take several seconds to reload, which makes the system unavailable while it restarts.

Here are the overall settings (running on Ubuntu Server 10.10 with nginx/1.0.4):

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 4096;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 300;
    types_hash_max_size 2048;
    # server_tokens off;

    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    # server_names_hash_max_size 2056;
    server_names_hash_max_size 4112;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;

    ssl_session_cache shared:SSL:10m;
    ssl_ciphers ALL:!kEDH:-ADH:+HIGH:+MEDIUM:-LOW:+SSLv2:-EXP;
}

(Below the ciphers are a couple of main site configs and a catch-all):

include /etc/user-nginx-confs/*;

server {
    listen 80;
    server_name .domain.com;
    location / {
        proxy_pass http://127.0.0.1:8011;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-SystemUse-Header 111;
    }
}

server {
    listen 443 ssl;
    server_name .suredone.com;
    ssl_certificate /etc/apache2/sddbx/sdssl/suredone_chained.crt;
    ssl_certificate_key /etc/apache2/sddbx/sdssl/suredone.key;
    location / {
        proxy_pass http://127.0.0.1:44311;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-SystemUse-Header 111;
    }
}

server {
    listen 80 default_server;
    listen 443 default_server ssl;
    server_name _;
    ssl_certificate /ssl/site_chained.crt;
    ssl_certificate_key /ssl/site.key;
    return 444;
}

(And a sample user conf file):

server {
    listen 80;
    server_name username.domain.com;
    location / {
        proxy_pass http://127.0.0.1:8011;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-SystemUse-Header 1111;
    }
}

server {
    listen 443 ssl;
    server_name username.domain.com;
    ssl_certificate /ssl/site_chained.crt;
    ssl_certificate_key /ssl/site.key;
    location / {
        proxy_pass http://127.0.0.1:44311;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-SystemUse-Header 1111;
    }
}

Any help or direction is greatly appreciated!

Best Answer

The list of server names that nginx serves is stored in a hash table for fast lookup. As you add entries, you have to increase the maximum size of the hash table (server_names_hash_max_size) and/or the size of its buckets (server_names_hash_bucket_size).
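The documented tuning order is to first set server_names_hash_max_size to a value close to the number of server names, and only raise server_names_hash_bucket_size (to the next power of two, since it should be a multiple of the processor cache line size) if that alone does not help or startup becomes unacceptably slow. As a rough sketch for your growth target, with illustrative values rather than tuned recommendations:

http {
    # Enough headroom for a few thousand server names.
    server_names_hash_max_size 4096;

    # Double the default only if raising max_size alone does not
    # resolve the [emerg] error, or if long names no longer fit.
    server_names_hash_bucket_size 128;
}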

Given the nature of your setup, I can't think of any way for you to easily reduce the number of server names you're storing in the table. I will suggest, though, that you not "restart" nginx, but rather simply have it reload its configuration. For instance:

service nginx reload
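Unlike a restart, a reload is graceful: the master process starts new workers with the new configuration while the old workers finish serving in-flight requests, so the site stays available. It is also worth validating the configuration first so that a typo in a generated conf file never reaches the reload. For example:

# Test the configuration (this also reports hash-size problems),
# then reload only if the test passes
nginx -t && service nginx reload

# Equivalently, signal the running master process directly
nginx -s reload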