Nginx and PHP-FPM concurrent requests from same IP

nginx, php-fpm

I am having trouble serving concurrent, time-consuming requests coming from the same IP.

  • The first request is expected to take 6 minutes to respond (this is normal behaviour; my question is not about making it respond faster).
  • The second request should take less than 100 ms to respond.

What is happening is that the server waits for the first request to finish before sending the response to the second one.
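
From the command line, the problem shows up with something as simple as the two requests below (the URLs are hypothetical placeholders for my real endpoints):

$ curl -s -o /dev/null -w "slow: %{time_total}s\n" https://example.com/slow-endpoint &
$ curl -s -o /dev/null -w "fast: %{time_total}s\n" https://example.com/fast-endpoint

The second curl should come back in well under a second, but it only returns once the first one has finished.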

My setup is an AWS EC2 instance with 2 vCPUs (which I believe should be enough to handle requests concurrently).

Each request goes through an Nginx server to a PHP-FPM process. I thought the problem was that I had misconfigured PHP-FPM, but after reading up on it, this is my PHP-FPM configuration:

$ cat www.conf | grep max_children     
;   static  - a fixed number (pm.max_children) of child processes;
;             pm.max_children      - the maximum number of children that can
;             pm.max_children           - the maximum number of children that
pm.max_children = 5

$ cat www.conf | grep start_servers    
;             pm.start_servers     - the number of children created on startup.
pm.start_servers = 2

$ cat www.conf | grep min_spare_servers
;             pm.min_spare_servers - the minimum number of children in 'idle'
; Default Value: min_spare_servers + (max_spare_servers - min_spare_servers) / 2
pm.min_spare_servers = 1

$ cat www.conf | grep max_requests     
;pm.max_requests = 500
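
For completeness, the uncommented pm directives in that pool file can be listed in one go, which also shows the process manager mode (static, dynamic or ondemand):

$ grep -E '^pm' www.conf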

What am I missing? Where should I look to debug this behaviour?

Don't hesitate to tell me in the comments if you need more information to help me; I'm just a junior…

Thank you all, and have a good weekend.

Best Answer

Your number of running PHP workers is really low, so it might be that the first request is using up all of the available workers, and the second request is therefore blocked until one of them is free.
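
One way to check whether the pool really is saturated is to count the php-fpm children while the slow request is running, or to enable PHP-FPM's built-in status page (pm.status_path in www.conf), which reports active and idle workers. A quick look at the children can be as simple as this (the process titles shown are the usual php-fpm ones; names can differ between distributions):

$ ps aux | grep "[p]hp-fpm: pool"

Note that ps alone does not show whether a child is idle or busy, so the status page is the more reliable check.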

Try with these settings:

pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 3
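
Depending on the rest of the pool file, pm.max_spare_servers may need to be raised as well, since PHP-FPM requires pm.start_servers to sit between pm.min_spare_servers and pm.max_spare_servers. The new values only take effect after PHP-FPM is reloaded; the service name varies with the distribution and PHP version (php-fpm, php7.4-fpm, php8.1-fpm, …), so adjust as needed:

$ sudo systemctl reload php-fpm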

The numbers that actually work depend on your real traffic. Basically, pm.max_children is the maximum number of workers that can be serving requests at the same time (one request per worker), and you need to set it to a value that matches your traffic.
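
If the limit really is being hit, PHP-FPM says so itself: it logs a warning along the lines of "server reached pm.max_children setting (5), consider raising it" in the pool's error log, so that log is worth checking (the path below is just a common default; yours may differ):

$ grep max_children /var/log/php-fpm/error.log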
