PHP – Apache ab – testing with 1000 concurrency

apache-2.4, benchmark, php

I am just trying to gather some information on server capacity and ran these Apache ab tests.

I tested with 1, 10, 100, and 1000 concurrency over one minute each. The result below is for 1000 concurrent users.

I am using this benchmark script – https://github.com/odan/benchmark-php/blob/master/benchmark.php
The only thing I have changed is replacing the MySQL BENCHMARK(1 million, ENCODE(…)) query with a simpler BENCHMARK(1 million, 1+1).

The script takes approximately 0.52 seconds to complete, so in one minute I would expect roughly 115 completed requests under ideal conditions.
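The back-of-the-envelope arithmetic behind that expectation (assuming requests are served strictly one at a time, with the measured 0.52 s script runtime):

```python
# Ideal sequential request count for the test window (assumptions: 0.52 s per
# request as measured in the browser, 60 s test duration from -t 60).
test_duration = 60.0       # seconds
time_per_request = 0.52    # seconds

expected_requests = test_duration / time_per_request
print(round(expected_requests))  # ~115 completed requests
```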

Result of ab -t 60 -n 5000 -kc 1000 mydomain-name/benchmark.php

Benchmarking mydomain-name (be patient)
Finished 111 requests

Server Software:        Apache/2.4.18
Document Path:          /benchmark.php
Document Length:        4210 bytes

Concurrency Level:      1000
Time taken for tests:   60.383 seconds
Complete requests:      111
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      486180 bytes
HTML transferred:       467310 bytes
Requests per second:    1.84 [#/sec] (mean)
Time per request:       543987.550 [ms] (mean)
Time per request:       543.988 [ms] (mean, across all concurrent requests)
Transfer rate:          7.86 [Kbytes/sec] received

Connection Times (ms)
                  min  mean[+/-sd] median   max
Connect:       57  136  42.2    140     199
Processing:  2362 30681 16963.4  31249   60180
Waiting:     2362 30680 16963.4  31248   60180
Total:       2425 30817 17005.2  31389   60378

Percentage of the requests served within a certain time (ms)
  50%  31306
  66%  39785
  75%  45021
  80%  47629
  90%  54320
  95%  57756
  98%  58191
  99%  59632
 100%  60378 (longest request)

So it shows that, at a concurrency level of 1000, it completed 111 requests. My questions are:

1) It shows two Time per request values. I know the script takes about 520 milliseconds to run when I check it in the browser. So is the second value, 543.988 [ms] (mean, across all concurrent requests), the actual time per request while the test was running? The first Time per request value of 543987.550 seems to be just 543.988 × 1000 (concurrent users). So does that mean it took approximately 544 seconds per request, whereas "Time taken for tests" says 60.383 seconds?
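The two figures are indeed related by the concurrency level. A sketch of how they can be reconstructed from the report's own totals (the printed "Time taken" is rounded, so the results are close to, but not bit-identical with, ab's output):

```python
# Reconstructing ab's two "Time per request" figures from the 1000-concurrency
# report. Inputs are the values printed in the report itself.
concurrency = 1000
time_taken = 60.383        # seconds, "Time taken for tests"
complete_requests = 111

# mean, across all concurrent requests: total wall time / completed requests
per_request_all = time_taken * 1000 / complete_requests      # milliseconds
# mean (per simulated user): the above scaled by the concurrency level
per_request_mean = per_request_all * concurrency             # milliseconds

print(f"{per_request_all:.3f} ms")   # ~543.991  (report: 543.988)
print(f"{per_request_mean:.3f} ms")  # ~543990.991 (report: 543987.550)
```

So the 543987.550 ms figure is not how long the test ran; it is the mean latency one simulated user would see, i.e. the per-request mean multiplied by the concurrency level.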

All I am trying to do is learn at what point load starts affecting server performance. Looking at the above, at a concurrency level of 1000 it is still able to serve 1.84 requests per second at a mean time per request of 543 milliseconds – which is what one would expect when there is no load?

If you are interested, here is the data for concurrency levels of 10 and 100.

ab -t 60 -k -n 500 -c 10 mydomain-name/benchmark.php

Benchmarking mydomain-name (be patient)
Completed 100 requests
Finished 111 requests

Document Path:          /benchmark.php
Document Length:        4210 bytes

Concurrency Level:      10
Time taken for tests:   60.038 seconds
Complete requests:      111
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      486180 bytes
HTML transferred:       467310 bytes
Requests per second:    1.85 [#/sec] (mean)
Time per request:       5408.824 [ms] (mean)
Time per request:       540.882 [ms] (mean, across all concurrent requests)
Transfer rate:          7.91 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       10   11   1.5     11      20
Processing:  1743 5170 647.3   5251    6893
Waiting:     1743 5169 647.3   5251    6892
Total:       1754 5181 647.5   5262    6906

Percentage of the requests served within a certain time (ms)
  50%   5260
  66%   5308
  75%   5370
  80%   5391
  90%   5441
  95%   5510
  98%   5540 
  99%   5967
 100%   6906 (longest request)

I repeated the same test with 100 concurrent users.

ab -t 60 -n 2000 -c 100 -k mydomain-name/benchmark.php

Benchmarking mydomain-name (be patient)
    Finished 114 requests

Server Software:        Apache/2.4.18
Server Port:            80

Document Path:          /benchmark.php
Document Length:        4210 bytes

Concurrency Level:      100
Time taken for tests:   60.683 seconds
Complete requests:      114
Failed requests:        0
Keep-Alive requests:    0
Total transferred:      499320 bytes
HTML transferred:       479940 bytes
Requests per second:    1.88 [#/sec] (mean)
Time per request:       53230.746 [ms] (mean)
Time per request:       532.307 [ms] (mean, across all concurrent requests)
Transfer rate:          8.04 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       10   21   4.0     21      28
Processing:  1932 30540 16362.9  30710   54039
Waiting:     1931 30539 16362.9  30710   54038
Total:       1953 30561 16362.6  30732   54066

Percentage of the requests served within a certain time (ms)
  50%  30732
  66%  41063
  75%  46327
  80%  48960
  90%  52212
  95%  52273
  98%  52288
  99%  52840
 100%  54066 (longest request)

Best Answer

What you're actually seeing is an application that is "serializing" request processing: only one request is being processed at a time, regardless of how many requests are issued to it.

Note the consistency of the requests-per-second figure across concurrency levels (always around 1.8), and note that 1.8 is about 1 divided by the service time of a single request (0.53 seconds).

The application receives 1, 10, 100, or 1000 requests; picks one of them; queues the rest (there are various ways this is done under the hood); processes the one it picked in 0.53 seconds; returns the result; then picks another queued request, processes it, returns the result, and so on for 60 seconds.

So that's the "capacity" of this configuration: a little under 2 requests per second, irrespective of the incoming concurrent request rate.
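This serialized behavior can be sketched as a toy model (assumptions: a single worker and a fixed 0.53 s service time, neither of which is confirmed by the question): throughput stays pinned at 1/service_time at every concurrency level, while per-request latency grows with the queue depth, which is exactly the pattern in the three ab reports above.

```python
# Toy model of a fully serialized server: one worker, fixed service time.
# Throughput is constant across concurrency levels; latency is not.
service_time = 0.53   # seconds per request (assumed, from the measured ~0.53 s)
test_window = 60.0    # seconds

for concurrency in (1, 10, 100, 1000):
    completed = int(test_window / service_time)     # same at every level
    throughput = completed / test_window            # ~1.88 req/s, matching ab's ~1.8
    # a request admitted behind a full queue waits for everyone ahead of it
    worst_case_latency = concurrency * service_time  # seconds
    print(concurrency, completed, round(throughput, 2), worst_case_latency)
```

Note how the worst-case latency at 1000 concurrency (1000 × 0.53 s) lines up with ab's observation that the slowest requests took nearly the full 60-second window before the test cut them off.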

Since this is 2018 and not 1993, you probably want to fix that. :) You should be able to handle hundreds, and sometimes thousands, of requests per second with PHP on a single reasonably sized and configured node.

Why exactly are the requests being serialized? It's either something in the benchmark script (grabbing a lock or performing some other serialized action) or something in the server configuration (the number of web server workers, etc.). If you can't figure it out, consider posting another question with the details and someone can probably help.