Linux – How to test server throughput

benchmark, linux, performance

I've always used ApacheBench (ab) to get a rough idea of how many requests per second my server can handle. I'd read that it was good, and it has always seemed to work well.

Enter Node.js, which is fully event-based, so it never blocks on I/O. If I run ab against a simple hello-world server, it handles around 2,500 requests per second.
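For concreteness, here is a minimal sketch of the kind of hello-world server I mean (the port and response text are arbitrary placeholders, not my real setup):

    // Minimal Node.js hello-world server of the kind being benchmarked.
    // Port 8000 and the response body are placeholders.
    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello World\n');
    }).listen(8000);

I benchmark it with something like ab -n 20000 -c 100 http://127.0.0.1:8000/ and read the Requests-per-second figure from ab's report.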

However, if I put a timeout in the hello-world handler so that it responds after 2 seconds, ab reports dramatically reduced throughput: about 50 requests/second. I'm running 100 concurrent connections in ab; if I increase the concurrency, the throughput goes up. That makes sense, because ab is effectively sending out requests in batches of 100, and each batch comes back after 2 seconds: 100 requests / 2 seconds = 50 requests/second.
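The delayed variant just wraps the same response in a timer; a sketch under the same assumptions as above:

    // Same server, but each response is held back for 2 seconds.
    // With 100 concurrent connections, throughput tops out at
    // 100 requests / 2 s = 50 requests/second, matching ab's report.
    const http = require('http');

    http.createServer((req, res) => {
      setTimeout(() => {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('Hello World\n');
      }, 2000);
    }).listen(8000);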

If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit Node.js's limit; I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something.

Is there any way I can get a good estimate of how many requests my server can actually handle? I want to make sure the test machine isn't the one causing the problem.

Best Answer

What is "it" in "it starts to crash"? The benchmarking tool or something on the server? And do you get any exception reports at all, directly from the failing component or output to log files?

If the problem is ab (the benchmark program) not liking that many active connections, try running more than one instance concurrently.
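For example, two instances at half the concurrency each (say, ab -c 250 pointed at the same URL) keep each process below the range where it fails; the overall throughput is then the sum of the Requests-per-second figures the instances report.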

If it is the OS (or some interaction between ab and the OS) imposing the limit, try multiple copies of ab spread across different machines. Virtual machines on the same host may work if you don't have any spare physical machines to try.

In either case (server side or client side), your hunch about limits on the number of open sockets and such may well be correct.
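One concrete thing to check: on many Linux systems the per-process open-file limit defaults to 1024 (ulimit -n shows the current value), and every in-flight connection costs a file descriptor on both the client and the server side. Sockets lingering in TIME_WAIT can also eat into the ephemeral port range, so the wall may appear at a lower concurrency than the nominal limit suggests.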
