Roughly how much of a performance hit will https take compared to http for the same page? Suppose I can handle 1000 requests/s for abc.php; how much will that rate drop when the page is accessed through https? I know this depends on hardware, config, OS, etc., but I am just looking for a general rule of thumb/estimate.
How much of a performance hit for https vs http for apache
apache-2.2, performance
Related Solutions
I've seen servers with literally thousands of domains running without problems. Performance does not significantly degrade just because of the number of sites you're running.
It's the overall number of requests, and how much CPU (and other resources like bandwidth, disk IO, database calls, etc.) each request requires, that influences the server's responsiveness.
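As a back-of-the-envelope illustration of that point (the core count and per-request CPU time below are assumptions, not measurements from any particular server), the CPU-bound ceiling is roughly cores divided by CPU time per request, and the number of vhosts appears nowhere in the calculation:

```shell
# Illustrative capacity estimate: 4 cores, ~5 ms of CPU per request.
# The vhost count doesn't enter into it.
awk 'BEGIN { printf "%.0f requests/s\n", 4 / 0.005 }'
```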
What I would do in this situation is run

`strace -f -p <PID> -tt -T -s 500 -o trace.txt`

on one of your Apache processes during the `ab` test until you capture one of the slow responses. Then have a look through `trace.txt`.
The `-tt` and `-T` options give you timestamps of the start and duration of each system call to help identify the slow ones.
You might find a single slow system call such as `open()` or `stat()`, or you might find a quick call with (possibly multiple) `poll()` calls directly after it. If you find one that's operating on a file or network connection (quite likely), look backwards through the trace until you find that file or connection handle. The earlier calls on that same handle should give you an idea of what the `poll()` was waiting for.
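With `-T` enabled, each line of the trace ends in the call's duration in angle brackets, so the slow calls can be filtered mechanically. A minimal sketch (the 100 ms threshold is arbitrary, and the `printf` sample lines stand in for your real `trace.txt`):

```shell
# With -T, strace appends the call duration, e.g.: open("f") = 3 <0.004512>
# Filter for calls slower than 100 ms; in real use pass trace.txt to awk.
printf '%s\n' \
  'open("a") = 3 <0.000050>' \
  'stat("b") = 0 <0.250000>' |
awk -F'[<>]' '$(NF-1) > 0.1'
```

This prints only the second sample line. It assumes the duration is the last bracketed field on the line, which holds for simple calls but can be confused by `<` or `>` inside string arguments.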
Good idea looking at the `-c` option. Did you ensure that the Apache child you were tracing served at least one of the slow requests during that time? (I'm not even sure how you would do this apart from running `strace` simultaneously on all children.)
Unfortunately, `strace` doesn't give us the complete picture of what a running program is doing. It only tracks system calls. A lot can happen inside a program that doesn't require asking the kernel for anything. To figure out whether this is happening, you can look at the timestamps of the start of each system call. If you see significant gaps, that's where the time is going. This isn't easily greppable, and there are always small gaps between system calls anyway.
Since you said the CPU usage stays low, it's probably not excessive work happening between system calls, but it's worth checking.
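One way to make those gaps visible despite not being greppable, as a sketch: parse the `-tt` timestamp at the start of each line and report any gap above a threshold (100 ms here, arbitrary). The `printf` sample lines stand in for a real `trace.txt`:

```shell
# Report >100 ms gaps between successive system-call start times.
# Assumes -tt timestamps of the form HH:MM:SS.microseconds.
printf '%s\n' \
  '12:00:00.000000 read(3, "", 1) = 0' \
  '12:00:00.500000 write(4, "", 1) = 0' |
awk '{ split($1, t, "[:.]")
       now = t[1]*3600 + t[2]*60 + t[3] + t[4]/1000000
       if (NR > 1 && now - prev > 0.1)
           printf "%.3fs gap before: %s\n", now - prev, $0
       prev = now }'
```

For the sample input this reports a 0.500s gap before the `write` line. (It doesn't handle traces that cross midnight, but for a short capture that rarely matters.)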
Looking more closely at the output from `ab`:
The sudden jump in the response times (there appear to be no response times anywhere between 150ms and 3000ms) suggests that a specific timeout is being triggered somewhere above around 256 simultaneous connections. A smoother degradation would be expected if you were running out of RAM, CPU cycles, or normal IO.
Secondly, the slow `ab` response shows that the 3000ms were spent in the `connect` phase. Nearly all connections took around 30ms, but 5% took 3000ms. This suggests that the network is the problem.
Where are you running `ab` from? Can you try it from the same network as the Apache machine?
For more data, try running `tcpdump` at both ends of the connection (preferably with `ntp` running at both ends so you can sync the two captures up) and look for any TCP retransmissions. Wireshark is particularly good for analysing the dumps because it highlights TCP retransmissions in a different colour, making them easy to find.
It might also be worth looking at the logs of any network devices you have access to. I recently ran into a problem with one of our firewalls where it could handle the bandwidth in terms of kb/s but it couldn't handle the number of packets per second it was receiving. It topped out at 140,000 packets per second. Some quick maths on your `ab` run leads me to believe you would have been seeing around 13,000 packets per second (ignoring the 5% of slow requests). Maybe this is the bottleneck you have reached. The fact that this happens at around 256 might be purely a coincidence.
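The packet arithmetic behind that estimate is just requests per second times packets per request. With illustrative numbers in the same ballpark as the post (these are assumptions, not figures from the actual `ab` run):

```shell
# ~800 requests/s at ~16 packets per request is ~12,800 packets/s.
# The same arithmetic shows how quickly small requests can exhaust a
# device's packet-rate budget long before its bandwidth limit.
awk 'BEGIN { printf "%d packets/s\n", 800 * 16 }'
```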
Best Answer
For a quick & dirty test (i.e. no optimization whatsoever!) I enabled the simple Ubuntu apache2 default website (which just says "It works!") with both http and https (self-signed certificate) on a local Ubuntu 9.04 VM and ran the Apache benchmark `ab` with 10,000 requests (no concurrency). Client and server were on the same machine/VM:

Results for http (`ab -n 10000 http://ubuntu904/index.html`):

Results for https (`ab -n 10000 https://ubuntu904/index.html`):

If you take a closer look (e.g. with tcpdump or wireshark) at the tcp/ip communication of a single request, you'll see that the http case requires 10 packets between client and server whereas https requires 16: latency is much higher with https. (More about the importance of latency here)
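Those packet counts (10 for http, 16 for https) work out to 60% more packets per request for https; a trivial check of the arithmetic:

```shell
# 16 packets for https vs 10 for http: (16 - 10) / 10 = 60% more.
awk 'BEGIN { printf "%.0f%% more packets\n", (16 - 10) / 10 * 100 }'
```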
Adding keep-alive (`ab` option `-k`) to the test improves the situation, because now all requests share the same connection, i.e. the SSL overhead is lower - but https is still measurably slower:

Results for http with keep-alive (`ab -k -n 10000 http://ubuntu904/index.html`):

Results for https with keep-alive (`ab -k -n 10000 https://ubuntu904/index.html`):

Conclusion: