Web Applications Networking – Calculate Web-Application Acceptable Use Bandwidth

networking web-applications

I need to measure a web application's average bit-rate consumption so I can determine the recommended bandwidth an end user needs to connect to and use the web application without performance problems from the server.

My idea is to take an end-user machine and stress-test the web application, loading it with the heavy web requests a normal user might make in a day. I can then watch CPU/RAM utilization on the server side (taking the network into account as well) during the requests and decide whether the server's responses will reach the end user with acceptable performance.

I am still not sure how to measure this, because many factors come into play. For example, I know the server will be serving many users at any given time, so it would not be fair to derive a single fixed acceptable-performance figure for all connected users.
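To make the question concrete, here is a minimal sketch of the calculation I have in mind, assuming I have already logged (bytes transferred, elapsed seconds) per request from such a stress test; the sample numbers and the 2x headroom factor are made up for illustration:

```python
# Hypothetical sketch: estimate the average bit rate a user needs,
# given (response_bytes, elapsed_seconds) samples recorded while
# replaying a typical day's worth of requests against the server.

def average_bit_rate(samples):
    """samples: list of (bytes_transferred, elapsed_seconds) tuples.
    Returns the average bit rate in bits per second."""
    total_bits = sum(b * 8 for b, _ in samples)
    total_secs = sum(t for _, t in samples)
    return total_bits / total_secs

def recommended_bandwidth(samples, headroom=2.0):
    """Apply a safety multiplier so traffic peaks do not saturate the link."""
    return average_bit_rate(samples) * headroom

# e.g. three requests observed during the test (made-up figures)
samples = [(150_000, 0.4), (80_000, 0.2), (500_000, 1.0)]
print(f"average:     {average_bit_rate(samples) / 1e6:.2f} Mbit/s")
print(f"recommended: {recommended_bandwidth(samples) / 1e6:.2f} Mbit/s")
```

This only covers the bandwidth side, not the server-side CPU/RAM question.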

Any ideas?

Best Answer

In the past I have approached problems like this by minimizing latency, either on the client side or on the server side. Knowing the size of the network pipe is one thing, but it is usually something you cannot modify. With a centralized server, it is usually the server that is the issue.

Although quite a bit is left to judgment, there are some useful approximations:

    Wide-area network    100s of ms
    Disk access          10 ms
    SSD disk access      0.5 ms
    Local-area network   0.5 ms
    Local memory         100 ns
    Processor cache      100 ps

So consider what basic steps your web app has to do and tally them up. See how close you are: either you have forgotten something (most likely), or you need to adjust your approximations.
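The tally can be sketched as follows; the per-operation figures are the rough approximations from the list above, and the example request mix is invented:

```python
# Minimal tally sketch: sum the approximate latency of each basic
# step a request goes through, using the rough figures listed above.
LATENCY = {                      # seconds per operation (approximate)
    "wan_round_trip": 0.100,
    "disk_access":    0.010,
    "ssd_access":     0.0005,
    "lan_round_trip": 0.0005,
    "memory_access":  100e-9,
    "cache_access":   100e-12,
}

def tally(steps):
    """steps: list of (operation_name, count) pairs.
    Returns the total estimated latency in seconds."""
    return sum(LATENCY[op] * n for op, n in steps)

# e.g. one WAN hop plus 7 database reads that each hit spinning disk
estimate = tally([("wan_round_trip", 1), ("disk_access", 7)])
print(f"estimated latency: {estimate * 1000:.0f} ms")
```

Compare the estimate against the measured response time; a large gap usually means a step is missing from the list.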

For example, I was working on a web application where most requests took a bit more than 70 ms to respond. I traced this to 7 database operations. A proposal to improve performance was to cache more data locally (we were hitting a couple of static tables that could be cached), bringing the latency down to a bit more than 50 ms. Furthermore, by moving to SSDs, we could potentially bring it down to a bit more than 5 ms. Lower latency, of course, also means the server can handle a higher peak load.
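The arithmetic behind that example, worked through with the per-operation figures from the list above (the "a bit more" in each measured number is the remaining non-database overhead):

```python
# Worked arithmetic for the 7-database-operation example above,
# using the approximate per-operation latencies from the list.
DISK_MS = 10    # spinning-disk access, ms
SSD_MS = 0.5    # SSD access, ms

baseline = 7 * DISK_MS   # 7 DB ops on spinning disk -> 70 ms
cached   = 5 * DISK_MS   # 2 ops served from a local cache -> 50 ms
on_ssd   = 5 * SSD_MS    # remaining 5 ops on SSD -> 2.5 ms of DB time
print(baseline, cached, on_ssd)
```

Note the SSD figure alone accounts for about 2.5 ms of database time; the rest of the "a bit more than 5 ms" total is the other per-request overhead.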
