Server load handling: one single batch request vs. multiple requests

batch-processing, high-load, networking, process, requests

I was talking to a developer about different APIs for create, update, and delete operations. He said it is better to send a single batch request to the server than to send a separate request for each operation, because with many consecutive requests the server has to handle each one individually, which increases its load. I also assumed that instead of batching updates to the database, the server would have to make multiple separate updates.

Say I have a spreadsheet-like application. In each row, there are multiple cells that I can edit. One way to persist these changes is to send an update request to the server on every cell edit. Another option is to debounce the updates for, say, 10 seconds and send them together.
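The debounce idea above can be sketched as follows. This is a minimal client-side sketch in Python, assuming a hypothetical `send_batch` callback that would issue the actual HTTP request; the class name and method names are illustrative, not from any library:

```python
import threading

class DebouncedSaver:
    """Collects cell edits and flushes them as one batch after a quiet period.

    `send_batch` is a hypothetical callback that would issue the HTTP request.
    """
    def __init__(self, send_batch, delay_seconds=10.0):
        self.send_batch = send_batch
        self.delay = delay_seconds
        self.pending = {}    # (row, col) -> latest value; later edits overwrite earlier ones
        self.timer = None
        self.lock = threading.Lock()

    def on_cell_edit(self, row, col, value):
        with self.lock:
            self.pending[(row, col)] = value
            if self.timer:
                self.timer.cancel()  # restart the quiet-period timer on each edit
            self.timer = threading.Timer(self.delay, self.flush)
            self.timer.start()

    def flush(self):
        with self.lock:
            if self.pending:
                self.send_batch(list(self.pending.items()))
                self.pending = {}
```

Note that coalescing in a dict means rapid edits to the same cell produce only one entry in the batch, which already cuts down the data sent.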

I can also create 10 rows, update a row, and delete 2 other rows in one go. I am trying to understand the differences and tradeoffs between these two types of request.
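For concreteness, a mixed batch like that could be encoded as a single JSON payload. This is only a sketch of one possible shape; the field names (`create`, `update`, `delete`) are illustrative, not a standard:

```python
import json

# One hypothetical batch payload mixing 10 creates, 1 update, and 2 deletes.
batch = {
    "create": [{"row": i, "cells": {"A": ""}} for i in range(10)],
    "update": [{"row": 3, "cells": {"A": "new value"}}],
    "delete": [{"row": 7}, {"row": 8}],
}

# This single body would be sent in one request instead of 13 separate ones.
body = json.dumps(batch)
```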

I have been trying to find information on the performance difference between these two approaches, but have had trouble finding anything concrete. Will that many requests have a significant effect on performance? Can someone please explain this?

Best Answer

Sure...

Imagine that we are talking about an HTTP API, OK?

The total cost of a request is all the resources used to complete that specific request. So consider the cost of one request that inserts only ONE row into the database, following this simplified flow:

Step 1: The client starts the request (curl or any similar HTTP client)

Step 2: The server receives the request and starts a new thread (depending on the web server software)

Step 3: The web server performs the HTTP handshake and starts a new thread for the script interpreter (PHP, .NET, or whatever)

Step 4: The script interpreter opens a connection to the database (again over the network, or via a local socket if the database is on the same server)

Step 5: The single SQL INSERT statement is forwarded to the DBMS process

Step 6: The DBMS executes the statement and returns the execution status (for a SELECT, the query result set would be transferred here too) to the script interpreter (again, PHP, .NET, or whatever)

Step 7: The script interpreter returns the data to the web server software (Apache, Nginx, or whatever)

Step 8: The web server software writes the result back to the HTTP connection for transmission over the network to the client

Step 9: Finally, the thread's life cycle ends and the system resources are freed!

So now think about repeating this flow a thousand times within a few seconds. The demand on server resources will be high, because even if each request inserts only a single small row into the database, much more is happening on the server than we can see. And this is just a simplified example; in other scenarios the real workload can involve many more steps...
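The repeated teardown in steps 4 through 9 can be sketched in Python with SQLite standing in for the database; the function name is hypothetical, and a real web server would also be paying the HTTP and threading costs on top of this:

```python
import sqlite3

def handle_insert_request(db_path, row):
    """Sketch of steps 4-9: each request opens its own DB connection,
    runs one INSERT, and tears everything down again."""
    conn = sqlite3.connect(db_path)                           # step 4: new connection per request
    conn.execute("INSERT INTO cells VALUES (?, ?, ?)", row)   # steps 5-6: one statement
    conn.commit()
    conn.close()                                              # step 9: free the resources
```

Calling this once per row means the connect/commit/close overhead is paid for every single row.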

If we talk about a batch insert of a thousand rows, all of these steps are still performed, but only once. The only difference is that the SQL insert data is now larger, so a much greater share of the resources goes into useful processing and I/O work...
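As a sketch of the batch side, again with SQLite standing in for the database, Python's standard `executemany` sends all thousand rows through one connection and one statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cells (row INTEGER, col TEXT, value TEXT)")

rows = [(i, "A", f"value {i}") for i in range(1000)]

# One connection, one statement, one commit for all 1000 rows,
# instead of repeating the whole request flow 1000 times.
conn.executemany("INSERT INTO cells VALUES (?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM cells").fetchone()[0]
```

The per-request overhead (connection setup, statement parsing, commit) is paid once, which is exactly the saving the answer describes.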

Obviously, every scenario has its own properties, and this logic needs to be validated case by case...
