REST – Storing session data on client or server side

Tags: android, performance, rest

I have an Android application that communicates with the server using REST. During the flow, the client (Android) typically sends 3 separate requests to the server, each with different data.

Some of the data sent in the 1st request is also needed in the 3rd request. Of course, we don't want to ask the user to enter the same data twice, so we plan to keep it in a session. We create the session using JSON Web Tokens.

So, it seems that we have 2 options:

  • Store the session data on the client side (Android) and send it again in the 3rd request
  • Store the session data on the server side in a temporary database and read it from there when needed (in the 3rd request)

From what I see, with the first approach we send more data in the 3rd request, but on the server side we don't have to call the database to store the data during the 1st request and fetch it again during the 3rd.

It seems that sending a little more data (in our case a 10–20 character string) is less expensive than going to the DB twice (once to store the data and later to fetch it).

What is the best practice here? Does it depend on the amount of data? Are there other pros and cons? Are there other, better solutions?

Also, it seems that storing the data on the server side conflicts with the stateless principle of REST.

Best Answer

REST

If you have a RESTful server, then all the data required for an interaction must be transferred to the server as part of the request. This does mean that your application must maintain the state of the conversation.
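
For example, a minimal sketch of what that looks like on the Android side, where the value captured in request 1 is simply kept in memory and merged back into the body of request 3 (all names here are illustrative, not tied to any framework):

```kotlin
// A minimal sketch of the client keeping the conversation state itself:
// the value entered during request 1 is remembered and sent again in request 3,
// so the server can stay stateless. All names here are illustrative.
data class SignupFlowState(var sharedField: String? = null)

fun buildRequest1Body(state: SignupFlowState, userInput: String): Map<String, String> {
    state.sharedField = userInput                       // remember it for later
    return mapOf("sharedField" to userInput)
}

fun buildRequest3Body(state: SignupFlowState, otherData: String): Map<String, String> {
    // Resend the value captured in request 1 instead of asking the user again.
    val shared = state.sharedField ?: error("request 1 has not completed yet")
    return mapOf("sharedField" to shared, "otherData" to otherData)
}
```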

Tokens

That being said, it may make better sense for the first request to receive a token in its response, and then have that token passed back with the later requests. The token should be encrypted/signed/expirable to minimise the room for bad actors to take advantage of the server. Never pass back sensitive data (even encrypted).
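
As a rough illustration, here is a minimal sketch of a signed, expirable token built with nothing but the JDK's HMAC support. In practice you would likely reach for a JWT library; the "payload.expiry.signature" layout and all names below are assumptions made for the example only:

```kotlin
// A minimal sketch of a signed, expirable token using only the JDK's HMAC support.
// In practice a JWT library would do this for you; the "payload.expiry.signature"
// layout and names here are assumptions for illustration only.
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

private val key = SecretKeySpec("change-me-server-secret".toByteArray(), "HmacSHA256")
private val encoder = Base64.getUrlEncoder().withoutPadding()

private fun sign(data: String): String {
    val mac = Mac.getInstance("HmacSHA256").apply { init(key) }
    return encoder.encodeToString(mac.doFinal(data.toByteArray()))
}

// Issued in the response to request 1; carries only the non-sensitive shared value.
fun issueToken(sharedField: String, ttlMillis: Long = 10 * 60_000L): String {
    val payload = encoder.encodeToString(sharedField.toByteArray())
    val expiry = System.currentTimeMillis() + ttlMillis
    return "$payload.$expiry.${sign("$payload.$expiry")}"
}

// Verified when the client passes the token back with request 3.
fun readToken(token: String): String? {
    val parts = token.split(".")
    if (parts.size != 3) return null
    val (payload, expiry, signature) = parts
    if (sign("$payload.$expiry") != signature) return null         // tampered with
    val expiresAt = expiry.toLongOrNull() ?: return null
    if (System.currentTimeMillis() > expiresAt) return null        // expired
    return String(Base64.getUrlDecoder().decode(payload))          // the shared value
}
```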

Architecting

If you are worried about speed, and you know that requests 1, 2, and 3 always happen in sequence, why not offer a single request that achieves all of the above?
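
As a sketch, assuming the three payloads can be gathered on the client and sent together, the combined endpoint might simply accept one request carrying all three parts (payload shape, endpoint path, and function names are assumptions):

```kotlin
// A sketch of collapsing the three calls into one endpoint, so no intermediate
// state needs to be kept on either side. Payload shape and names are assumptions.
data class Step1Data(val sharedField: String)
data class Step2Data(val preferences: String)
data class Step3Data(val otherData: String)

data class CombinedRequest(val step1: Step1Data, val step2: Step2Data, val step3: Step3Data)

// Stand-ins for whatever the three original requests did on the server.
fun processStep1(data: Step1Data) = println("step 1: ${data.sharedField}")
fun processStep2(data: Step2Data) = println("step 2: ${data.preferences}")
fun processStep3(data: Step3Data, sharedField: String) =
    println("step 3: ${data.otherData} (reusing $sharedField)")

// One POST (e.g. to /api/flow) now carries everything the three requests used to.
fun handleCombinedRequest(request: CombinedRequest) {
    processStep1(request.step1)
    processStep2(request.step2)
    processStep3(request.step3, request.step1.sharedField)  // shared value reused directly
}
```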

RAM Cache

A database can easily have a RAM cache wrapped around it. Hot entries can be added eagerly, especially given that you already know they will be needed again two requests later.
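
A minimal sketch of such a read-through cache, assuming a simple key/value `Database` interface and a short TTL (both assumptions for illustration):

```kotlin
// A sketch of a small read-through RAM cache in front of the database, so the
// value stored by request 1 is usually served from memory by request 3.
// The Database interface and the TTL are assumptions for illustration.
import java.util.concurrent.ConcurrentHashMap

interface Database {
    fun save(key: String, value: String)
    fun load(key: String): String?
}

class CachedDatabase(private val db: Database, private val ttlMillis: Long = 5 * 60_000L) {
    private data class Entry(val value: String, val expiresAt: Long)
    private val cache = ConcurrentHashMap<String, Entry>()

    fun save(key: String, value: String) {
        db.save(key, value)                                                  // still persisted
        cache[key] = Entry(value, System.currentTimeMillis() + ttlMillis)    // hot entry
    }

    fun load(key: String): String? {
        val hit = cache[key]
        if (hit != null && hit.expiresAt > System.currentTimeMillis()) return hit.value
        return db.load(key)?.also {                                          // fall back to the DB
            cache[key] = Entry(it, System.currentTimeMillis() + ttlMillis)
        }
    }
}
```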

Stateful Server

If the network overhead and database access are truly onerous, or the latency/jitter tolerances for processing requests are tight, a dedicated stateful server is the solution.

Use a WebSocket, or some other point-to-point communication channel, and keep a dedicated, memory-resident process that holds all the relevant information for servicing this client.
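
As a rough sketch, assuming the Jakarta WebSocket API is available on the server, per-client conversation state can be held in process memory for the lifetime of the connection (the endpoint path and the "key=value" message format are assumptions):

```kotlin
// A rough sketch using the Jakarta WebSocket API: each client's conversation
// state lives in server memory for the lifetime of its connection.
// The endpoint path and the "key=value" message format are assumptions.
import jakarta.websocket.OnClose
import jakarta.websocket.OnMessage
import jakarta.websocket.OnOpen
import jakarta.websocket.Session
import jakarta.websocket.server.ServerEndpoint
import java.util.concurrent.ConcurrentHashMap

@ServerEndpoint("/flow")
class FlowEndpoint {
    companion object {
        // Per-connection conversation state, kept memory resident.
        private val state = ConcurrentHashMap<String, MutableMap<String, String>>()
    }

    @OnOpen
    fun onOpen(session: Session) {
        state[session.id] = ConcurrentHashMap()
    }

    @OnMessage
    fun onMessage(message: String, session: Session) {
        // e.g. "sharedField=abc" sent during step 1 is remembered in memory;
        // later steps can read it back without any database access.
        val parts = message.split("=", limit = 2)
        if (parts.size != 2) return
        val conversation = state.getValue(session.id)
        conversation[parts[0]] = parts[1]
        session.basicRemote.sendText("stored ${conversation.size} field(s)")
    }

    @OnClose
    fun onClose(session: Session) {
        state.remove(session.id)  // state is gone if the process dies: not fault tolerant
    }
}
```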

Note that this is not very scalable, and is hard to make fault tolerant.