Redis performance tuning

caching · performance · redis · web-applications

We are running a web application and switched from memcached to Redis (2.4) for caching. Now we are somewhat disappointed by Redis performance. Redis runs on the same server and we use only very simple GET and SET operations. On some requests that make heavy use of cached values we issue up to 300 GET requests to Redis, and those requests take up to 150 ms in total. We have about 200,000 active keys and about 1,000 Redis requests per second. There is no problem with disk I/O, RAM or CPU. Because of our existing code we can't simply group Redis requests together. Memcached was about 4 times faster.

What we like about Redis is that we don't need any cache warming and could use more advanced datastore features in the future. We expected Redis to perform similarly to memcached, so perhaps we missed something in our configuration, which is basically the default configuration.

Do you know of any best practice for redis performance tuning?

Best Answer

First, you may want to read the Redis benchmark page. It provides a good summary of the main points to check when tuning Redis.

Even supposing you do not use pipelining, 300 GETs in 150 ms is not very efficient: it means the average latency is 500 µs per request. However, it also depends on the size of your objects: the larger the objects, the higher the latency. On my very old 2 GHz AMD box, I can measure 150 µs latencies for small objects (a few bytes).

To quickly check the average latency of the Redis instance, you can use:

$ redis-cli --latency

Be sure to use a recent Redis version (not 2.4) to get this option. Note: 2.4 is quite old now; use Redis 2.6 instead - compile your own Redis build if needed, it is really straightforward.

To quickly run a benchmark to study latency, you can launch:

$ redis-benchmark -q -n 10000 -c 1 -d average_size_of_your_objects_in_bytes

It runs with a single connection and no pipelining, so the latency can be deduced from the throughput. Try to compare the results of this benchmark to the figures measured with your application.
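To compare like with like, you can also measure the per-GET latency from the application side. Here is a minimal Python sketch (assuming the redis-py client and a Redis instance on the default localhost port; adapt it to whatever client library your application actually uses):

    import time
    import redis

    # Assumption: redis-py client, Redis listening on the default 127.0.0.1:6379.
    r = redis.Redis(host="127.0.0.1", port=6379)
    r.set("latency:test", "x" * 100)  # use roughly your average object size

    N = 10000
    start = time.time()
    for _ in range(N):
        r.get("latency:test")
    elapsed = time.time() - start

    # Average client-side round-trip time per GET, in microseconds.
    print("avg GET latency: %.0f us" % (elapsed / N * 1e6))

If this figure is much higher than what redis-benchmark reports, the overhead is on the client side rather than in the Redis server.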

There are a number of points you may want to check:

  • Which Redis client library do you use, and with which development language? For some scripting languages, you also need to install the hiredis module to get an efficient client.
  • Is your machine a VM? On which OS?
  • Are the connections to Redis persistent? (i.e. you are not supposed to connect/disconnect on each HTTP request of your app server) - see the sketch after this list.
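To illustrate the last two points, here is a hedged Python/redis-py sketch (the pool size and key handling are only examples); the same pattern - one connection pool created at application startup and reused across requests, with the optional hiredis parser installed - exists in most client libraries:

    import redis

    # Created once at application startup, NOT once per HTTP request.
    # redis-py automatically uses the faster hiredis reply parser when
    # the hiredis package is installed (pip install hiredis).
    POOL = redis.ConnectionPool(host="127.0.0.1", port=6379, max_connections=20)

    def get_cached(key):
        # Each call borrows a persistent connection from the pool instead
        # of opening and closing a new TCP connection.
        client = redis.Redis(connection_pool=POOL)
        return client.get(key)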

Why is it better with memcached? Well, a single memcached instance is certainly more scalable, and may be more responsive than a single Redis instance, since it can run on multiple threads. Redis is fast, but single-threaded: the execution of all commands is serialized. So while a command is in progress for one connection, all the other clients have to wait, and bad latency on a given command also impacts all the pending commands. At low throughput, though, performance is generally comparable.

At 1,000 q/s (a low throughput by Redis or memcached standards), I would say it is more probable that your problem is on the client side (i.e. choice of the client library, connection/disconnection, etc.) than with the Redis server itself.

Finally, I should mention that if you generate a number of Redis queries per HTTP request, you should consider pipelining the commands you send to Redis as much as possible. It is really a key point for developing efficient Redis applications.
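As an illustration, here is a short Python/redis-py sketch (the key names are hypothetical): instead of issuing 300 sequential GETs, you batch them through a pipeline so they share a single network round trip:

    import redis

    r = redis.Redis(host="127.0.0.1", port=6379)
    keys = ["user:%d:profile" % i for i in range(300)]  # hypothetical keys

    # Without pipelining: 300 network round trips.
    values_slow = [r.get(k) for k in keys]

    # With pipelining: all commands are sent at once, all replies read at once.
    pipe = r.pipeline(transaction=False)
    for k in keys:
        pipe.get(k)
    values_fast = pipe.execute()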

If your application servers are on the same box as Redis, you can also use unix domain sockets instead of the TCP loopback to connect to Redis. It slightly improves performance (up to 50% more throughput when pipelining is NOT used).
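With redis-py, for example, this is just a different connection parameter (a sketch, assuming you have enabled the socket in redis.conf; the socket path below is only an example):

    import redis

    # Requires the following lines in redis.conf (example path):
    #   unixsocket /tmp/redis.sock
    #   unixsocketperm 700
    r = redis.Redis(unix_socket_path="/tmp/redis.sock")
    print(r.ping())  # should print True if the socket connection works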