You're abusing Nginx's worker_processes. There is absolutely no need to run that many workers: run as many workers as you have CPU cores and call it a day. If you're running gunicorn on the same server, you should probably limit nginx to two workers. Otherwise you're just going to thrash the CPUs with all the context switching required to manage that many processes.
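For reference, a minimal sketch of the relevant directives in nginx.conf (the worker count here is just an example for a 4-core box; newer nginx versions also accept "auto"):

    # nginx.conf -- one worker process per CPU core
    worker_processes  4;              # or "auto" on newer nginx to match the core count

    events {
        worker_connections  1024;     # concurrent connections each worker can handle
    }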
After a few days of intense trial and error, I'm glad to be able to say that I've understood where the bottleneck was, and I'll post it here so that other people can benefit from my findings.
The problem lies in the pub/sub connections that I was using with socket.io, and in particular in the RedisStore used by socket.io to handle inter-process communication of socket instances.
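For context, the wiring I removed looked roughly like this (a sketch based on the socket.io 0.9-era RedisStore configuration; your require paths and module versions may differ):

    // Sketch of the RedisStore setup I removed (socket.io 0.9-style API)
    var io = require('socket.io').listen(server);
    var RedisStore = require('socket.io/lib/stores/redis');
    var redis = require('redis');

    io.configure(function () {
        io.set('store', new RedisStore({
            redisPub: redis.createClient(),    // publishes events to other processes
            redisSub: redis.createClient(),    // receives events from other processes
            redisClient: redis.createClient()  // ordinary commands / storage
        }));
    });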
After realizing that I could easily implement my own version of pub/sub using Redis, I decided to give it a try, and removed the RedisStore from socket.io, leaving it with the default memory store (I don't need to broadcast to all connected clients, only between 2 different users who may be connected on different processes).
Initially I declared only 2 global Redis connections per process, shared by all connected clients for their pub/sub, and the application was using fewer resources, but I was still affected by a constant growth in CPU usage, so not much had changed. Then I decided to try creating 2 new Redis connections for each client, to handle the pub/sub of their session only, and to close those connections once the user disconnects. After one day of usage in production, the CPUs were still at 0-5%... bingo! No process restarts, no bugs, and the performance I was expecting. Now I can say that node.js rocks and I'm happy to have chosen it for building this app.
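A rough sketch of the per-client approach, assuming the node_redis callback-style API and a hypothetical per-user channel naming scheme (user:<id>):

    // Sketch: 2 dedicated Redis connections per socket, torn down on disconnect
    var redis = require('redis');

    io.sockets.on('connection', function (socket) {
        var userId = socket.handshake.query.userId; // however you identify the user
        var sub = redis.createClient();             // subscriber: dedicated to pub/sub mode
        var pub = redis.createClient();             // publisher: used for outgoing messages

        sub.subscribe('user:' + userId);
        sub.on('message', function (channel, message) {
            socket.emit('message', message);        // forward messages meant for this user
        });

        socket.on('message', function (data) {
            // deliver to the recipient, whichever process they're connected to
            pub.publish('user:' + data.to, data.text);
        });

        socket.on('disconnect', function () {
            sub.unsubscribe();
            sub.quit();                             // close both connections to free resources
            pub.quit();
        });
    });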
Fortunately Redis has been designed to handle many concurrent connections (unlike Mongo), and by default the limit is 10k clients. That leaves room for around 5k concurrent users (2 connections each) on a single Redis instance, which is enough for me at the moment, but I've read that it can be pushed up to 64k concurrent connections, so this architecture should be solid enough, I believe.
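For reference, the client limit is controlled by the maxclients setting (bounded in practice by the file-descriptor limit of the process, so you may also need to raise ulimit -n); the numbers below are just examples:

    # redis.conf -- raise the connection limit
    maxclients 20000

    # on newer Redis versions it can also be changed at runtime via redis-cli
    CONFIG SET maxclients 20000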
At this point I was thinking of implementing some sort of connection pool for Redis, to optimize it a little further, but I'm not sure that wouldn't cause the pub/sub events to build up on the connections again, unless each of them is destroyed and recreated every time to clean it up.
Anyway, thanks for your answers; I'll be curious to know what you think, and whether you have any other suggestions.
Cheers.
Best Answer
The Redis database resides entirely in memory. The .rdb files are dumps to disk, for backup or persistence. It should be safe to delete them, assuming you're sure you don't need the contents.
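If you'd rather control (or disable) those dumps than delete them by hand, RDB snapshotting is configured with the save directives in redis.conf; a sketch (the rules shown are the stock defaults, the paths are examples):

    # redis.conf -- RDB snapshot rules: "save <seconds> <changes>"
    save 900 1        # dump if at least 1 key changed in 15 minutes
    save 300 10       # dump if at least 10 keys changed in 5 minutes
    save 60 10000     # dump if at least 10000 keys changed in 1 minute

    # comment out all "save" lines (or use: save "") to disable RDB dumps entirely
    dbfilename dump.rdb
    dir /var/lib/redis     # where the .rdb file is written (distro-dependent)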