Cassandra: does more storage per node require more CPU and RAM?

cassandra

I have gone through the recommended architecture for Cassandra node configuration, according to which the recommended hardware for a node is:

RAM: 16-32 GB,
storage: 500 GB – 1 TB, and
a 64-bit CPU with 8 cores

The DataStax documentation says:

"Maximum recommended capacity for Cassandra 1.2 and later is 3 to 5TB per node. "

I have a write-heavy system, say 10K records per second. The initial data storage requirement is 72TB, and if I go with 1TB per node, I will need almost 80 nodes (keeping overheads in mind). The aim is to lower the node count by adding more storage capacity to each node.

My questions are:
1. According to the documentation, 16-32 GB of RAM works fine with a 500GB-1TB data load. If I add more disk space, 3-5TB per node, will I have to increase RAM and CPU too?
2. Is there any correlation between storage size and RAM + CPU?

Best Answer

I think how well this will work will depend on your data set and your load. There is not a direct correlation between storage size and RAM + CPU. However, if going from 1TB to 3TB means you expect 3x as many reads and writes, you will need to accommodate that with more RAM and CPU as well; you very likely won't need to scale CPU and RAM 1:1 with storage, though (i.e. going from 1TB to 3TB of disk does not mean you need 3x the RAM). In general, you will find that I/O is the bottleneck, so having fast disks (SSDs!) is the most important thing.

I've run nodes with 3TB of data and it worked without too much issue. There was a lot of tuning that needed to be done, though, so unless you have someone on your team with a lot of experience tuning Cassandra, I would not recommend going that large unless it is a hard requirement. Where you have to be careful is with RAM and how much heap you assign to the Cassandra JVM process. The maximum recommended heap for Cassandra is 8GB, as garbage collection becomes more disruptive with larger heaps (unless you go with Azul Zing), and infrequent full GCs can leave the heap fragmented, which impacts performance. In general, it is not a good idea to run Java applications with more than 8GB of heap if you can avoid it.
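For reference, the heap cap is set in conf/cassandra-env.sh; a minimal sketch assuming an 8GB heap (the new-generation size is a placeholder you would tune for your core count; if you leave both unset, Cassandra computes them from system memory):

    # conf/cassandra-env.sh
    MAX_HEAP_SIZE="8G"    # total JVM heap for the Cassandra process
    HEAP_NEWSIZE="800M"   # young generation size (placeholder; often roughly 100MB per core)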

In newer versions of Cassandra, you can move a lot off heap and into native memory. Since 1.2, bloom filters and compression metadata have been moved off heap into native memory. In 2.1 you can also allocate memtables off heap, which may help you deal with a larger data set. So you can now benefit from having more RAM while staying at a reasonable (8GB) heap.
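As a rough sketch of what that looks like in a 2.1 cassandra.yaml (the size value is an assumed placeholder, not a recommendation):

    # cassandra.yaml (Cassandra 2.1+)
    memtable_allocation_type: offheap_objects   # keep memtable contents outside the JVM heap
    memtable_offheap_space_in_mb: 2048          # placeholder cap; defaults to 1/4 of the heap if unset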

My recommendation is to always lean towards smaller nodes. These recommendations exist for a reason, and I think it's mostly because Cassandra is more proven when used that way. Cassandra works great on cloud providers and commodity hardware, and you may even find it cheaper to have more smaller nodes than fewer bigger ones. Where more nodes can become costly is in operations, but good configuration management tools like Puppet or Chef make that less of a burden. This also becomes harder to do with dedicated hardware setups.

I would recommend not taking anyone's word for it though, but testing out different configurations in EC2 or another cloud provider and seeing what works best for your application. Your load profile and data set are really going to be the determining factor in whether or not this will work. I can't stress it enough: do a lot of testing with different configurations! Once you've decided on something, switching away from it takes effort (though it's not impossible). As someone who has gone through 3 different cluster configurations for 1 application, I cannot stress this enough :). To help test this, the new stress tool included with Cassandra 2.1 makes it really easy to generate a load scenario that is representative of what your application will do. Cassandra is very tunable and has a lot of good metrics for measuring performance, so using the stress tool also gives you an opportunity to try different options and learn more about managing Cassandra instances (tweaking memtable, compaction and other settings to get a feel). One or two weeks of testing will save you months of hardship!
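To give an idea, a hedged sketch of driving the 2.1 stress tool (my_schema.yaml, the node address and the thread count are made-up placeholders for your own profile and cluster):

    # quick built-in write workload
    cassandra-stress write n=1000000 -rate threads=50
    # or model your own table and queries with a YAML profile (2.1+)
    cassandra-stress user profile=my_schema.yaml ops(insert=1) n=1000000 -rate threads=50 -node 10.0.0.1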
