RAID-0 stripe size for Ceph OSD

I have 3 servers that I will use for a new Ceph cluster. It's my first Ceph "playground"… Each server has 2x1TB and 6x2TB HDDs connected to two separate 4-channel SAS controllers, each with 1GB cache + BBU, so I plan to optimize those for throughput.

The first two disks will be used as a RAID-1 array for the OS and probably the journals (still researching that).

Drives 3 to 8 will be exposed as separate single-drive RAID-0 devices in order to utilize the controller caches. I'm confused, however, about what the best stripe size would be, and since I can't change it later without losing data, I decided to ask here. Can somebody please explain? The default for the controllers (LSI 9271-4i) is 256k. I see some documents mentioning stripe width (e.g. here) defaulting to 64kb, but I'm still unsure about that. Interestingly, there are no discussions on this topic. Maybe that's because many people run such setups in JBOD mode, or because it just doesn't matter that much…
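For completeness, here is roughly how I plan to script the per-drive volume creation so the strip size is set explicitly from the start. The controller index, the enclosure:slot addresses, and the exact storcli option spellings are placeholders I still need to verify against my own controllers, so treat this as a sketch rather than something to copy verbatim:

```python
#!/usr/bin/env python3
"""Sketch: create single-drive RAID-0 VDs with an explicit strip size.

The controller index (/c0), the enclosure:slot addresses, and the option
spellings are placeholders for my setup; storcli syntax can differ between
tool/firmware versions, so double-check before running. Needs root.
"""
import subprocess

CONTROLLER = "/c0"
# Hypothetical enclosure:slot addresses for the six 2TB data drives.
DATA_DRIVES = [f"252:{slot}" for slot in range(2, 8)]
STRIP_KB = 256  # controller default; the value I'm unsure about

for drive in DATA_DRIVES:
    # One RAID-0 virtual drive per physical disk, write-back ("wb") so the
    # controller's BBU-backed 1GB cache actually gets used.
    subprocess.run(
        ["storcli", CONTROLLER, "add", "vd", "type=raid0",
         f"drives={drive}", f"strip={STRIP_KB}", "wb"],
        check=True,
    )
```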

Since this will be my first cluster I will try to stick with the default settings as much as possible.

Best Answer

A year ago we had the same decision to make. According to this article, using RAID 0 might increase performance in some cases. According to the Ceph hard drive and filesystem recommendations, it is suggested to disable the hard drives' own disk cache. So, taking the main points from those two articles together: it is better to use JBOD and disable the write cache of the hard drives.
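If you go the JBOD route, here is a minimal sketch of the second point, assuming hdparm is available and the data disks show up as /dev/sdc through /dev/sdh (adjust the device list for your layout):

```python
#!/usr/bin/env python3
"""Sketch: disable the on-disk write cache on the OSD drives.

Assumes hdparm is installed and the data disks are /dev/sdc-/dev/sdh;
adjust the device list for your layout. Needs root.
"""
import subprocess

OSD_DEVICES = [f"/dev/sd{letter}" for letter in "cdefgh"]

for dev in OSD_DEVICES:
    # "-W 0" turns off the drive's volatile write cache, per the Ceph
    # hardware recommendations mentioned above.
    subprocess.run(["hdparm", "-W", "0", dev], check=True)
    # "-W" alone reports the current setting, for verification.
    subprocess.run(["hdparm", "-W", dev], check=True)
```

Keep in mind the setting may not survive a power cycle on every drive, so it is worth reapplying it at boot (e.g. via /etc/hdparm.conf on Debian-based systems).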
