Ceph hardware sizing calculator

ceph, hard drive, performance, sizing, ssd

I would like to calculate the hardware sizing for a Ceph cluster. There are so few references on sizing that I'm trying to gather these details here in the community.

For example, what should I plan for, depending on the drive type:

  • spindle drives (7.2k, 10k, 15k)
  • SATA & SAS 6G SSDs
  • SAS 12G SSDs
  • NVMe PCIe v3
  • NVMe PCIe v4

Now the questions are:

  • How many CPUs should I have?
  • How many cores should be available?
  • How many OSDs per drive type should be planned?
  • How much RAM per OSD should be planned?

Target: achieve the best performance out of the node with the given drives, meaning IOPS and bandwidth.

A related question concerns the controllers, which can be a limiting factor for the drives.

How many drives per controller should be connected to get the best performance per node?
Is there a hardware controller recommendation for Ceph?

Is there perhaps a calculator for this kind of sizing?

Best Answer

I can't find a link to a source right now, but this is what I used in my cluster (10 OSD servers, 500 TB); a rough calculator sketch based on these rules follows the list.

  • CPU: 1 core per OSD (hard drive). Frequency: as high as possible.
  • RAM: 1 GB per 1 TB of OSD storage.
  • 1 OSD per hard drive.
  • Monitors don't need much memory or CPU.
  • It is better to run monitors separately from the OSD servers if a server contains a lot of OSDs, but it is not mandatory.
  • If you plan to run a lot of OSDs (more than 2) per server, it is better not to use those servers to host virtual machines. OSDs require quite a lot of memory and CPU power.
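Here is a minimal Python sketch of such a calculator, based only on the rules of thumb above (1 core per OSD, 1 GB RAM per 1 TB, 1 OSD per drive). The function name, parameters, and defaults are my own illustration, not an official Ceph tool, so adjust them to your own planning figures.

```python
# Rough Ceph OSD node sizing sketch.
# Assumptions (rules of thumb from this answer, not official Ceph docs):
#   - 1 OSD per drive
#   - 1 CPU core per OSD
#   - 1 GB RAM per 1 TB of OSD storage

def size_osd_node(drives_per_node: int, drive_capacity_tb: float,
                  cores_per_osd: int = 1, ram_gb_per_tb: float = 1.0) -> dict:
    """Return a rough per-node sizing estimate for a Ceph OSD host."""
    osds = drives_per_node                        # 1 OSD per drive
    cpu_cores = osds * cores_per_osd              # cores reserved for OSD daemons
    raw_capacity_tb = drives_per_node * drive_capacity_tb
    ram_gb = raw_capacity_tb * ram_gb_per_tb      # RAM reserved for OSD daemons
    return {
        "osds": osds,
        "cpu_cores": cpu_cores,
        "raw_capacity_tb": raw_capacity_tb,
        "ram_gb": ram_gb,
    }

# Example: a node with 12 x 4 TB spindles
print(size_osd_node(drives_per_node=12, drive_capacity_tb=4.0))
# -> {'osds': 12, 'cpu_cores': 12, 'raw_capacity_tb': 48.0, 'ram_gb': 48.0}
```

For faster devices (SAS/NVMe SSDs) you would typically raise `cores_per_osd`, since a single core per OSD is a spindle-oriented figure; treat the output as a starting point, not a guarantee of IOPS or bandwidth.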