RAID Controller Throughput Limit – Understanding Bandwidth Constraints

Tags: bandwidth, raid

I am a bit confused about the maximum theoretical throughput of my HP P410 controller.

When using SATA drives, they are limited to 3 Gb/s per physical link (about 300 MB/s of real throughput). Does that mean the theoretical limit of a full 8-disk array is the PCIe Gen2 x8 bandwidth (about 4 GB/s real), or is there some kind of controller-level maximum bandwidth?

My goal is to know how it will perform if I put a RAID 0 of 4 SSDs on it. Putting a single modern SSD on a 3 Gb/s SATA link makes no sense, but if 4 drives give me 1.2 GB/s of real throughput, it may save me from spending money on a more recent SAS 12 Gb / SATA 6 Gb controller. My goal is to make the best use of a 10 Gb NIC.

Thanks a lot for your help.

Best Answer

Benchmark and test, everything before that is just theoretical.


The PCIe interface is the obvious limiting factor, unless the spec sheet says otherwise. The specs say 2 GB/s, although a PCIe 2.0 x8 link is theoretically 4 GB/s.
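To see where the 4 GB/s figure comes from, here is a back-of-envelope sketch (the signaling rate and 8b/10b encoding are standard PCIe Gen2 figures, not taken from the P410's spec sheet):

```python
# Rough PCIe Gen2 x8 bandwidth estimate.
# Gen2 signals at 5 GT/s per lane with 8b/10b line coding,
# so only 8 of every 10 bits on the wire are payload.
GEN2_TRANSFERS_PER_S = 5e9   # transfers/s per lane
ENCODING_EFFICIENCY = 8 / 10 # 8b/10b coding overhead
LANES = 8

bytes_per_lane = GEN2_TRANSFERS_PER_S * ENCODING_EFFICIENCY / 8  # bytes/s
total_bytes = bytes_per_lane * LANES

print(f"per lane: {bytes_per_lane / 1e6:.0f} MB/s")  # 500 MB/s
print(f"x8 link:  {total_bytes / 1e9:.1f} GB/s")     # 4.0 GB/s
```

Real throughput lands below this because packet headers and flow control consume part of each transfer, which is one reason a vendor may quote 2 GB/s for a nominally 4 GB/s link.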

300 MB/s is SATA 2 class performance, which seems to be a limitation of that controller.

Use 4 SATA drives in RAID 10.

  • Performance will not be the theoretical 1.2 GB/s, but it may be enough to practically saturate 10 Gb Ethernet, especially if there are caches in front of those reads.
  • RAID 0 means data loss on any single disk failure, which usually isn't worth it when the controller will read from all the disks anyway.

Or, abandon the controller and get NVMe storage instead. If you don't need capacity or redundancy, you will only need one drive.