Differences between HW RAID and ZFS

Tags: dell-perc, dell-poweredge, hardware-raid, zfs

Background

I'm installing Proxmox Virtual Environment on a Dell PowerEdge R730 with a Dell PowerEdge RAID Controller (PERC) H730 Mini hardware RAID controller and eight 3TB 7.2k 3.5" SAS HDDs. I was contemplating using the PERC H730 to configure six of the physical disks as a RAID10 virtual disk, with the remaining two physical disks reserved as hot spares. However, there seems to be quite a bit of confusion about how ZFS relates to HW RAID, and my research has brought me more confusion than clarity.

Questions

  • What are the advantages and disadvantages of HW RAID versus ZFS?
  • What are the differences between HW RAID and ZFS?
  • Are HW RAID and ZFS complementary technologies or incompatible with each other?
  • Since Proxmox VE is a Debian-based Linux distribution, does it make more sense to use the H730 for RAID10 with LVM on top, or to put the H730 in HBA mode and use ZFS?

If these should be separate ServerFault questions, please let me know.

Similar ServerFault Questions

I found the following similar ServerFault questions, but they don't seem to directly address the questions above. That said, I fully admit that I'm not a full-time sysadmin, so maybe they do address my questions and I'm simply out of my depth.

Best Answer

Hardware RAID vs ZFS doesn't make a lot of difference from a raw throughput perspective -- either way, the system needs to distribute data across multiple disks, which requires running a few bit-shifting operations on cached data and scheduling writes to the underlying disks. Which processor does that hardly matters, and synthetic workloads like running dd can't tell you much here.
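
To make the "few bit-shifting operations" point concrete, here is a toy Python sketch of RAID5-style parity (purely illustrative -- not how any real controller or ZFS actually implements it): XOR the data chunks of a stripe to get the parity chunk, and XOR again to rebuild a lost chunk.

    import os

    CHUNK = 64 * 1024                                    # hypothetical 64 KiB chunk size
    data_chunks = [os.urandom(CHUNK) for _ in range(5)]  # stripe across 5 "data disks"

    # Parity is just the XOR of all data chunks in the stripe.
    parity = bytearray(CHUNK)
    for chunk in data_chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte

    # Lose any one chunk and it can be rebuilt by XOR-ing parity with the survivors.
    rebuilt = bytearray(parity)
    for chunk in data_chunks[1:]:
        for i, byte in enumerate(chunk):
            rebuilt[i] ^= byte
    assert bytes(rebuilt) == data_chunks[0]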

The differences are in features:

Hardware RAID is usually just a block layer, perhaps with some volume management on top, while ZFS also includes a file system layer (i.e. there is no separation of concerns in ZFS). This allows ZFS to offer compression and deduplication, which would be hard to get right on a pure block layer; on the other hand, for use cases where you just want a set of simple 1:1 mappings, that additional complexity is still there.
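
As a rough illustration of why those features live naturally where the file system lives, here is a toy content-addressed block store in Python -- a made-up model, not ZFS's actual record/checksum machinery -- that compresses each block and stores identical blocks only once:

    import hashlib, zlib

    BLOCK = 128 * 1024       # hypothetical 128 KiB record size
    store = {}               # checksum -> compressed block ("the pool")

    def write_blocks(data: bytes):
        """Store `data` block by block, returning the checksums that reference it."""
        refs = []
        for off in range(0, len(data), BLOCK):
            block = data[off:off + BLOCK]
            key = hashlib.sha256(block).hexdigest()
            if key not in store:                    # dedup: already have this block?
                store[key] = zlib.compress(block)   # compress before hitting "disk"
            refs.append(key)
        return refs

    refs_a = write_blocks(b"identical package payload\n" * 10_000)
    refs_b = write_blocks(b"identical package payload\n" * 10_000)
    assert refs_a == refs_b and len(store) == len(set(refs_a))  # second copy is free

A plain block device only ever sees opaque sectors, so it has no natural boundaries or checksums to deduplicate and compress against -- which is the "hard to get right" part.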

On the other hand, hardware RAID can offer battery backed write caches that are (almost) transparent to the operating system, so it can easily compensate for the overhead of a journaling file system, and data needs to be transferred out of the CPU only once, before adding redundancy information.

Both have their use cases, and in some places, it even makes sense to combine them, e.g. with a hardware RAID controller that offers a battery backed cache, but the controller is set to JBOD mode and only re-exports the constituent disks to the operating system, which then puts ZFS on top.
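
For the hardware in the question, that combination could look roughly like the sketch below (Python only as a convenient wrapper around the zpool command; the /dev/sdX names are assumptions -- in practice you'd use the stable /dev/disk/by-id/ paths of whatever the H730 exposes in HBA/JBOD mode):

    import subprocess

    disks = [f"/dev/sd{c}" for c in "abcdefgh"]   # hypothetical names for the 8 SAS disks

    cmd = ["zpool", "create", "tank"]
    for d1, d2 in zip(disks[0:6:2], disks[1:6:2]):
        cmd += ["mirror", d1, d2]                 # three striped mirrors ~ RAID10
    cmd += ["spare", disks[6], disks[7]]          # two hot spares

    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)             # uncomment to actually create the pool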

In general, ZFS alone is good for "prosumer" setups, where you don't want to spend money on hardware, but still want to achieve sensible fault tolerance and some compression, and where random-access performance isn't your primary concern.

ZFS on top of JBOD is great for container and VPS hosting -- the deduplication keeps the footprint of each container small, even if they upgrade installed programs, as two containers that have installed the same upgrade get merged back into one copy of the data (which is then again kept in a redundant way).

Hardware RAID alone is good for setups where you want to add fault tolerance and a bit of caching on the outside of an existing stack. One of the advantages of battery backed write caches is that they are maintained outside of OS control: the controller can acknowledge a transfer as complete as soon as the data has reached the cache, a write that is later superseded can be skipped entirely, and head movements can be scheduled system-wide without regard to write-ordering dependencies.

The way journaling file systems work, they first submit a journal entry, then, as soon as that is acknowledged, submit the data, and once that is acknowledged, another journal entry marking the first as complete. That is a lot of head movement, especially when the disks are shared between multiple VMs that each have their own independent journaling file system. In a busy system, the caches allow you to skip roughly half of those writes, while from the point of view of the inner system the journal still behaves normally and dependent writes appear to be performed in order.
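
A toy model of what that buys you: writes are acknowledged as soon as they land in the (battery backed) cache, a later write to the same block supersedes the earlier one, and the eventual flush happens in one sweep in block order rather than in submission order. The block numbers and the two-VM workload below are made up for illustration:

    class WriteBackCache:
        def __init__(self):
            self.dirty = {}           # lba -> latest data; superseded writes vanish here
            self.acks = 0
            self.disk_writes = 0

        def write(self, lba, data):
            self.dirty[lba] = data    # overwrites coalesce with the earlier write
            self.acks += 1            # "acknowledged" the moment it is in the cache

        def flush(self):
            for lba in sorted(self.dirty):    # elevator-style: one sweep in LBA order
                self.disk_writes += 1         # pretend to write self.dirty[lba] out
            self.dirty.clear()

    cache = WriteBackCache()
    # Two VMs, each doing a journalled update: intent record, data block, commit record.
    for journal_lba, data_lba in [(1_000, 52_000), (90_000, 7_000)]:
        cache.write(journal_lba, "intent")
        cache.write(data_lba, "data")
        cache.write(journal_lba, "commit")    # supersedes the intent record

    cache.flush()
    print(cache.acks, cache.disk_writes)      # 6 writes acknowledged, 4 reach the disks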

Being able to safely reorder dependent writes into more optimal head movements is why you want a hardware RAID at the bottom. ZFS generates dependent writes itself, so it can profit from a hardware RAID too, but these are the performance bottleneck only in a limited set of use cases, mostly multi-tenant setups with little coordination between applications.

With SSDs, reordering is obviously a lot less important, so the motivation to use hardware RAID there is mostly bulk performance. If you've hit the point where memory and I/O interface speed on the mainboard are relevant factors, then offloading the checksum generation and transferring only a single copy one way, instead of multiple transfers from and to RAM (which need to be synchronized with all the other controllers in the same coherency domain), is definitely worth it. Hitting that point is a big "if" -- I haven't managed so far.
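
Back-of-the-envelope arithmetic for the "single copy vs multiple transfers" point, with purely illustrative numbers:

    app_writes_gbs = 3.0        # assumed GB/s of writes generated by the guests

    # Hardware RAID: each block crosses the host's I/O interface once; the
    # controller produces the second copy / the parity internally.
    hw_bus_gbs = app_writes_gbs

    # Software mirroring: the host itself pushes both copies of every block.
    sw_mirror_bus_gbs = app_writes_gbs * 2

    # Software parity (e.g. 5 data + 1 parity columns): data plus parity cross the
    # bus, and the CPU also pulls the data through its caches to compute the parity.
    sw_parity_bus_gbs = app_writes_gbs * (1 + 1 / 5)

    print(hw_bus_gbs, sw_mirror_bus_gbs, round(sw_parity_bus_gbs, 2))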