Drawbacks of a single drive split into partitions joined into a ZFS raidz1, vs. a single-drive ZFS pool with data duplication

Tags: data-integrity, raidz, zfs

I'm looking for a minimal ZFS setup to achieve self-healing on a single device. Since I back up my data, drive failures are no concern, but bit-rot is. So far, the only suggestions I've been able to dredge up pointed to the copies=2 option, but that cuts the usable capacity by 50%.
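For reference, copies is a per-dataset property, so the 50% cost only has to be paid where you opt in. A minimal sketch, assuming a hypothetical pool named tank on a single device and a dataset tank/important:

    # Single-device pool; the device name is hypothetical.
    zpool create tank /dev/sda
    # Store two copies of every block in this dataset only;
    # the rest of the pool keeps the default copies=1.
    zfs create -o copies=2 tank/important
    zfs get copies tank/important

Note that the property only affects data written after it is set; existing blocks are not rewritten.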

My question: What are the drawbacks (other than performance) of splitting the device into 8 partitions and joining them into a 7+1 RAID-Z1 array? That would mean a 12.5% reduction in usable space. While I'm at it, why not go 19+1 or 99+1 (or whatever the partition table allows)?
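For concreteness, the setup being asked about would look roughly like this (pool and device names are hypothetical, and eight equal GPT partitions are assumed to have been created beforehand, e.g. with parted or gdisk):

    # 8 equal partitions on one physical drive,
    # 7 data + 1 parity: usable space is reduced by 1/8 = 12.5%.
    zpool create tank raidz1 \
        /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4 \
        /dev/sda5 /dev/sda6 /dev/sda7 /dev/sda8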

Best Answer

From the ZFS Best Practices Guide:

For production systems, use whole disks rather than slices for storage pools for the following reasons:

  • Allows ZFS to enable the disk's write cache for those disks that have write caches. If you are using a RAID array with a non-volatile write cache, then this is less of an issue and slices as vdevs should still gain the benefit of the array's write cache.
  • For JBOD attached storage, having an enabled disk cache allows some synchronous writes to be issued as multiple disk writes followed by a single cache flush, allowing the disk controller to optimize I/O scheduling. Separately, for systems that lack proper support for SATA NCQ or SCSI TCQ, an enabled write cache allows the host to issue single I/O operations asynchronously from physical I/O.
  • The recovery process of replacing a failed disk is more complex when disks contain both ZFS and UFS file systems on slices. ZFS pools (and underlying disks) that also contain UFS file systems on slices cannot be easily migrated to other systems by using zpool import and export features.
  • In general, maintaining slices increases administration time and cost. Lower your administration costs by simplifying your storage pool configuration model.

To sum it up: a slice-based pool is much slower and more difficult to handle, replace, and grow correctly.
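To illustrate the replacement point: with whole-disk vdevs, swapping a failed drive is a single command, while a slice-based pool requires re-creating the partition layout on the new disk and then resilvering each affected slice in turn. A sketch with hypothetical device names:

    # Whole-disk vdev: one command, one resilver.
    zpool replace tank /dev/sda /dev/sdb

    # Slice-based vdevs: partition the new disk identically first,
    # then replace each affected slice individually.
    zpool replace tank /dev/sda1 /dev/sdb1
    zpool replace tank /dev/sda2 /dev/sdb2
    # ... and so on for every slice.

Also note that in the single-drive case, a whole-drive failure takes out all eight slices at once, so the raidz1 cannot survive it; the slice layout only protects against localized corruption.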

Additionally, you still have to consider your pool layout. Your RAIDZ1 setup would still suffer from the RAID5 write hole problem while replacing a slice, and it will also suffer if you choose a non-optimal number of slices for your RAIDZ level (also from the recommendations in the guide; a worked example follows the list):

  • (N+P) with P = 1 (raidz), 2 (raidz2), or 3 (raidz3) and N equals 2, 4, or 6
  • The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups.
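Applied to the question: the proposed 7+1 layout has N = 7, which is not among the recommended values of 2, 4, or 6, and the 19+1 and 99+1 variants also exceed the 3-to-9-disks-per-group guidance. A layout that satisfies both rules, with hypothetical device names:

    # raidz1 with N = 4 data + P = 1 parity: 5 disks per group,
    # within the recommended 3..9, and N is one of {2, 4, 6}.
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde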