If you're familiar with traditional RAID, you're essentially comparing RAID 6 (double parity) to RAID 50 (striped RAID 5 sets). The 8-disk raidz2 setup is probably the better bet, since any two of the eight disks can fail without losing your data, while a striped pair of 4-disk raidz1 sets can only tolerate one disk failure per set (lose two disks in the same set and you lose everything).
The 2x4-disk raidz1 layout may offer higher IOPS in some situations, since smaller reads and writes might be serviced by one vdev or the other, whereas every I/O hits every disk in the 8-disk raidz2 setup. But if performance is your primary concern, you should definitely think about mirroring your disks instead (a la RAID 10).
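For reference, here's roughly how each of those layouts would be created (the device names are placeholders, not your actual disks):

zpool create storage raidz2 sda sdb sdc sdd sde sdf sdg sdh
zpool create storage raidz1 sda sdb sdc sdd raidz1 sde sdf sdg sdh
zpool create storage mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh

The first gives you the single 8-disk raidz2, the second the two striped raidz1 vdevs, and the third the RAID 10-style striped mirrors.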
After lots and lots more Googling on this specific error message I was getting:
root@kyou:/home/matt# zpool import -f storage
cannot import 'storage': one or more devices are already in use
(Included here for posterity and search indexes) I found this:
https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/VVEwd1VFDmc
The problem described there was the same: mdraid was claiming the partitions and assembling them into an md array on every boot, before ZFS was loaded.
I remembered seeing some mdadm lines in dmesg, and sure enough:
root@kyou:/home/matt# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md126 : active raid5 sdd[2] sdb[0] sde[1]
1953524992 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
These drives were, once upon a time, part of a software RAID 5 array. For some reason, during the upgrade, mdadm rescanned the drives, found that they had once been part of an md array, and reassembled it. This was verified with:
root@kyou:/storage# mdadm --examine /dev/sd[a-z]
Those three drives showed a pile of md superblock information. For now, I stopped the array:
root@kyou:/home/matt# mdadm --stop /dev/md126
mdadm: stopped /dev/md126
And re-running import:
root@kyou:/home/matt# zpool import -f storage
brought the pool back online.
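(To double-check that the pool came back healthy, zpool status storage will show the vdev layout and whether any device reports errors.)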
Now I'll take a snapshot of that pool for backup and then run mdadm --zero-superblock on the drives, so the leftover md metadata can't be picked up again on a future boot.
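Roughly, those two steps look like this (the snapshot name is arbitrary, and the device list is taken from the /proc/mdstat output above; double-check it before zeroing anything):

zfs snapshot -r storage@pre-md-cleanup
mdadm --zero-superblock /dev/sdb /dev/sdd /dev/sde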
Best Answer
I'm surprised you have such a large setup. Did you build this array? Depending on the pool design, this can be a poor arrangement for performance.
Either way, the zpool man page explains this. zfs list will show your usable space; zpool list shows parity space as storage space.
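As a hypothetical illustration: with eight 2 TB disks in a raidz2 pool, zpool list would report roughly 16 TB because it counts every disk, while zfs list would show somewhere around 12 TB usable, since two disks' worth of capacity goes to parity (minus some filesystem overhead).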