I have an EasyStore NAS running Ubuntu, and I want to replace the disks. I want to add the ZFS kernel modules and create a ZFS RAIDZ pool. Due to space constraints, I have to move the data to the new pool right away. Can I create the new pool on another computer and move it to the EasyStore after the files are moved? Will ZFS have problems recognizing the disks, or anything else?
Ubuntu – move a ZFS pool to another computer
ubuntu, zfs
Related Solutions
With 20 disks you have a lot of options. I'm assuming you already have drives for the OS, so the 20 disks would be dedicated data drives. In my Sun Fire x4540 (48 drives), I've allocated 20 drives in a mirrored setup and 24 in a striped raidz1 config (6 disks per raidz and 4 striped vdevs). Two disks are for the OS and the remainder are spares.
Which controller are you using? You may want to refer to: ZFS SAS/SATA controller recommendations
Don't use hardware RAID if you can avoid it. ZFS thrives when drives are presented to the OS as raw disks.
Your raidz1 performance increases with the number of raidz1 groups striped together. With 20 disks, you could use 4 raidz1 groups of 5 disks each, or 5 groups of 4 disks; performance on the latter will be better. Your fault tolerance in either setup is one disk failure per group (so, in the best case, 4 or 5 disks could fail without data loss, as long as no two are in the same group).
The read speed from a raidz1 or raidz2 group is equivalent to the read speed of one disk. With the above setup, your theoretical max read speeds would be equivalent to that of 4 or 5 disks (for each vdev/group of raidz1 disks).
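For reference, the 5-groups-of-4 layout would be created along these lines (just a sketch; vol1 and the cXtYdZ device names are examples, substitute your own):

zpool create vol1 \
    raidz1 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
    raidz1 c4t2d0 c5t2d0 c6t2d0 c7t2d0 \
    raidz1 c4t3d0 c5t3d0 c6t3d0 c7t3d0 \
    raidz1 c4t4d0 c5t4d0 c6t4d0 c7t4d0 \
    raidz1 c4t5d0 c5t5d0 c6t5d0 c7t5d0

Each raidz1 keyword starts a new group, and ZFS stripes across all five groups automatically.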
Going with the mirrored setup would maximize speed, but you will run into the bandwidth limitations of your controller at that point. You may not need that type of speed, so I'd suggest a combination of raidz1 and stripes. With mirrors, you could sustain one failed disk per mirrored pair (e.g. up to 10 disks could possibly fail if they're the right ones).
Whichever solution you go with, you should consider a hot-spare arrangement. Perhaps 18 disks in a mirrored arrangement with 2 hot-spares, or three striped 6-disk raidz1 vdevs with 2 hot-spares...
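For reference, spares are just another vdev type on the command line; something like this (device names are again placeholders) tacks two hot spares onto an existing pool:

zpool add vol1 spare c9t3d0 c9t4d0

A spare can be pulled in by hand with zpool replace, and depending on the platform it may also be activated automatically when a disk faults.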
When I built my first ZFS setup, I used this note from Sun to help understand RAID level performance...
http://blogs.oracle.com/relling/entry/zfs_raid_recommendations_space_performance
Examples with 20 disks:
20-disk mirrored pairs.
  pool: vol1
 state: ONLINE
 scrub: scrub completed after 3h16m with 0 errors on Fri Nov 26 09:45:54 2010
config:

        NAME        STATE     READ WRITE CKSUM
        vol1        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
            c9t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8t2d0  ONLINE       0     0     0
            c9t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c8t3d0  ONLINE       0     0     0
            c9t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
20-disk striped raidz1 consisting of 4 stripes of 5-disk raidz1 vdevs.
  pool: vol1
 state: ONLINE
 scrub: scrub completed after 14h38m with 0 errors on Fri Nov 26 21:07:53 2010
config:

        NAME        STATE     READ WRITE CKSUM
        vol1        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c8t4d0  ONLINE       0     0     0
            c9t4d0  ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0
            c9t5d0  ONLINE       0     0     0
            c4t6d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0
            c8t6d0  ONLINE       0     0     0
            c9t6d0  ONLINE       0     0     0
            c4t7d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0
            c7t7d0  ONLINE       0     0     0
            c8t7d0  ONLINE       0     0     0
            c9t7d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
Edit: Or if you want two pools of storage, you could break your 20 disks into two groups:
10 disks in mirrored pairs (5 per controller).
AND
3 stripes of 3-disk raidz1 groups
AND
1 global spare...
That gives you both types of storage, good redundancy, a spare drive, and you can test the performance of each pool back-to-back.
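As a sketch (placeholder pool and device names), that split could be created along these lines:

# pool 1: 10 disks as 5 mirrored pairs
zpool create fastvol \
    mirror c4t1d0 c5t1d0 \
    mirror c4t2d0 c5t2d0 \
    mirror c4t3d0 c5t3d0 \
    mirror c4t4d0 c5t4d0 \
    mirror c4t5d0 c5t5d0

# pool 2: 9 disks as 3 striped raidz1 vdevs, with the spare attached here
zpool create bigvol \
    raidz1 c6t1d0 c7t1d0 c8t1d0 \
    raidz1 c6t2d0 c7t2d0 c8t2d0 \
    raidz1 c6t3d0 c7t3d0 c8t3d0 \
    spare c9t1d0

Whether that one spare can be shared between both pools depends on your ZFS implementation, so in the sketch it is only attached to one of them.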
To be able to increase storage space by replacing only a few of the disks, you should use mirrored vdevs striped together (which indeed amounts to RAID 10).
In your case, with 4 drives, that would mean working toward something like this:
zpool
  mirror
    disk1
    disk2
  mirror
    disk3
    disk4
This would provide you with 2TB of storage (given all disks are 1TB) and good redundancy: the pool survives any single disk failure, and with 2 simultaneous failures there is only a 33% chance of losing the array (of the 6 possible two-disk combinations, only the 2 that take out both halves of the same mirror are fatal).
Now to get there I would buy those 2 new 1TB disks, and put those in the pool:
zpool create zpool mirror disk1 disk2
Then move your data off the DLINK to the newly created pool.
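How you copy the data over depends on what the DLINK exposes; a plain rsync from a mounted share is a simple, resumable way to do it (just a sketch, the paths are hypothetical):

# /mnt/dlink = old NAS share mounted locally, /zpool = mountpoint of the new pool
rsync -avh --progress /mnt/dlink/ /zpool/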
Once that is done, you can scavenge the DLINK disks and add them to the pool, to increase storage:
zpool add zpool mirror disk3 disk4
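A tip on that zpool add: it supports a dry-run flag, so you can check the layout you are about to commit to (this matters, see the warning near the end about adding vdevs wrong):

# -n only prints the configuration that would result, without touching the pool
zpool add -n zpool mirror disk3 disk4

# after the real add, confirm the shape of the pool
zpool status zpool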
If you later want to increase storage even more, you can do that by adding more vdevs (preferably also mirrors) OR by replacing only 2 of the 4 disks. Replacing goes as follows:
zpool offline zpool disk3
# remove physical disk3 at this point
# insert the new, bigger disk in place of disk3
zpool replace zpool disk3
# wait for the resilver to finish (watch zpool status)
# after the resilver, do the same with disk4
# once both disks in the mirror have been replaced, the vdev can grow, increasing the pool size
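One detail worth adding: the pool only grows by itself if the autoexpand property is on; otherwise you have to expand the devices by hand after the resilver. A short sketch (same placeholder names as above):

# let the pool grow automatically once every disk in a vdev has been replaced by a bigger one
zpool set autoexpand=on zpool

# or, after the replacements, expand the devices manually
zpool online -e zpool disk3
zpool online -e zpool disk4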
Now, let's look at the other option. If you had made 1 raidz vdev like so:
zpool
  raidz
    disk1
    disk2
    disk3
    disk4
You would have 3TB of storage, but to increase that storage by just replacing disks (and not adding any), you would have to replace ALL 4 disks (one by one, of course) before the pool size increases. Also, this configuration loses the whole array if 2 disks crash simultaneously.
The raidz configuration would also be slower than the striped mirrors, since raidz is more computationally intensive, while striping across mirrors actually improves both read and write performance. With 'normal' hard disks (non-SSD), the striped mirrors will likely saturate your gigabit connection for sequential reads and writes, because ZFS can combine the disks' bandwidth: 1 Gb/s is only ~125 megabytes/s, a typical hard disk delivers around 90 MB/s, and two striped vdevs reading in parallel get you to roughly 180 MB/s in theory. I don't think the above raidz configuration will be able to do that on consumer hardware.
To conclude, the score for striped mirrors / RAID 10 with your number of disks is:
+ max redundancy
+ maintenance
- available storage space
+ speed
The score for raidz is:
- max redundancy
- maintenance
+ available storage space
- speed
I would say striped mirrors win :)
A final tip: definitely read up more on the how-to and the why before starting! Maybe even simulate the whole procedure in a virtual machine. I'm thinking particularly of the step where you add the second mirror vdev! If you do it wrong, you might end up with a different configuration than you had hoped for, and ZFS is very unforgiving in those cases, since it doesn't allow you to remove vdevs from the pool or disks from raidz vdevs!! (Removing disks from mirror vdevs is allowed, however.)
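You don't even need a full VM for the rehearsal: ZFS will happily build a pool on top of plain files, so you can practice the exact commands first (a sketch; the file paths and the testpool name are made up):

# four 256 MB sparse files stand in for the disks
truncate -s 256M /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4

# rehearse: create the first mirror, then add the second one
zpool create testpool mirror /tmp/d1 /tmp/d2
zpool add testpool mirror /tmp/d3 /tmp/d4
zpool status testpool

# throw the practice pool away afterwards
zpool destroy testpool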
Also, be future-proof and label and align your disks, so you don't get into trouble with Advanced Format (4K-sector) drives! For more information on the intricacies of ZFS and 4K drives, I suggest you read this thread on the FreeBSD forum.
Best Answer
No problems... You can create the pool and use the zpool export command on the system you create the pool on. Once the disks are attached to the final destination host, you can use the zpool import command to import the pool.
See: Migrating ZFS Storage Pools
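A minimal sketch of the whole move, assuming the pool is called tank (the name is a placeholder for whatever you actually use):

# on the build machine, once the data has been copied onto the pool
zpool export tank

# move the disks to the EasyStore, then on that machine:
zpool import          # with no arguments, lists pools available for import
zpool import tank     # imports the pool and mounts its datasets

If the pool was exported cleanly, the import should not complain; ZFS identifies the member disks by their on-disk labels, not by device names, so it is fine if the disks show up under different names on the new machine.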