FreeBSD – Convert RAIDZ1 to RAIDZ2 via a temporary drive

freebsd · raidz · rsync · truenas · zfs

I set up a new NAS server a few months ago and configured its pool as RAIDZ1. I'm re-thinking that decision and would like to grab an extra drive and move to RAIDZ2. Before everyone replies that this isn't possible with ZFS, I'd like to point out that I know it's not directly possible. I've been searching the docs for days, but there doesn't seem to be a set of instructions for how to do this manually. I believe I'll need to:

  1. Move the entire pool onto a drive big enough to hold the used space (a quick space check is sketched below).
  2. Tear down the existing pool.
  3. Build a new pool in RAIDZ2 using the extra drive.
  4. Move the data back into the new pool.
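
For step 1, it may help to check up front that everything on the pool, including snapshots, actually fits on the temporary drive. A minimal sketch, assuming the pool is named `tank` (a placeholder; substitute your volume name):

```sh
# Overall pool capacity and usage ("tank" is a placeholder pool name).
zpool list tank

# Per-dataset usage, which is what has to fit on the temporary drive.
zfs list -r -o name,used,available,referenced tank

# Snapshots count toward the space that needs to be copied as well.
zfs list -r -t snapshot -o name,used tank
```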

It doesn't seem too tricky, but I'm not familiar enough with ZFS or FreeBSD to trust that I have successfully achieved step 1. I'm not entirely sure about step 4 either, but that can be retried, while messing up the first step would not be fun. My reading suggests that rsync is the way to go to effectively clone a drive, but I want to be sure that what I copy is exactly the same as the source, including system/hidden files, symlinks, jails, etc. The OS is running off mirrored flash drives which won't be changing, and ideally they wouldn't even notice anything had changed at next boot (e.g. all jails start up as before). Does copying/cloning the pool/drives also move partition information? Will FreeNAS recreate these, or will I need to manipulate the partitions manually?
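
If you do go the rsync route, the flags matter, since a plain copy can drop hard links and extended attributes. A hedged sketch, assuming the pool is mounted at `/mnt/tank` and the temp drive at `/mnt/temp` (both paths are placeholders). Note that rsync copies files only; it won't carry over ZFS dataset boundaries, dataset properties, or existing snapshots, which is one reason the snapshot/replication approach in the answer below is usually preferred:

```sh
# File-level clone of the mounted pool onto the temp drive.
#   -a             archive mode: recursive, preserves symlinks, permissions,
#                  times, and owner/group
#   -H             preserve hard links
#   -X             preserve extended attributes
#   -S             handle sparse files efficiently
#   --numeric-ids  keep numeric UID/GID so ownership maps back correctly
rsync -aHXS --numeric-ids /mnt/tank/ /mnt/temp/
```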

I'm running FreeNAS 9.3 and have 3x3TB HDDs in a single RAIDZ1 pool. I would like to end up with 4x3TB in RAIDZ2. The current pool is using 1.3TB of space and I have an external 2TB drive for temporary storage. It's not mission-critical if data is lost.

Update

The actual question: how do I copy the pool to the temp drive while ensuring it is a complete enough copy to push back onto a re-created zpool? And how do I push it back, if it's not simply the reverse of the original copy?
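
One way to gain confidence in the copy before destroying the original, sketched below assuming pools named `tank` and `temp` (placeholder names), is to compare the dataset/snapshot trees and scrub the temporary pool:

```sh
# Compare the dataset and snapshot trees on both pools; apart from the pool
# name prefix, the listings should match. (USED values will differ slightly
# because the pools have different layouts.)
zfs list -r -t all -o name tank
zfs list -r -t all -o name temp

# Scrub the temporary pool so any checksum problems surface while the
# original data still exists.
zpool scrub temp
zpool status temp
```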

Best Answer

Here is how I would approach it, purely using the FreeNAS GUI (a rough command-line equivalent is sketched after the list):

  • Set up a periodic snapshot task for a recursive snapshot of your root dataset (e.g. volume x).
  • Use the temp drive to create a simple 1-disk stripe volume, say volume y.
  • Establish replication from x to y.
  • Wait until replication is done.
  • Set up a periodic snapshot task for a recursive snapshot of y.
  • Destroy and re-create x as RAIDZ2 with the four drives.
  • Set up replication back from y to x.
  • Wait until replication is done.
  • Destroy y.
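
For reference, the snapshot and replication tasks above boil down to `zfs send`/`zfs receive` under the hood. Below is a rough command-line sketch of the same flow, assuming the existing pool is `tank`, the temporary pool is `temp`, and the device names are placeholders. On FreeNAS it is safer to let the GUI create and destroy the volumes so its configuration database stays in sync; this is only meant to show what the data movement amounts to:

```sh
# 1. Take a recursive snapshot of the whole pool.
zfs snapshot -r tank@migrate

# 2. Create the single-disk temporary pool and replicate everything
#    (datasets, properties, snapshots) onto it.
zpool create temp da0
zfs send -R tank@migrate | zfs receive -F temp

# 3. Destroy the old pool and rebuild it as RAIDZ2 with the fourth disk.
zpool destroy tank
zpool create tank raidz2 ada0 ada1 ada2 ada3

# 4. Replicate everything back from the temporary pool, then retire it.
zfs send -R temp@migrate | zfs receive -F tank
zpool destroy temp
```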

Disclaimer: I haven't tested this to make sure all the jails will still work, so you may want to get confirmation from somewhere else, or just be prepared to recreate the jails (or carefully examine y after the first replication is done). This approach worked smoothly for my regular datasets, with all historical snapshot information preserved.