Ubuntu + ZFS: How to migrate entire system to new disks

migration, replication, Ubuntu, zfs

Here's my scenario:

I have Ubuntu with native ZFS installed on a server with 2 x 500 GB SATA disks. I installed it following this guide: https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem

So disk 1 has a 16 MB partition for /boot/grub, and the rest of that disk plus the entire second disk are dedicated to ZFS in a mirrored zpool. Everything works fine.

The problem is that I now need to get rid of the 500 GB disks and replace them with 2 x 1.5 TB disks.

Is there any way I can replicate everything (data, partition table, etc.) from my two 500 GB HDDs to the two 1.5 TB HDDs without having to re-install the system from scratch?

Here is the information requested by @jlliagre:

fdisk:

# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf4bfe018

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63       32129       16033+  be  Solaris boot
/dev/sda2           32130   976773167   488370519    5  Extended
/dev/sda5           32193   976773167   488370487+  bf  Solaris

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf4bfe018

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63   976773167   488386552+   5  Extended
/dev/sdb5             126   976773167   488386521   bf  Solaris

zpool status:

# zpool status
  pool: labpool
 state: ONLINE
 scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    labpool     ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sda5    ONLINE       0     0     0
        sdb5    ONLINE       0     0     0

errors: No known data errors

zpool list:

# zpool list
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
labpool   464G  70.7G   393G    15%  1.00x  ONLINE  -

zpool history:

# zpool history
History for 'labpool':
2012-02-17.19:23:39 zpool create labpool mirror /dev/disk/by-id/ata-WDC_WD5000AAKX-001CA0_WD-WCAYUFF66324-part5 /dev/disk/by-id/ata-WDC_WD5000AAKX-001CA0_WD-WCAYUFJ06204-part5
2012-02-17.19:26:39 zfs create labpool/ROOT
2012-02-17.19:26:44 zfs create labpool/ROOT/ubuntu-1
2012-02-17.19:27:15 zfs set mountpoint=/ labpool/ROOT/ubuntu-1
2012-02-17.19:27:36 zpool set bootfs=labpool/ROOT/ubuntu-1 labpool
2012-02-17.19:28:03 zpool export labpool
2012-02-17.19:28:30 zpool import -d /dev/disk/by-id/ -R /mnt labpool
2012-02-17.20:48:20 zpool export labpool
2012-02-17.21:03:30 zpool import -f -N labpool
2012-02-17.21:07:35 zpool import -f -N labpool
2012-02-17.21:42:09 zpool import -f -N labpool
2012-02-17.21:51:39 zpool import -f -N labpool
2012-02-17.21:55:49 zpool import -f -N labpool
2012-02-17.21:58:10 zpool import -f -N labpool
2012-02-22.13:25:26 zpool import -f -N labpool
2012-02-22.13:40:15 zpool import -f -N labpool
2012-02-22.12:50:38 zpool import -f -N labpool

I've been thinking: what if I boot from a LiveCD, follow the installation guide up to step 4 (partitioning and creating the ZFS pool on my new pair of disks), then mount the new filesystem on /mnt/new and the old one on /mnt/old and rsync from old to new? Would that be possible, or will it mess everything up?
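Roughly what I have in mind is the sketch below (the pool name labpool2 and the mount points are just placeholders, not something I've tested):

# From the LiveCD, after partitioning the new disks and creating a new
# mirrored pool on them (called "labpool2" here only as a placeholder):
zpool import -R /mnt/old -N labpool        # old pool under an alternate root
zpool import -R /mnt/new -N labpool2       # new pool under another alternate root
zfs mount labpool/ROOT/ubuntu-1
zfs mount labpool2/ROOT/ubuntu-1

# Copy everything, preserving hard links, ACLs and extended attributes,
# staying within one filesystem.
rsync -aHAXx --numeric-ids /mnt/old/ /mnt/new/

# Point the new pool at its root filesystem before rebooting; the boot
# partition and GRUB would still need to be handled separately.
zpool set bootfs=labpool2/ROOT/ubuntu-1 labpool2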

Best Answer

This should work:

  • Create a similar partition layout on the new disks; ZFS isn't going to do it for you (a combined sketch of these steps follows the list).

  • Copy the boot partition and reinstall the boot loader.

  • Set the autoexpand property on your root pool: zpool set autoexpand=on labpool

  • Replace one of the disks, e.g. zpool replace labpool sda5 sdc5, and wait for the resilvering to finish mirroring all the pool data, checking progress with zpool status.

  • Replace the second disk: zpool replace labpool sdb5 sdd5.

  • Remove the old disks.
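
A combined sketch of those steps, assuming the new 1.5 TB disks show up as /dev/sdc and /dev/sdd (adjust the device names and double-check the edited partition layout before running anything):

# Assumed device names: old disks sda/sdb, new disks sdc/sdd.

# 1. Recreate a similar MBR layout on each new disk: keep the small boot
#    partition, but size the extended/logical ZFS partition for 1.5 TB.
sfdisk -d /dev/sda > layout.sda        # dump the old table as a starting point
# ...edit layout.sda so partitions 2 and 5 span the larger disk...
sfdisk /dev/sdc < layout.sda           # apply the edited layout

# 2. Copy the boot partition and reinstall GRUB on the new disk.
dd if=/dev/sda1 of=/dev/sdc1 bs=1M
grub-install /dev/sdc

# 3. Allow the pool to grow once both larger devices are in place.
zpool set autoexpand=on labpool

# 4. Swap the mirror members one at a time, letting each resilver finish.
zpool replace labpool sda5 sdc5
zpool status labpool                   # wait until resilvering completes
zpool replace labpool sdb5 sdd5
zpool status labpool                   # wait again before pulling the old disks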