Linux – Recover ZFS data

disaster-recovery, linux, zfs

Context/Hardware:

  • HP MicroServer Gen8

  • 1× 1 TB drive (standalone), 2× 4 TB drives (RAID)

  • 1× 16 GB SD card (internal iLO slot) with Debian + OpenMediaVault

Event:

  • SD card failure

  • Restarted the server and installed Ubuntu on the 1 TB drive

Consequences:

  • ZFS is no longer accessible:

    root@fremen:~# sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
    NAME   FSTYPE       SIZE MOUNTPOINT                    LABEL     
    sda    zfs_member 931,5G                               
    └─sda1 ext4       931,5G /                             
    sdb    zfs_member   3,7T                               
    └─sdb1 zfs_member   3,7T                              
    sdc    zfs_member   3,7T                               
    └─sdc1 zfs_member   3,7T                               
    sdd                 5,7G
    
    root@fremen:~# zpool import -D -f 
    no pools available to import
    
    root@fremen:~# file -s /dev/sd?1
    /dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=9c46f52c-b529-4c39-a23b-819726f79146 (needs journal recovery) (extents) (64bit) (large files) (huge files)
    /dev/sdb1: data
    /dev/sdc1: data
    
  • The disks still appear to belong to the ZFS pool, but no data is accessible.

What should I do in this situation? It is a friend's setup, and I can only connect to the machine remotely. I do not want to create a new pool, as that would destroy the data on the ZFS volumes. Since no pools can be found on the disks, zdb cannot be used either.
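
One note for anyone landing here: zdb -l reads ZFS labels directly from a device and does not require an imported pool, so it can still tell you whether a zfs_member signature corresponds to a real pool. A minimal check, using the device names from the lsblk output above:

    # Dump any ZFS label present on the partition; no imported pool needed.
    # "failed to unpack label" on all four labels means no usable ZFS label
    # exists, and the zfs_member signature is likely stale.
    zdb -l /dev/sdb1
    zdb -l /dev/sdc1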

Best Answer

Michael Hampton's comments were the solution for this.

It turned out that OMV hadn't actually used ZFS at all; it had merely marked the drives as ZFS members.
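
A quick way to verify this, assuming util-linux's wipefs is available (run with no options it only lists signatures; it does not erase anything):

    # List all filesystem/RAID signatures on the partition, non-destructively.
    wipefs /dev/sdb1
    # A stray zfs_member signature can later be removed with
    # `wipefs --offset <offset> /dev/sdb1` -- but only once the data is safe.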

I dd-ed one of the drives to an image file and ran testdisk on the image. It turned out there was a 0x0700 (Microsoft basic data) partition on the disk. I wrote a new partition table with testdisk and mounted the partition via a loop device; it proved to be an ext4 filesystem with a corrupted journal. After fixing the errors I was able to salvage all the data, so I repeated the same steps on the physical disks and got the data back.
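
For reference, here are the steps above as a sketch. The image path, loop device, and partition numbers are assumptions, and testdisk itself is interactive, so only the surrounding commands are shown:

    # 1. Image the drive so all experiments happen on a copy, not the original.
    dd if=/dev/sdb of=/srv/recovery/sdb.img bs=1M conv=noerror,sync status=progress

    # 2. Run testdisk on the image and write the recovered partition table
    #    (interactive: Analyse -> Quick Search -> Write).
    testdisk /srv/recovery/sdb.img

    # 3. Attach the image to a loop device; --partscan creates /dev/loop0p1 etc.
    losetup --find --show --partscan /srv/recovery/sdb.img

    # 4. Repair the corrupted ext4 journal, then mount read-only to verify.
    fsck.ext4 -f /dev/loop0p1
    mount -o ro /dev/loop0p1 /mnt/recovered

Once the copy mounts cleanly and the data checks out, the same testdisk/fsck sequence can be repeated on the physical disks.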
