Attempt recovery of MDADM RAID on images created with dd

mdadm, software-raid

I have a software RAID5 array with 4 + 1 disks in which 2 of the disks have failed. I'm hoping that with some percussive maintenance, I might get one of the failed disks working again long enough to do a block-level copy and get an image off it.

The plan would be to make images of 4 (or all 5) of the disks with dd, and then try to rebuild the array on those images.

Is there something right off the bat that I've missed that would make this impossible? If not, how would I:

  1. Copy an image of each device to a file
  2. Mount these
  3. Reconfigure mdadm to use these images as the devices

Obviously, a lot of things could have happened that would leave the data corrupt, but there are reasons to think the actual data might be intact across the 4 disks:

  1. The second disk failure might have been due to power loss
  2. The data I'm interested in recovering wasn't written after the first failure

Best Answer

In general, this approach will work. As long as the disks are still working (and you do not expect them to fail), you could also use the device mapper to create overlay snapshots instead of fully copying the data off the disks (having the full copies as a backup might still be a good idea, though).
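For illustration, here is a minimal sketch of both options. The device names (/dev/sdb through /dev/sde), the /mnt/recovery path and the overlay size are all placeholders, and ddrescue is usually a better fit than plain dd for a disk that is actively failing, since it retries and logs bad regions:

# full block-level copies with dd; unreadable blocks are padded with zeros
for d in sdb sdc sdd sde; do
    dd if=/dev/$d of=/mnt/recovery/$d.img bs=64K conv=noerror,sync status=progress
done

# ...or a non-persistent device-mapper overlay: writes go to a sparse file,
# and the original disk is never modified
truncate -s 4G /mnt/recovery/sdb.ovl
cow=$(losetup --find --show /mnt/recovery/sdb.ovl)
size=$(blockdev --getsz /dev/sdb)        # origin size in 512-byte sectors
dmsetup create sdb-snap --table "0 $size snapshot /dev/sdb $cow N 8"

In the dmsetup line, N makes the snapshot non-persistent and 8 is the chunk size in sectors; the overlaid device then shows up as /dev/mapper/sdb-snap (repeat for the other disks).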

Once you have the image files, you will need to create loopback block devices from them:

losetup /dev/loopX /path/to/imagefileX
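For example (the image paths are assumed from the sketch above), you can attach each image read-only and check that mdadm still sees a valid RAID superblock on it:

loopdev=$(losetup --find --show --read-only /mnt/recovery/sdb.img)
mdadm --examine "$loopdev"       # should report the array UUID, RAID level and event count

Repeat this for each image; comparing the event counts tells you which member is stale.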

After this is done, you can assemble your array using the loopback block devices. If you can't recover the most recently failed disk, the timestamps (event counts) on the stale disk image you do have will differ from the rest, and the array will refuse to assemble on its own. Since you are really only interested in data written before the failures, take a look at the "Recovering a failed software RAID" section of the kernel RAID documentation; it should walk you through getting the array assembled with at least some of the data recoverable.
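As a rough sketch (the md device, loop device names and mount point are all assumptions, and --readonly needs a reasonably recent mdadm), a forced, degraded, read-only assembly from four of the five members might look like this:

# force assembly past mismatched event counts and start the array degraded;
# keeping it read-only means no resync or rebuild is attempted
mdadm --assemble --readonly --force --run /dev/md127 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
cat /proc/mdstat                 # should show the array active but degraded
mount -o ro /dev/md127 /mnt/recovered

If you went the overlay route instead of full images, pass the /dev/mapper/*-snap devices to --assemble in place of the loop devices.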
