I have a RAID 5 array with 3 disks (sdb, sdc, sdd).
Last weekend I was alerted that sdd had failed, so I replaced the drive, added it back into the array, and let it rebuild (1.5 TB).
The rebuild stopped at about 64% with an error, and I found that sdb is failing as well.
I imaged sdd onto a new drive (ddrescue) and sdb onto a new drive (ddrescue).
The copy of sdd went well; only about 3 MB couldn't be copied. sdb had a lot more problems. (Please note I couldn't get a NEW drive, so my image drives are actually physically larger than 1.5 TB.)
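For reference, the imaging was done along these lines (the target device name and map-file path here are examples, not my exact ones):

ddrescue -f -n /dev/sdb /dev/sde sdb.map
ddrescue -f -r3 /dev/sdb /dev/sde sdb.map

The first pass skips the slow scraping of bad areas (-n); the second pass goes back and retries the bad areas three times (-r3). The map file lets ddrescue resume and track which sectors are still unread.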
Trying to re-assemble the array as it was before with:
mdadm -A /dev/md0 /dev/sdb /dev/sdc /dev/sdd
Gave an error:
mdadm: no recogniseable superblock on /dev/sdb
I also tried --force, same result.
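From what I've read, the first diagnostic step would be to inspect whatever superblocks remain on each member (standard mdadm usage, nothing exotic assumed here):

mdadm --examine /dev/sdb /dev/sdc /dev/sdd

On a healthy member this prints the array UUID, RAID level, chunk size, and device role, which is exactly the information needed to know what the original array looked like.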
I also did some reading about recovering the array by completely re-creating it, so I tried:
mdadm --verbose --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc missing missing
(sdc is the only drive that didn't fail; I was going to get the array started and then add the other two drives in.)
This resulted in:
mdadm: RUN_ARRAY failed: Input/output error
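From reading around since, I gather a 3-disk RAID 5 can only run with at most one member missing, which presumably explains the RUN_ARRAY error, and that re-creating over the old members only stands a chance if every original parameter matches. A sketch of what I understand that to look like, where the chunk size and metadata version are placeholders that would have to come from mdadm --examine output, and which rewrites the superblocks (i.e. it is destructive):

mdadm --create /dev/md0 --verbose --level=5 --raid-devices=3 --chunk=64 --metadata=0.90 --assume-clean /dev/sdb /dev/sdc /dev/sdd

--assume-clean keeps mdadm from starting a resync, so it won't immediately overwrite the data area, but the device order still has to match the original exactly.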
I am really in a bad spot here. I have a lot of data I need, about 1.2 TB of stuff; this is a worst-case scenario!
Best Answer
"There is no backup." This is the problem.
Storing important data (on ANY system, no matter how reliable) without a backup is indeed the problem!
Having no backup, and having experienced a failure mode for RAID 5 for which there is no proper recovery path, you are now what our British friends would refer to as "Right Royally Rogered" (actually they would probably use more colorful language).
You're down to two options at this point:
1. Accept the data as lost and rebuild the array from scratch.
2. Send the drives to a professional data recovery service.
(1 and 2 are not mutually exclusive; in fact, when you see the price for (2), you will probably do (1)...)
You can consider this a learning experience, and an expensive object lesson in the importance of regular backups and restore testing...