RAID1 rebuild fails due to disk errors

Tags: dell-perc, raid, raid1, smart

Quick info: Dell R410 with 2x 500 GB drives in RAID1 on an H700 Adapter

Recently one of the drives in a RAID1 array on a server failed; let's call it Drive 0. The RAID controller marked it as faulty and took it offline. I replaced the faulty disk with a new one (same series and manufacturer, just bigger) and configured the new disk as a hot spare.
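For reference, the same thing can typically be done from a running OS (rather than the controller BIOS) with MegaCli, roughly like this; the [enclosure:slot] address and adapter number are placeholders that have to match the -PDList output:

    MegaCli -PDList -aAll                      # find the new drive's [enclosure:slot] address
    MegaCli -PDHSP -Set -PhysDrv [32:0] -a0    # mark it as a global hot spare on adapter 0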

The rebuild from Drive 1 started immediately, and after about 1.5 hours I got a message that Drive 1 had failed. The server was unresponsive (kernel panic) and required a reboot. Given that half an hour before this error the rebuild was at about 40%, I figured the new drive was not in sync yet and tried to boot with only Drive 1.

The RAID controller complained a bit about missing RAID arrays, but it found a foreign RAID configuration on Drive 1 and I imported it. The server booted and is running (from the degraded RAID).
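For completeness, the foreign-configuration import can also be done from the OS with MegaCli (assuming it is installed); adapter 0 is an assumption for a single-controller box:

    MegaCli -CfgForeign -Scan -a0      # list foreign configurations the controller sees
    MegaCli -CfgForeign -Import -a0    # import them so the virtual disk comes back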

Here is the SMART data for the disks.
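The dumps below are smartctl output; for drives behind an H700 they can usually be pulled with the MegaRAID passthrough, e.g. as follows, where the device ID after megaraid, and /dev/sda are placeholders:

    smartctl -a -d megaraid,0 /dev/sda    # SMART attributes for the drive with device ID 0
    smartctl -x -d megaraid,0 /dev/sda    # same plus the extended error log quoted further down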
Drive 0 (the one that failed first)

ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR-K   200   200   051    -    1
  3 Spin_Up_Time            POS--K   142   142   021    -    3866
  4 Start_Stop_Count        -O--CK   100   100   000    -    12
  5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
  7 Seek_Error_Rate         -OSR-K   200   200   000    -    0
  9 Power_On_Hours          -O--CK   086   086   000    -    10432
 10 Spin_Retry_Count        -O--CK   100   253   000    -    0
 11 Calibration_Retry_Count -O--CK   100   253   000    -    0
 12 Power_Cycle_Count       -O--CK   100   100   000    -    11
192 Power-Off_Retract_Count -O--CK   200   200   000    -    10
193 Load_Cycle_Count        -O--CK   200   200   000    -    1
194 Temperature_Celsius     -O---K   112   106   000    -    31
196 Reallocated_Event_Count -O--CK   200   200   000    -    0
197 Current_Pending_Sector  -O--CK   200   200   000    -    0
198 Offline_Uncorrectable   ----CK   200   200   000    -    0
199 UDMA_CRC_Error_Count    -O--CK   200   200   000    -    0
200 Multi_Zone_Error_Rate   ---R--   200   198   000    -    3

And Drive 1 (the drive the controller reported as healthy until the rebuild was attempted)

ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR-K   200   200   051    -    35
  3 Spin_Up_Time            POS--K   143   143   021    -    3841
  4 Start_Stop_Count        -O--CK   100   100   000    -    12
  5 Reallocated_Sector_Ct   PO--CK   200   200   140    -    0
  7 Seek_Error_Rate         -OSR-K   200   200   000    -    0
  9 Power_On_Hours          -O--CK   086   086   000    -    10455
 10 Spin_Retry_Count        -O--CK   100   253   000    -    0
 11 Calibration_Retry_Count -O--CK   100   253   000    -    0
 12 Power_Cycle_Count       -O--CK   100   100   000    -    11
192 Power-Off_Retract_Count -O--CK   200   200   000    -    10
193 Load_Cycle_Count        -O--CK   200   200   000    -    1
194 Temperature_Celsius     -O---K   114   105   000    -    29
196 Reallocated_Event_Count -O--CK   200   200   000    -    0
197 Current_Pending_Sector  -O--CK   200   200   000    -    3
198 Offline_Uncorrectable   ----CK   100   253   000    -    0
199 UDMA_CRC_Error_Count    -O--CK   200   200   000    -    0
200 Multi_Zone_Error_Rate   ---R--   100   253   000    -    0

In the extended SMART error logs I found the following.

Drive 0 has only one error:

Error 1 [0] occurred at disk power-on lifetime: 10282 hours (428 days + 10 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 18 00 00 00 6a 24 20 40 00  Error: IDNF at LBA = 0x006a2420 = 6956064

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 60 00 f8 00 00 00 6a 24 20 40 00 17d+20:25:18.105  WRITE FPDMA QUEUED
  61 00 18 00 60 00 00 00 6a 24 00 40 00 17d+20:25:18.105  WRITE FPDMA QUEUED
  61 00 80 00 58 00 00 00 6a 23 80 40 00 17d+20:25:18.105  WRITE FPDMA QUEUED
  61 00 68 00 50 00 00 00 6a 23 18 40 00 17d+20:25:18.105  WRITE FPDMA QUEUED
  61 00 10 00 10 00 00 00 6a 23 00 40 00 17d+20:25:18.104  WRITE FPDMA QUEUED

Drive 1, however, has 883 errors. I can see only the last few, and all of them look like this:

Error 883 [18] occurred at disk power-on lifetime: 10454 hours (435 days + 14 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  01 -- 51 00 80 00 00 39 97 19 c2 40 00  Error: AMNF at LBA = 0x399719c2 = 966203842

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 80 00 00 00 00 39 97 19 80 40 00  1d+00:25:57.802  READ FPDMA QUEUED
  2f 00 00 00 01 00 00 00 00 00 10 40 00  1d+00:25:57.779  READ LOG EXT
  60 00 80 00 00 00 00 39 97 19 80 40 00  1d+00:25:55.704  READ FPDMA QUEUED
  2f 00 00 00 01 00 00 00 00 00 10 40 00  1d+00:25:55.681  READ LOG EXT
  60 00 80 00 00 00 00 39 97 19 80 40 00  1d+00:25:53.606  READ FPDMA QUEUED

Given those errors, is there any way I can rebuild the RAID, or should I make a backup, shut down the server, replace the disks with new ones and restore? What about dd'ing the faulty disk to a new one from a Linux system booted from USB/CD?

Also, if anyone has more experience with this: what could cause these errors, a flaky controller or the disks themselves? The disks are about 1 year old, and it seems hard to believe that both would die within such a short timespan.

Best Answer

Actually, if the disks were both from the same batch from the manufacturer, it's not all that surprising that they'd fail around the same time.

They've had the same manufacturing process, environment, and usage patterns. That's why I usually try to order identical model drives from different vendors.

My preferred course of action here would be to contact the manufacturer, replace with better disks, and restore from backup.

Nothing wrong with dd'ing either, but I'm usually under pressure to get the service back up ASAP.
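If you do clone, GNU ddrescue is usually a better tool than plain dd for a disk that's throwing read errors, since it retries and keeps a map of the bad areas. A rough sketch, assuming the drives are attached somewhere Linux sees them as raw block devices (e.g. onboard SATA ports, not behind the H700), with /dev/sdX as the failing source and /dev/sdY as the new disk:

    ddrescue -f -n  /dev/sdX /dev/sdY /root/rescue.map   # first pass, skip the slow retry phase
    ddrescue -f -r3 /dev/sdX /dev/sdY /root/rescue.map   # go back and retry the unreadable areas

Plain dd works too with something like conv=noerror,sync, but unreadable sectors simply come back as zeros and there's no record of what was lost.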

Back in the days of the IBM Deskstar fiasco, I had an entire set of 8 disks go bad within 6 weeks after 4 years of use. I barely got out of that with my data intact.
