Drives not Blinking after RAID 10 Hard Drive Replacement

dell-perc, dell-poweredge, hardware-raid, raid

Last week I replaced a hard drive in a Dell R515 server that has 8 drives in a RAID 10. This server has two rows of hard drives: 4 on top and 4 on the bottom. I replaced the 2nd drive on the top row.

In the past, I've always done this while the server is running, but this time I turned the server off before replacing the hard drive.

When I turned the server back on, I was expecting to see a lot of blinking-light activity on the new hard drive and also on the drive it's supposed to be mirroring.

However, this did not occur, so I decided to just give it some time.

Today I rebooted the server and noticed a warning that the virtual disk is degraded.

Also, normally, while the server is operating, all of the hard drive lights seem to blink at the same time during file access. Now, however, only the bottom row (4 of the 8 drives) blinks during file access.

It seems I did the wrong thing by turning off the server before replacing the drive, and because of that, the server never rebuilt the new drive into the RAID 10.

However, it surprises me that the top row of drives keeps a solid green light with no blinking, while the bottom 4 drives blink during file access.

Did I cause all 4 of the top row of drives to become out of sync with their respective mirror-drives (on the bottom row)?

A few minutes ago, I pulled out the new hot-swap hard drive while the server was running, then put it back in. I can tell by the light activity that the new drive is now being synced with the drive below it.

However, the other 3 hard drives on the top row continue to show a solid green light, with no apparent activity (blinking). Will they start acting normally after the new hard drive is synced, or will I most likely have to pull each of them too before the server's virtual disk gets out of degraded status?

UPDATE: Pulling the drive and putting it back in while the server was running seems to have fixed the issue. Rebooting no longer warns of a degraded virtual disk. The bottom row of drive lights still blinks far more frequently than the top row. Perhaps that makes sense for read access (there's no need to read from the mirrors, while there is a need to write to them), or perhaps I'm remembering incorrectly that all of the drives used to blink together.

Best Answer

A word of wisdom I've heard from storage veterans: "Don't troubleshoot with the blinky lights"

The status LEDs can help supplement other data you have available, but they should never be your sole indicator of what's going on unless you have no other option.

+1 on Joe's comment - don't try to fly blind. Get OMSA installed (find "OpenManage Server Administrator Managed Node" on http://downloads.dell.com/published/pages/poweredge-r515.html for your OS) and refer to the user guide for how to view the status of the virtual disk and the physical drives. You can also export an event log to see what happened leading up to the drive problem and after power-up; that may give you more insight into what actually went wrong if it isn't already obvious from the other data shown in OMSA.
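As a rough sketch (assuming OMSA's command-line tools, omreport/omconfig, are installed and the PERC is controller 0 - exact syntax can vary between OMSA versions), something like this shows the controller, virtual disk, and physical disk state, and exports the controller log:

```
# List controllers to confirm the controller ID (0 is assumed below)
omreport storage controller

# Virtual disk state - should read "Ready"; "Degraded" means a mirror member is missing or out of sync
omreport storage vdisk controller=0

# Per-physical-disk state (Online, Rebuilding, Failed, Foreign, etc.)
omreport storage pdisk controller=0

# Export the controller log for review (option names may differ by OMSA version)
omconfig storage controller action=exportlog controller=0
```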

Did I cause all 4 of the top row of drives to become out of sync with their respective mirror-drives (on the bottom row)?

Quite possibly. That wouldn't be considered "normal behavior", but it may be a side effect of not having followed the documented procedure for drive replacement (or a combination of that and old/buggy controller firmware). Stick with hot-swapping in the future and you'll be a lot better off.
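If you want to rule out the firmware angle, the same OMSA CLI (again assuming controller 0) reports the PERC's firmware and driver levels so you can compare them against the latest release on Dell's support site:

```
# Controller details, including "Firmware Version" and "Driver Version"
omreport storage controller controller=0
```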
