RAID6 degraded drive during rebuild

hardware-raid raid raid-controller

Background and context

I have a 3ware LSI RAID controller and a RAID6 setup over 16 physical disks. One of the HDDs died last night, and I replaced it with a brand new (identical) drive this morning.

Problem

The command

/c0/u0 show

gives me

Unit     UnitType  Status         %RCmpl  %V/I/M  Port  Stripe  Size(GB)
------------------------------------------------------------------------
u0       RAID-6    REBUILDING     7%(A)   -       -     256K    6519.12   
u0-0     DISK      OK             -       -       p0    -       465.651   
u0-1     DISK      OK             -       -       p1    -       465.651   
u0-2     DISK      OK             -       -       p2    -       465.651   
u0-3     DISK      OK             -       -       p13   -       465.651   
u0-4     DISK      OK             -       -       p4    -       465.651   
u0-5     DISK      OK             -       -       p5    -       465.651   
u0-6     DISK      OK             -       -       p6    -       465.651   
u0-7     DISK      OK             -       -       p7    -       465.651   
u0-8     DISK      OK             -       -       p8    -       465.651   
u0-9     DISK      OK             -       -       p9    -       465.651   
u0-10    DISK      OK             -       -       p10   -       465.651   
u0-11    DISK      OK             -       -       p11   -       465.651   
u0-12    DISK      OK             -       -       p12   -       465.651   
u0-13    DISK      OK             -       -       p3    -       465.651   
u0-14    DISK      OK             -       -       p14   -       465.651   
u0-15    DISK      DEGRADED       -       -       p15   -       465.651   
u0/v0    Volume    -              -       -       -     -       6519.12 

At first this seriously concerned me (i.e. the new disk was bad too, or there was something wrong with the bay hardware), but the rebuild is progressing (I guess (A) means active) and show alarms doesn't report any errors:

c0   [Thu Aug 22 2013 23:14:32]  WARNING   Drive removed: port=15
c0   [Thu Aug 22 2013 23:14:32]  ERROR     Degraded unit: unit=0, port=15
c0   [Thu Aug 22 2013 23:14:32]  WARNING   Drive removed: port=15
c0   [Sun Aug 25 2013 08:53:27]  INFO      Drive inserted: port=15
c0   [Sun Aug 25 2013 08:54:33]  INFO      Rebuild started: unit=0

I wasn't managing the array the last time a disk failed, but last time the reports looked like this:

c0   [Thu Apr 11 2013 20:52:51]  WARNING   Drive removed: port=3
c0   [Thu Apr 11 2013 20:52:51]  ERROR     Degraded unit: unit=0, port=3
c0   [Thu Apr 11 2013 20:52:51]  WARNING   Drive removed: port=3
c0   [Fri Apr 12 2013 10:42:35]  INFO      Drive inserted: port=3
c0   [Fri Apr 12 2013 10:44:24]  INFO      Rebuild started: unit=0
c0   [Fri Apr 12 2013 15:10:21]  INFO      Rebuild completed: unit=0

So, judging by the alarms at least, what happened last time is happening again.

So: is it possible that "degraded" in this context just means the disk is being rebuilt and is therefore currently out of action? Or am I being brutally optimistic, and degraded-drive errors simply don't show up in show alarms?

UPDATE

As suggested, the disk rebuilt and everything seems A-OK!

Best Answer

Well, from the log everything looks OK, doesn't it? The rebuild has started, there are no errors, and the rebuild is still in progress. A DEGRADED status on the port being rebuilt is expected: the unit stays degraded until the rebuild completes, at which point the drive (and the unit) should return to OK, just as in the April log.
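If you want to track progress without eyeballing the full table each time, a small shell function can pull the rebuild percentage out of the same tw_cli output. This is a minimal sketch: the field positions are assumed from the column layout shown in the question, and parse_rebuild_pct is a hypothetical helper name, not a tw_cli feature.

```shell
#!/bin/sh
# Extract the rebuild percentage from a unit line of `tw_cli /c0/u0 show`.
# Column positions ($3 = Status, $4 = %RCmpl) are assumed from the
# output layout shown in the question.
parse_rebuild_pct() {
    # $1: one unit status line from the `show` output
    printf '%s\n' "$1" | awk '$3 == "REBUILDING" { sub(/%.*/, "", $4); print $4 }'
}

# Example, using the unit line from the question's output:
parse_rebuild_pct 'u0       RAID-6    REBUILDING     7%(A)   -       -     256K    6519.12'
# prints: 7
```

In practice you would feed it the live output, e.g. `tw_cli /c0/u0 show | while read -r line; do parse_rebuild_pct "$line"; done`, and re-run (or wrap in a cron job) until the percentage disappears and the unit reports OK.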
