Linux – Replacing 2 RAID-1 disks with a larger pair under a 3ware controller

Tags: 3ware, expansion, hardware-raid, linux, migration

I have a 3ware 9650SE with 2x 2TB disks in a RAID-1 array.

I recently replaced the disks with 2 larger (3TB) ones, one by one, and the whole replacement went smoothly. The problem is that I don't know what else I have to do to make the system aware of the unit's increased size.

Some info:

root@samothraki:~# tw_cli /c0 show all

/c0 Model = 9650SE-4LPML
/c0 Firmware Version = FE9X 4.10.00.024
/c0 Driver Version = 2.26.02.014
/c0 Bios Version = BE9X 4.08.00.004
/c0 Boot Loader Version = BL9X 3.08.00.001

....

Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-1    OK             -       -       -       139.688   Ri     ON     
u1    RAID-1    OK             -       -       -       1862.63   Ri     ON

VPort Status         Unit Size      Type  Phy Encl-Slot    Model
------------------------------------------------------------------------------
p0    OK             u0   139.73 GB SATA  0   -            WDC WD1500HLFS-01G6 
p1    OK             u0   139.73 GB SATA  1   -            WDC WD1500HLFS-01G6 
p2    OK             u1   2.73 TB   SATA  2   -            WDC WD30EFRX-68EUZN0
p3    OK             u1   2.73 TB   SATA  3   -            WDC WD30EFRX-68EUZN0

Note that the disks p2 & p3 are correctly identified as 3TB, but the RAID-1 unit u1 still reports the old 2TB size.

I followed the procedure in the LSI 3ware 9650SE 10.2 codeset user guide (note: the 9.5.3 codeset user guide describes exactly the same procedure).

I triple-synced my data and unmounted the RAID unit u1. Next I removed the unit from the command line with:

tw_cli /c0/u1 remove

and finally I rescanned the controller so it would find the unit again:

tw_cli /c0 rescan

Unfortunately, the re-detected u1 unit still reports the 2TB size.

What could be wrong?

Some extra info: the u1 unit corresponds to /dev/sdb, which in turn is a physical volume in a larger LVM volume group. Now that I have replaced both drives, the partition table appears to be empty, yet the LVM volume works fine. Is that normal?!

root@samothraki:~# fdisk -l /dev/sdb 

Disk /dev/sdb: 2000.0 GB, 1999988850688 bytes
255 heads, 63 sectors/track, 243151 cylinders, total 3906228224 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

root@samothraki:~# 

Best Answer

You need to update the u1 unit size before you can grow the filesystem from within the OS. The OS will not "see" the new size until the 3ware controller exposes it.
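As a quick sanity check of what the OS currently sees (assuming the unit is still exposed as /dev/sdb), you can ask the kernel for the device size directly:

# blockdev --getsize64 /dev/sdb

Right now that should print the same ~2TB byte count as your fdisk output; after a successful expansion it should jump to roughly 3TB.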

Unit capacity expansion in 3ware terminology is called migration. I am certain it works for RAID-5 and RAID-6; I haven't tried it with RAID-1. Here is an example migration command to run:

# tw_cli /c0/u1 migrate type=raid1 disk=p2-p3
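The migration runs in the background; its progress should show up in the %RCmpl / %V/I/M columns of the unit listing you already posted, so you can poll it with:

# tw_cli /c0 show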

When this completes, fdisk -l /dev/sdb should report 3TB. From there you would grow the PV (after which vgdisplay <VG name> will show the extra free space), then the respective LV, and finally the filesystem within the LV, as sketched below.
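In case it helps, here is a minimal sketch of those LVM steps, assuming the PV sits directly on /dev/sdb (which the empty partition table suggests) and the LV holds an ext3/ext4 filesystem; <VG name> and <LV name> are placeholders for your actual names:

# pvresize /dev/sdb
# vgdisplay <VG name>
# lvextend -l +100%FREE /dev/<VG name>/<LV name>
# resize2fs /dev/<VG name>/<LV name>

pvresize grows the PV to the new device size, vgdisplay should then show the extra free extents, lvextend hands them to the LV, and resize2fs grows the filesystem (this can be done online for ext3/ext4; for XFS you would use xfs_growfs on the mounted filesystem instead).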

Edit: I think you are out of luck - see page 129 of the User Guide.
You could, however, migrate your RAID-1 to a different array type.

Here is an alternative (it carries some risk, so make sure your backups are good):

  1. tw_cli /c0/u1 migrate type=single - this will break your u1 unit apart into two single-drive units;
  2. tw_cli /c0/u1 migrate type=raid1 disk=2-3 - this should migrate your single unit back to RAID-1 with the correct size (see the sketch after this list).
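Make sure the first migration has fully completed before issuing the second command. A rough outline of the whole sequence (unit and port numbers are taken from the output above, so double-check them on your system):

# tw_cli /c0/u1 migrate type=single
# tw_cli /c0 show                              (repeat until u1 reports OK again)
# tw_cli /c0/u1 migrate type=raid1 disk=2-3
# tw_cli /c0 show                              (wait for this migration to finish as well)

Only after that run the pvresize / lvextend / resize2fs steps sketched above.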

Of course, there are alternative approaches to this; the one listed above is for the case where you want to keep your data online the whole time.