The existing answers are quite outdated. Here in 2020, it's now possible to grow an mdadm software RAID 10 simply by adding 2 or more same-sized disks.
Creating the example RAID 10 array
For testing purposes, instead of physical drives, I created 6x 10GB LVM volumes, /dev/vg0/rtest1 to rtest6 - which mdadm had no complaints about.
# Using the thinpool lvthin on VG vg0 - I created 6x 10G volumes
lvcreate -T vg0/lvthin -V 10G -n rtest1
lvcreate -T vg0/lvthin -V 10G -n rtest2
...
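The repeated lvcreate calls elided above can be written as a loop instead; a sketch, assuming the same vg0/lvthin thin pool:

```shell
# Create six 10G thin volumes rtest1..rtest6 on the vg0/lvthin pool
for i in $(seq 1 6); do
    lvcreate -T vg0/lvthin -V 10G -n "rtest$i"
done
```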
Next, I created a RAID 10 mdadm array using the first 4 rtestX volumes:
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/vg0/rtest[1-4]
Using mdadm -D (equivalent to --detail), we can see the array has 4x "drives", with a capacity of 20GB out of the 40GB of volumes, as expected for RAID 10.
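That capacity follows from the RAID 10 layout: with the default near=2 layout, every block is stored twice, so usable space is (number of devices × device size) / 2. A quick sketch of the arithmetic, using the Used Dev Size that mdadm reports above:

```python
def raid10_capacity_kib(n_devices: int, dev_size_kib: int, copies: int = 2) -> int:
    """Usable capacity in KiB of an mdadm RAID10 with the near=N layout."""
    return n_devices * dev_size_kib // copies

DEV_SIZE_KIB = 10476544  # "Used Dev Size" from mdadm -D (~10 GiB per device)

print(raid10_capacity_kib(4, DEV_SIZE_KIB))  # 20953088 KiB - matches the 4-disk Array Size
print(raid10_capacity_kib(6, DEV_SIZE_KIB))  # 31429632 KiB - matches the 6-disk Array Size
```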
root@host ~ # mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Nov 20 09:02:39 2020
Raid Level : raid10
Array Size : 20953088 (19.98 GiB 21.46 GB)
Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Nov 20 09:04:24 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : resync
Name : someguy123:0 (local to host someguy123)
UUID : e49ab53b:c66321f0:9a4e272e:09dc25b1
Events : 23
Number Major Minor RaidDevice State
0 253 9 0 active sync set-A /dev/dm-9
1 253 10 1 active sync set-B /dev/dm-10
2 253 11 2 active sync set-A /dev/dm-11
3 253 12 3 active sync set-B /dev/dm-12
Expanding the RAID 10 with 2 new equal-sized volumes/disks
To grow the array, first --add the pair(s) of disks to the array, then use --grow --raid-devices=X (where X is the new total number of disks in the RAID) to request that mdadm reshape the RAID 10 to use the 2 spare disks as part of the array.
mdadm --add /dev/md0 /dev/vg0/rtest5 /dev/vg0/rtest6
mdadm --grow /dev/md0 --raid-devices=6
Monitor the resync process
Here's the boring part - waiting anywhere from minutes to hours, days, or even weeks, depending on how big your RAID is, for mdadm to finish reshaping around the new drives.
If we check mdadm -D, we can see the RAID is currently reshaping.
mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Nov 20 09:02:39 2020
Raid Level : raid10
Array Size : 20953088 (19.98 GiB 21.46 GB)
Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Fri Nov 20 09:15:05 2020
State : clean, reshaping
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : resync
Reshape Status : 0% complete
Delta Devices : 2, (4->6)
Name : someguy123:0 (local to host someguy123)
UUID : e49ab53b:c66321f0:9a4e272e:09dc25b1
Events : 31
Number Major Minor RaidDevice State
0 253 9 0 active sync set-A /dev/dm-9
1 253 10 1 active sync set-B /dev/dm-10
2 253 11 2 active sync set-A /dev/dm-11
3 253 12 3 active sync set-B /dev/dm-12
5 253 14 4 active sync set-A /dev/dm-14
4 253 13 5 active sync set-B /dev/dm-13
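For a lighter-weight progress check than the full --detail output, the kernel's md status file shows the reshape percentage and an ETA on a single line:

```shell
# Poll the kernel's md status every 10 seconds; the reshape line shows
# percent complete, speed, and an estimated time to finish
watch -n 10 cat /proc/mdstat
```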
Enjoy your larger RAID10 array!
Once mdadm finishes reshaping, we can see that the array size is now ~30G instead of ~20G, meaning the reshape was successful and relatively painless to do :)
mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Nov 20 09:02:39 2020
Raid Level : raid10
Array Size : 31429632 (29.97 GiB 32.18 GB)
Used Dev Size : 10476544 (9.99 GiB 10.73 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Fri Nov 20 09:25:01 2020
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : resync
Name : someguy123:0 (local to host someguy123)
UUID : e49ab53b:c66321f0:9a4e272e:09dc25b1
Events : 93
Number Major Minor RaidDevice State
0 253 9 0 active sync set-A /dev/dm-9
1 253 10 1 active sync set-B /dev/dm-10
2 253 11 2 active sync set-A /dev/dm-11
3 253 12 3 active sync set-B /dev/dm-12
5 253 14 4 active sync set-A /dev/dm-14
4 253 13 5 active sync set-B /dev/dm-13
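Note that only the array itself has grown - any filesystem sitting on it still needs to be expanded, and the new geometry should be recorded for boot-time assembly. A sketch, assuming an ext4 filesystem on /dev/md0 and a Debian/Ubuntu system (adjust for your setup):

```shell
# Grow the filesystem to fill the enlarged array (ext4 shown; XFS uses
# xfs_growfs instead). resize2fs can grow ext4 online, while mounted.
resize2fs /dev/md0

# Record the new array geometry so it assembles correctly at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u   # Debian/Ubuntu; other distros regenerate the initramfs differently
```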
According to SMART, the drive might be failing; try rebuilding again, and if the reallocated sector count grows, it's definitely bad.
The ST1000DM003 is not a supported drive - see the compatibility report. Also, in my experience, these drives have some firmware/compatibility problems.
Generally, the Adaptec 5 Series is quite problematic compatibility-wise: in some cases the workaround is to connect the drives directly, without a backplane, and in some cases they stop failing when the drives are switched to 1.5 Gbps (via drive jumpers).
Use drives from the compatibility list, and don't forget to upgrade the drive firmware.
P.S. You've got write cache enabled on the drives, but disabled on the controller.
Best Answer
This depends on the specific implementation. HP and Dell controllers will let you grow most RAID levels by just adding disks. You can even convert between certain RAID levels. All online without downtime.
Some implementations of software RAID do this in one form or another, some do not.
All modern filesystems that I can think of support online growing, so that's not a big deal - though in Windows, in many cases the volume to be extended and the free space must be contiguous.
So, in general, yes - it is technically possible. Are you able to do it? It depends on what specific RAID implementation you're using. Consult your manual or manpage.