I have created a RAID10 array using mdadm. The metadata version is 1.2. Is it possible to convert this to a RAID0 array?
P.S. I have terabytes of data and want to avoid copying it over.
Tags: mdadm, raid10, software-raid
My understanding is that RAID1 doubles the writes, but can speed up reads a lot.
How large is your database and how fast can you make backups of it?
I also thought RAID0 (striping only) made it MORE likely that you'd have a problem, i.e. one volume is lost and it's goodbye data.
One of the unexpected I/O oddities of EC2 EBS volumes is that the first write to an EBS disk takes roughly 2x longer than any subsequent write. I fill the disk to about 50% with dd if=/dev/zero of=/newdisk/bigfile1 bs=1024M count=1024, then delete the big files; after that, writes run at more normal speeds.
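A minimal sketch of that pre-warming pass (the path, block size, and file count are just examples; repeat or enlarge the fill until you've touched as much of the volume as you want):
# touch fresh EBS blocks by filling part of the volume with zeroes
sudo dd if=/dev/zero of=/newdisk/bigfile1 bs=1024M count=1024
sync
# the fill file only exists to force that first write; remove it afterwards
sudo rm /newdisk/bigfile1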
This is a good EC2 developer forum thread on EC2 disk I/O specifics: https://forums.aws.amazon.com/thread.jspa?messageID=220860
And this post argues that EBS disks are a poor fit for RAID10: "Amazon EC2 Disk Performance" | AF-Design, Feb 27, 2009 - http://af-design.com/blog/2009/02/.../amazon-ec2-disk-performance/
First of all, to those who still believe that "RAID0 has no hot spare": it can have a manual spare, handled by a human who understands RAID levels and mdadm. mdadm is software RAID, so it can do a lot of interesting things.
Credits to Zoredache for the idea!
If downtime is acceptable, you can always just make a block copy of the disk with dd and reassemble the array; mdadm will handle it fine.
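For example, a rough sketch with placeholder device names (the array must be stopped so nothing writes to its members while they're copied):
# stop the array before copying its members
sudo mdadm --stop /dev/md0
# raw block copy of an old member onto its replacement (names are examples)
sudo dd if=/dev/sdb of=/dev/sde bs=64M status=progress
# reassemble the array using the new disk in place of the old one
sudo mdadm --assemble /dev/md0 /dev/sde /dev/sdc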
RAID0 -> RAID4 -> RAID0
In case you don't remember RAID4, it's simple. It has parity, but unlike RAID5 the parity is not distributed across the array; it lives on ONE disk. That is the whole point, it is important, and it is the reason RAID5 will not work here.
What you'll need: two more disks of the same size as the disk you would like to replace.
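A quick sanity check that the candidate disks really match the size of the member being replaced (device names are just examples):
# print exact sizes in bytes; they should be identical, or the spares larger
lsblk -b -o NAME,SIZE /dev/sdb /dev/sdd /dev/sde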
sudo mdadm -C /dev/md0 -l 0 -n 2 /dev/sd[bc]
md0 : active raid0 sdc[1] sdb[0]
2096128 blocks super 1.2 512k chunks
We've created a RAID0 array, and it looks sweet.
sudo md5sum /dev/md0
b422ba644a3c83cdf28adfa94cb658f3 /dev/md0
This is our checkpoint: if even one bit differs in the resulting /dev/md0, we've failed.
sudo mdadm /dev/md0 --grow --level=4
md0 : active raid4 sdc[1] sdb[0]
2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_]
So, we've grown our array to RAID4. We haven't added the parity disk yet, so let's do that. The grow is instant - there is nothing to recompute.
sudo mdadm /dev/md0 -a /dev/sdd
md0 : active raid4 sdd[3] sdc[1] sdb[0]
2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_]
[===>.................] recovery = 19.7% (207784/1048064) finish=0.2min speed=51946K/sec
We've added sdd as the parity disk. This is important to remember: the order of disks in the first line of the mdstat output is not synchronized with the status picture ([UU_]) in the second line! sdd is displayed first, but it is actually the last device, and it holds the parity, not data.
sudo mdadm /dev/md0 -f /dev/sdb
md0 : active raid4 sdd[3] sdc[1] sdb[0](F)
2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [_UU]
We've marked sdb as faulty so we can remove it in the next steps.
sudo mdadm --detail /dev/md0
State : clean, degraded
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 32 1 active sync /dev/sdc
3 8 48 2 active sync /dev/sdd
0 8 16 - faulty spare /dev/sdb
The details show the first disk as removed, and here we can see the true order of the disks in the array. It's important to keep track of the parity disk: we must not leave it in the array when going back to RAID0.
sudo mdadm /dev/md0 -r /dev/sdb
md0 : active raid4 sdd[3] sdc[1]
2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [_UU]
sdb is completely removed and can be taken away.
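If the removed disk will be reused elsewhere, you may also want to wipe its md metadata so nothing tries to auto-assemble it later (optional):
# erase the RAID superblock on the removed member
sudo mdadm --zero-superblock /dev/sdb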
sudo mdadm /dev/md0 -a /dev/sde
md0 : active raid4 sde[4] sdd[3] sdc[1]
2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [_UU]
[==>..................] recovery = 14.8% (156648/1048064) finish=0.2min speed=52216K/sec
We have added the replacement for our sdb disk. And here we go: the data from sdb is now being recovered using parity. Sweeeeet.
md0 : active raid4 sde[4] sdd[3] sdc[1]
2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/3] [UUU]
Done. Right now we are completely safe - all the data from sdb has been recovered, and now we have to remove sdd (remember, it holds the parity).
sudo mdadm /dev/md0 -f /dev/sdd
md0 : active raid4 sde[4] sdd[3](F) sdc[1]
2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_]
Made sdd faulty.
sudo mdadm /dev/md0 -r /dev/sdd
md0 : active raid4 sde[4] sdc[1]
2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_]
Removed sdd from our array. We are ready to become RAID0 again.
sudo mdadm /dev/md0 --grow --level=0 --backup-file=backup
md0 : active raid4 sde[4] sdc[1]
2096128 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [UU_]
[=>...................] reshape = 7.0% (73728/1048064) finish=1.5min speed=10532K/sec
Aaaaaaand bang!
md0 : active raid0 sde[4] sdc[1]
2096128 blocks super 1.2 512k chunks
Done. Let's look at the md5 checksum.
sudo md5sum /dev/md0
b422ba644a3c83cdf28adfa94cb658f3 /dev/md0
Any more questions? So RAID0 can have a hot spare. It's called the "user" ;)
Best Answer
Yes.
As of mdadm version 3.2.1, and running a "suitably recent kernel" (whatever that means, I'd guess at least 3.0), a reshape from RAID10 to RAID0 is possible. This means a pretty recent Linux distribution; the system that you're running on may need an upgrade, or you may need to temporarily boot to a live CD type of environment with newer tools to do the conversion.
To make the change, it'll be something along these lines:
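A sketch only, assuming the array is /dev/md0 with the default near-2 layout (check your own device name and layout with mdadm --detail, and have a backup regardless):
# confirm the tool version and the current array state first
mdadm --version
sudo mdadm --detail /dev/md0
# reshape in place from RAID10 to RAID0
sudo mdadm --grow /dev/md0 --level=0
# watch the reshape progress
cat /proc/mdstat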
And do keep in mind the caveats that have been mentioned. Running anything on RAID0 is incredibly risky; you will see a failure eventually.