The market for RAID controllers is fairly consolidated these days. Three broad-brush heuristics can be applied:
Price
Take a look at the pricing for genuine RAID cards from Areca, 3ware, Adaptec and LSI. Anything that is much, much cheaper than these controllers is a 'fake RAID'. Remember: if it seems too good to be true, it probably is.
Manufacturer
Only a fairly limited number of manufacturers actually make true hardware RAID controllers these days. Chances are that something not made by one of the main manufacturers of such kit is a 'fake RAID'. The main outfits that make RAID controllers are Adaptec, LSI, Areca, Intel and HighPoint (possibly one or two others that I can't recall off the top of my head).
Specifications
The main outfits that produce RAID cards/controllers will also document the specifications in some detail on their web sites. If you can't find a detailed specification for a card, buy one that you can find such a spec for. Note that not all cards produced by these outfits are necessarily RAID controllers, but the specs on the web site should make this clear.
Batteries
Thanks to sh-beta for pointing this out: pretty much any hardware RAID controller worth buying will also offer the option of a battery-backed cache. 'Fake RAID' controllers have no cache RAM of their own; they use the machine's main RAM as a cache.
Note that IBM, Dell, HP and other server manufacturers also sell RAID controllers. In many cases these are rebadged components made by Adaptec or LSI.
If you want to buy a RAID controller on the cheap, identify some specific models of appropriate specification from various manufacturers' current and immediately previous generations. Then search for that particular model on eBay and buy it secondhand.
With linux softraid you can make a RAID 10 array with only two disks.
Device names used below:
md0 is the old array of type/level RAID1.
md1 is the new array of type/level RAID10.
sda1 and sdb2 are new, empty partitions (without data).
sda2 and sdc1 are old partitions (with crucial data).
Replace the names to fit your use case. Use e.g. lsblk to view your current layout.
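For example (the column selection here is just a suggestion, and the fallback message covers minimal systems where lsblk is not installed):

```shell
# Show block devices with the columns relevant to this procedure:
# device name, size, type (disk/part/raid1/raid10) and mount point.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT 2>/dev/null || echo "lsblk not available here"
```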
0) Backup, Backup, Backup, Backup oh and BACKUP
1) Create the new array (4 devices: 2 existing, 2 missing):
mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda1 missing /dev/sdb2 missing
Note that in this example layout sda1 has a missing counterpart and sdb2 has another missing counterpart. Your data on md1 is not safe at this point (effectively it is RAID0 until you add the missing members).
To view layout and other details of created array use:
mdadm -D /dev/md1
Note! You should save the layout of the array:
# View current mdadm config:
cat /etc/mdadm/mdadm.conf
# Add new layout (grep is to make sure you don't re-add md0):
mdadm --detail --scan | grep "/dev/md1" | tee -a /etc/mdadm/mdadm.conf
# Save config to initramfs (to be available after reboot)
update-initramfs -u
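The grep in the pipeline above can be sanity-checked against sample scan output before touching the real config. The ARRAY lines below are made-up examples of what `mdadm --detail --scan` prints; the UUIDs are fabricated for illustration:

```shell
# Hypothetical `mdadm --detail --scan` output with both arrays present:
scan="ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md1 metadata=1.2 name=host:1 UUID=11111111:11111111:11111111:11111111"
# The grep keeps only the md1 line, so md0 is not re-added to mdadm.conf:
echo "$scan" | grep "/dev/md1"
```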
2) Format and mount. /dev/md1 should be immediately usable, but it needs to be formatted and then mounted.
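A minimal sketch, assuming ext4 and a mount point of /mnt/newraid (both are arbitrary choices, not part of the original procedure); the guard keeps the snippet from doing anything on a machine that has no /dev/md1:

```shell
dev=/dev/md1        # the new array
mnt=/mnt/newraid    # example mount point; any empty directory works
if [ -b "$dev" ]; then
  mkfs.ext4 "$dev"  # destroys anything on md1; it should be empty at this step
  mkdir -p "$mnt"
  mount "$dev" "$mnt"
else
  echo "skipping: $dev not present"
fi
```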
3) Copy files. Use e.g. rsync to copy data from the old RAID 1 to the new RAID 10. (This is only an example command; read the man pages for rsync.)
rsync -arHx / /where/ever/you/mounted/the/RAID10
4) Fail the 1st part of the old RAID1 (md0) and add it to the new RAID10 (md1):
mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md1 --add /dev/sda2
Note! This will wipe out the data on sda2. md0 should still be usable, but only if its remaining member was fully operational. Also note that this will start a sync/recovery process on md1. To check its status use one of the commands below:
# status of sync/recovery
cat /proc/mdstat
# details
mdadm -D /dev/md1
Wait until recovery is finished.
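The wait can be scripted by polling /proc/mdstat. The sample content below is an illustrative rebuild line with made-up numbers; in real use you would read `cat /proc/mdstat` in a loop instead of the variable:

```shell
# Illustrative /proc/mdstat excerpt while md1 rebuilds (numbers are invented):
mdstat='md1 : active raid10 sda2[4] sdb2[2] sda1[0]
      [=>...................]  recovery =  8.1% (1054272/12973440) finish=9.8min speed=20135K/sec'
# In a real loop: mdstat=$(cat /proc/mdstat); sleep 60; and repeat.
if printf '%s\n' "$mdstat" | grep -q 'recovery'; then
  echo "still recovering"
else
  echo "recovery finished"
fi
```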
5) Install GRUB on the new array (assuming you're booting from it). A Linux rescue/boot CD works best for this.
6) Boot from the new array. IF IT WORKED CORRECTLY, destroy the old array and add the remaining disk to the new array.
POINT OF NO RETURN
At this point you will destroy data on the last member of the old md0 array. Be absolutely sure everything is working.
mdadm --stop /dev/md0
# After --stop the md0 device no longer exists, so you can't --remove from it;
# clear the old RAID1 metadata from the disk directly instead:
mdadm --zero-superblock /dev/sdc1
mdadm /dev/md1 --add /dev/sdc1
And again, wait until recovery on md1 is finished.
# status of sync/recovery
cat /proc/mdstat
# details
mdadm -D /dev/md1
7) Update mdadm config
Remember to update /etc/mdadm/mdadm.conf (remove the md0 entry).
And save config to initramfs (to be available after reboot)
update-initramfs -u
Best Answer
The commands I used to convert my RAID10 from IMSM to software RAID: