The write performance on that particular controller is usually poor unless you also have the battery-backed cache unit. In addition, reconfiguring the array as RAID 1+0 would give you the same amount of space and better overall performance.
Are you testing this from the ESXi console or from within a VM?
What you're seeing are the base disks attached to your machine: sda and sdb are the old ones, and sdc is the new one. This is normal.
However, you're using a fakeRAID from a motherboard controller that does not make its own hardware abstractions. It instead provides an interface that allows a driver (installed on the OS) to manage RAID. This has all of the drawbacks of a software RAID with all of the drawbacks of a hardware RAID.
As a result of this, you'll see all of the disks as they lie in the machine. However, the fakeRAID driver for your motherboard (IF installed and working) will create addressable RAID abstractions on top of the base disks. Because of this, while you can see sda and sdb, you should not be using them directly. You should instead be using the RAID abstraction, which will be presented as a block device of another name (such as /dev/disk/intr0).
HOWEVER, I see no evidence of this RAID abstraction having been created. It is almost certain that while you have a RAID set up in the BIOS, you do not have the necessary driver installed to actually DO anything with that configuration. The result is that it simply does nothing (and you're using /dev/sda as a single disk). You're not actually running a RAID, as far as I can tell, and you did provide enough information to determine that.
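If you want to sanity-check that, and assuming the dmraid userspace tool happens to be installed on your system, something along these lines will show whether any fakeRAID sets were ever discovered or activated (the exact output depends on your controller's metadata format):
# Ask dmraid what RAID metadata it can find on the raw disks
dmraid -r
# Show any RAID sets it knows about (active or not)
dmraid -s
# Activated sets show up as device-mapper nodes
ls /dev/mapper/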
sdb is blank because it's not actually mirrored with sda, and sdc is just the new disk. I would recommend that you stop using motherboard fakeRAID entirely and use software RAID instead. Hardware RAID controllers are super crusty, and their prevalence is largely due to Windows not having had a proper software RAID system until relatively recently. Linux software RAID beats a hardware controller just about any day of the week, and has for a very long time.
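Purely as an illustration of the MD route (the device names below are placeholders, not your actual disks, and mdadm --create will destroy whatever is on them), a two-disk mirror looks roughly like this:
# Build a RAID 1 array from two bare disks -- this wipes them
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
# Watch the initial resync
cat /proc/mdstat
# Record the array so it assembles on boot (path may be /etc/mdadm/mdadm.conf on some distros)
mdadm --detail --scan >> /etc/mdadm.conf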
While Linux MD RAID is not included in XenServer 5.6 and onward, you do have LVM RAID (which sees far more support). You can add drives to a volume group (or storage pool or disk group, as some would call it) and then create logical volumes (basically partitions) that each have their own RAID policy when allocating and reading across any number of disks in that volume group. This is a great way to accomplish RAID, and it's even easier than using MD.
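As a rough sketch of that (device names, sizes, and volume names below are placeholders, and the lvcreate syntax for mirroring differs slightly between older and newer LVM2 releases):
# Mark the disks as LVM physical volumes
pvcreate /dev/sdX /dev/sdY
# Pool them into one volume group
vgcreate vg_data /dev/sdX /dev/sdY
# Carve out a mirrored (RAID 1) logical volume from the pool
lvcreate --type raid1 -m 1 -L 100G -n lv_mirror vg_data
Each logical volume you create afterwards can use a different policy (linear, striped, mirrored) across the same set of disks, which is what makes this approach so flexible.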
On top of all of this, I only just NOW realized that this question is from twenty-friggin'-twelve, but I refuse to undo all of this typing. Ideally, these words would help someone with their fakeRAID / software RAID woes. Use LVM. Profit. It's even the software RAID default now.
Best Answer
The devices should show up automatically in Linux under VMware. Check the output of:
dmesg | tail
If you've changed the size of the devices, you can rescan/recognize this with:
echo 1 > /sys/class/scsi_disk/0\:0\:0\:0/device/rescan
where you substitute the appropriate SCSI disk ID. For example:
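Assuming, hypothetically, that the disk you resized sits at SCSI ID 2:0:0:0 (list /sys/class/scsi_disk/ to find the real one on your system):
# See which SCSI disk IDs the kernel currently knows about
ls /sys/class/scsi_disk/
# Tell the kernel to re-read the capacity of that disk
echo 1 > /sys/class/scsi_disk/2\:0\:0\:0/device/rescan
# Confirm the kernel picked up the new size
dmesg | tail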