We have a Supermicro server that comes with an Intel RAID controller (ICH9 etc.).
I enabled RAID in the BIOS and created a RAID 1 array with two member disks (1 TB SATA).
The installation went fine, but the output of df -h and fdisk -l looks strange to me.
How come CentOS still sees two disks (sda and sdb)?
Just wondering: do I have RAID 1 running or not? If one disk fails, can I just plug in a new one with no downtime for the server? Thanks.
Output:
[root@w11 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      854G  1.5G  809G   1% /
/dev/mapper/ddf1_4c53492020202020100000601000101347114711a3c8ce6cp1
                       99M   20M   75M  21% /boot
tmpfs                  24G     0   24G   0% /dev/shm
[root@w11 ~]# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14      121454   975474832+  8e  Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14      121454   975474832+  8e  Linux LVM
Best Answer
It looks like you're probably using FakeRaid (sorry, the link is to an Ubuntu site, but the description is still relevant). In my experience, fakeraid often leaves the physical disks visible to the OS, whereas most true hardware RAID controllers don't. The /dev/mapper/ddf1_... device in your df output is another strong hint: that naming comes from dmraid, the tool CentOS uses to activate BIOS (DDF-format) RAID sets.
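To confirm, you can ask dmraid directly what it sees; a quick sketch, assuming the dmraid package is installed (these commands only read state, they change nothing):

```shell
# List discovered BIOS RAID sets with their type and status
# (for your setup this should report a raid1 set, status "ok")
dmraid -s

# Show which physical block devices are members of each set
# (you should see both /dev/sda and /dev/sdb listed)
dmraid -r
```

If dmraid -s reports the set's status as ok, the mirror is active; if a member has dropped out, the status will say so.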
But df's output does seem to confirm that you're using RAID--or at least some sort of disk abstraction layer, since you haven't mounted sdaX or sdbX directly.
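You can also inspect the device-mapper layer itself to see what that abstraction is; a sketch using dmsetup (the device name below is shortened, use the full ddf1_... name from your df output):

```shell
# List all active device-mapper devices; the ddf1_* set and the
# VolGroup00 LVM volumes should all appear here
dmsetup ls

# Dump the mapping tables; a "mirror" target spanning two devices
# for the ddf1_* entry confirms it really is a RAID 1 mapping
dmsetup table
```

As for hot-swapping: with fakeraid the rebuild is handled by the driver/BIOS rather than a dedicated controller, so whether you can replace a failed disk without downtime depends on your backplane and chipset; many ICH9-era setups require a reboot into the BIOS utility to kick off the rebuild.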