Kenny,
Navisphere has the intelligence to know that an FC hotspare cannot substitute for a SATA drive (and vice versa). Hotspares should be created from the LARGEST-capacity disk of a given interface type (e.g. a system with 1 x 450GB 15K drive and 30 x 300GB 15K drives should use the 450GB drive as the hotspare). The recommended ratio of hotspares to data drives is 1:30 for FC and 1:15 for SATA. These are gross recommendations only, so you can flex this how you want.
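The ratio rule above is easy to sanity-check in a couple of lines. This is just a hypothetical helper illustrating the rule of thumb (1 spare per 30 FC data drives, 1 per 15 SATA, rounding up), not anything Navisphere provides:

```python
import math

# Assumed rule-of-thumb ratios from the recommendation above
RATIO = {"FC": 30, "SATA": 15}

def recommended_spares(interface, data_drives):
    """Hotspares suggested for a given count of data drives, rounded up."""
    return math.ceil(data_drives / RATIO[interface])

print(recommended_spares("FC", 31))    # 2 (31 drives tips past the 1:30 ratio)
print(recommended_spares("SATA", 15))  # 1
```

Remember the spares also need to come from the largest drive of that interface type, which the ratio alone doesn't capture.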
cheers,
dave graham
corp systems engineer - EMC Atmos
ethrbunny,
The VMware community post you mentioned in your edit is a bit outdated. Dell EqualLogic's current recommendations for RAID policy are laid out in the Choosing a Member RAID Policy document, though.
With 24x600GB disks, it's safe to presume you'll be using either 10K or 15K SAS drives. If that's the case, RAID 6, 10, and 50 are the recommended options. In line with what tim mentioned, an EqualLogic storage array with 24 drives only lets you choose one RAID type/policy per enclosure; the actual RAID sets are managed by the array on the back-end with no real user visibility through the GUI. You're correct to be wary of having that many drives in a single RAID set, which is why the array splits them into two separate RAID sets (better for both data protection/redundancy and performance).
RAID6 is by far the best RAID policy to select for data protection, assuming that you have the system in warranty and drive failures will be dealt with promptly rather than ignored. The aforementioned document details the statistical likelihood of data loss between the different policies available, and RAID6 is a clear win by this measure.
Performance-wise, RAID6 suffers greatly with random writes in comparison to RAID10. It also takes a greater performance hit during a failure/rebuild (though this is almost entirely negated by the copy-to-spare operation introduced in newer firmware revisions for handling preemptive failures).
If your current storage solution incorporates 16 or fewer drives of the same or lesser speed, I would nearly guarantee that a RAID6 policy would provide ample performance and IOPS for your needs in addition to the best capacity and protection level you can get on that array.
However, you could also consider setting up all of your volumes with thin provisioning, allocating a max capacity for each volume that gives you plenty of room to grow (even if that means over-allocating to some degree). Start with RAID10, get your full production environment in place, and then use the SAN Headquarters software provided by EqualLogic to measure your performance (feel free to contact support or a technical sales rep for more info on this - they're usually very helpful). If per-drive IOPS sits below 100 even at peak utilization, you can easily get away with converting to RAID6 to gain some extra capacity. The catch is that you cannot convert back from RAID6 to RAID10 without performing a factory reset on the array (which is only realistic in large multi-member environments), so make sure to do your research before making the switch.
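One way to reason about that "under ~100 IOPS per drive" check: translate a measured front-end workload into per-drive back-end IOPS under each policy's write penalty (2 back-end writes per front-end write for RAID10, 6 for RAID6). The workload numbers below are purely hypothetical; plug in what SAN HQ actually reports for your array:

```python
def per_drive_iops(read_iops, write_iops, drives, write_penalty):
    """Back-end IOPS each drive absorbs: reads pass through once,
    each write costs write_penalty back-end operations."""
    backend = read_iops + write_iops * write_penalty
    return backend / drives

# Hypothetical peak workload: 1500 reads/s + 500 writes/s over 22 data drives
print(per_drive_iops(1500, 500, 22, 2))  # RAID10: ~114 per drive
print(per_drive_iops(1500, 500, 22, 6))  # RAID6:  ~205 per drive
```

With these made-up numbers the same workload nearly doubles the per-drive load under RAID6, which is exactly why you'd want real measurements before converting.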
Summary
RAID type recommendation: RAID 6 (verify this w/ Dell after having your capacity needs evaluated)
Volumes: 4TB volume for your database + 3+ volumes (perhaps 2TB in size) for VMFS datastores (multiples recommended for various performance reasons), all with thin provisioning enabled
Note 1: RAID10 on this array would give you just under 6TiB of actual usable capacity, while RAID6 would give you just over 10TiB (possibly a touch lower for each after space taken up by the array's metadata)
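The capacity figures in Note 1 can be roughly reproduced with back-of-the-envelope math. The layout below is an assumption (2 hot spares, remaining 22 drives in two 11-drive sets, vendor-decimal 600 GB drives), not EqualLogic's exact internal arrangement, so the firmware's real numbers will differ somewhat:

```python
DRIVES = 24
SPARES = 2            # assumed spare count; the firmware decides the real layout
DRIVE_BYTES = 600e9   # drives are sold in decimal gigabytes
TIB = 2**40

data_drives = DRIVES - SPARES  # 22

def usable_tib(data_fraction):
    """Usable capacity in TiB given the fraction of space holding data."""
    return data_drives * DRIVE_BYTES * data_fraction / TIB

raid10 = usable_tib(0.5)            # mirroring keeps half the raw space
raid6 = usable_tib((11 - 2) / 11)   # 2 parity drives per assumed 11-drive set

print(f"RAID10 ~ {raid10:.1f} TiB")  # ~6.0 TiB
print(f"RAID6  ~ {raid6:.1f} TiB")   # ~9.8 TiB
```

The RAID6 estimate lands a bit under the quoted 10TiB because the parity overhead depends on the set sizes the array actually builds; treat this as a ballpark, not a quote.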
Note 2: These recommendations are all assuming you don't plan to make much use of the replication or snapshot features. If you do, you'll need to take the additional space requirements into consideration as well (making RAID6 an even more favorable option)
Best Answer
You'll need to look at the properties of each RAID group to see how much space has been used on it.
You can also get this info with naviseccli from the command line.
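Something along these lines should do it (the SP address and credentials are placeholders for your own; `getrg` with no RAID group number reports on every group, including its capacity fields):

```shell
# List all RAID groups, including raw/logical/free capacity (in blocks)
naviseccli -h 10.1.1.10 -user admin -password mypass -scope 0 getrg

# Trim the output down to just the capacity lines
naviseccli -h 10.1.1.10 -user admin -password mypass -scope 0 getrg | grep -i capacity
```

Capacities are reported in 512-byte blocks, so divide by 2097152 to get GiB.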