First of all, LVM configuration and RAID settings should be two independent decisions. Use RAID to set up redundancy and tune performance; use LVM to build the volumes you need from the logical disks the RAID controller provides.
RAID0 should not appear in your vocabulary. It is only acceptable as a way to build fast storage for data that nobody cares about if it blows up. The need for it is largely alleviated by the speed of SSDs (an enterprise-class SSD can do 10+ times more IOPS than the fastest SAS hard disk, so there's no longer a need to spread the load over multiple spindles), and, should you ever need it, you can achieve the same result with LVM striping, which gives you much more flexibility.
RAID1 or RAID10 doesn't make much sense with SSDs either: because they are already much faster than regular disks, you don't need to sacrifice 50% of your space in exchange for performance.
RAID5, therefore, is the most appropriate solution. You lose one disk's worth of space to parity (1/6th of a six-disk group, or 1/4th of a four-disk group), but gain redundancy and peace of mind.
As for LVM, it's up to you to decide how to use the space you get after creating your RAID groups. You should use LVM as a rule, even in its simplest configuration of mapping one PV to one VG to one LV, just in case you need to make changes in the future. Besides, fdisk is so 20th century! In your specific case, since it will most likely be a single RAID group spanning all disks in the server, you won't be joining multiple PVs in a VG, so striping and concatenation don't figure in your setup. But in the future, if you move to larger external arrays (and I have the feeling that eventually you will), you'll have those capabilities at your disposal, with minimal changes to your existing configuration.
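That simplest one-PV/one-VG/one-LV configuration can be sketched as follows. This is illustrative only: `/dev/sdb` stands in for whatever logical disk your RAID controller exposes, and the volume names are arbitrary.

```shell
# Initialize the RAID logical disk as an LVM physical volume
pvcreate /dev/sdb

# Create one volume group on top of it (name is up to you)
vgcreate vg_data /dev/sdb

# Carve out one logical volume using all free space
lvcreate -l 100%FREE -n lv_data vg_data

# Make a filesystem and mount it as usual
mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /srv/data
```

The payoff comes later: when you add another PV, `vgextend` and `lvextend` let you grow this volume without repartitioning anything.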
Potential Issues
I have a couple of points of issue with using SSDs for production databases at the present time:
- The majority of database transactions on the majority of websites are reads, not writes. As Dave Markle said, you maximize this performance with RAM first.
- SSDs are new to the mainstream and enterprise markets, and no admin worth his salt is going to move a production database that currently requires 15K RPM U320 disks in RAID5 communicating via Fibre Channel to an unproven technology.
- The cost of the research and testing of moving to this new technology, vetting it in their environment, updating the operating procedures, and so forth is a larger up front cost, both in terms of time and money, than most shops have to spare.
Proposed Benefits
That said, there are a number of items, at least on paper, in favor of SSDs in the future:
- Lower power consumption compared to a HDD
- Much lower heat generation
- Higher performance per watt compared to a HDD
- Much higher throughput
- Much lower latency
- Most current-generation SSDs have on the order of millions of cycles of write endurance, so write endurance is not the issue it once was. See a somewhat dated article here.
So for a given performance benchmark, when you factor total cost of ownership including direct power and indirect cooling costs, the SSDs could become very attractive. Additionally, depending on the particulars of your environment, the reduction in the number of required devices for a given level of performance could also result in a reduction of staffing requirements, reducing labor costs.
Cost and Performance
You've added that you have a cost constraint under $50K USD and you really want to keep it under $10K. You've also stated in a comment that you can get some "cheap" SSDs, alluding to the SSDs being cheaper than the DBAs or consultants. This may be true depending on the number of hours you would need a DBA and whether it is a recurring cost or not. I can't do the cost analysis for you.
However, one thing you must be very careful of is the kind of SSD you get. Not all SSDs are created equal. By and large, the "cheap" SSDs you see for sale in the $200-400 range (as of 2008/11/20) are intended for low power/heat environments like laptops. These drives actually have lower performance than a 10K or 15K RPM HDD - especially for writes. The enterprise-level drives that have the killer performance you speak of - like the Mtron Pro series - are quite expensive. Currently they are around:
- 400 USD for 16GB
- 900 USD for 32GB
- 1400 USD for 64GB
- 3200 USD for 128GB
Depending on your space, performance, and redundancy requirements, you could easily blow your budget.
For example, if your requirements necessitated a total of 128GB of available storage, then RAID 0+1/10, or RAID 5 with 1 hotspare, would be ~$5,600.
If you needed a TB of available storage, however, then RAID 0+1/10 would be ~$51K and RAID 5 with 2 hotspares would be ~$32K.
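The arithmetic behind those estimates can be sketched like this, using the Mtron price list above. The assumptions are mine: RAID 10 doubles the data-drive count, RAID 5 adds one parity drive plus any hot spares, and drive counts round up to whole drives - so the 1TB RAID 5 total comes out slightly above the ~$32K quoted, depending on how you round.

```python
import math

def raid10_cost(usable_gb, drive_gb, drive_cost):
    # RAID 10 mirrors everything: twice the data drives
    data_drives = math.ceil(usable_gb / drive_gb)
    return 2 * data_drives * drive_cost

def raid5_cost(usable_gb, drive_gb, drive_cost, hotspares=0):
    # RAID 5 needs one extra drive for parity, plus any hot spares
    data_drives = math.ceil(usable_gb / drive_gb)
    return (data_drives + 1 + hotspares) * drive_cost

# 128GB usable, built from 64GB drives at $900... no - $1400 each
print(raid10_cost(128, 64, 1400))               # 4 drives -> 5600
print(raid5_cost(128, 64, 1400, hotspares=1))   # 4 drives -> 5600

# 1TB usable, built from 128GB drives at $3200 each
print(raid10_cost(1024, 128, 3200))             # 16 drives -> 51200
print(raid5_cost(1024, 128, 3200, hotspares=2)) # 11 drives -> 35200
```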
Big Picture
That said, the installation, configuration, and maintenance of a large production database requires a highly skilled individual. The data within the DB, and the services provided from that data, are of extremely high value to companies with this level of performance requirements. Additionally, there are many things that just cannot be solved by throwing hardware at the problem. An improperly configured DBMS, a poor database schema, or a bad indexing strategy can *wreck* a DB's performance. Just look at the issues Stack Overflow experienced in their migration to SQL Server 2008 here and here. The fact of the matter is, a database is a strenuous application on not only disk but RAM and CPU as well. Balancing that multivariate performance problem along with data integrity, security, redundancy, and backup is a tricky bit.
In summary, while I do think any and all improvements to both the hardware and software technology are welcomed by the community, large scale database administration - like software development - is a hard problem and will continue to require skilled workers. A given improvement may not reap the labor reduction costs you or a company might hope for.
A good jumping-off point for some research might be Brent Ozar's website/blog here. You might recognize his name - he's the one who assisted the Stack Overflow crew with their MS SQL Server 2008 performance issues. His blog and the resources he links to offer quite a bit of breadth and depth.
Update
Stack Overflow themselves are going the consumer-SSD route for their storage. Read about it here: http://blog.serverfault.com/post/our-storage-decision/
Best Answer
They haven't been around long enough, in enough quantities, to develop an earned reputation. Flash wear is the really big one everyone is concerned about, which is why enterprise SSDs allocate so many blocks to the bad-block store. Anandtech has run several articles about SSDs over the last couple of months, and they go into a lot of detail. From what I've read, stability problems are primarily in the consumer market, where corners are being cut to bring prices down out of orbit. The SSDs you can buy to put in your Fibre Channel arrays are a completely different class than the OCZ drives. There is perhaps a much larger stability divide between consumer-grade SSDs and enterprise SSDs than there is between consumer SATA drives and enterprise SATA drives.
For more information about enterprise SSDs like the Intel X25, Anandtech has several articles. Their introductory article about the X25 practically gushed. On the desktop side, a recent article about the OCZ Vertex went into some detail about how bad the consumer side of the SSD market really was, and linked to another article where the problem was originally identified in the tech media. In short, consumer-grade SSDs were tweaked to provide massive sequential I/O numbers with little regard to actual usage patterns. The OCZ Vertex is a consumer-grade SSD that can approach the Intel for performance, but it requires babying to get there. Again, none of these have been on the market long enough for outright failure rates to really emerge. It has only been in the last, oh, 6-8 months that consumer SSDs have gotten cheap enough for mass adoption.
Update 6/2011
Two years later, and we have a better feel for this now. However, how SSDs are used has evolved. They are used in areas where outright performance can't be economically met with disks, so comparing reliability is something of an apples-to-pears comparison. Servers that need only a small amount of storage usually don't also need high performance on that storage, so rotational magnetic media is still used most of the time.
That said, some comparisons can be drawn. SSDs are typically used in large storage arrays as the highest performance tier. In this role I've heard anecdotal reports that SSDs don't last nearly as long as the rotating disks in those arrays - on the order of 10-18 months. This is reflected in the warranties the big storage vendors offer on SSDs.
This may look like "a lot less reliable", but in reality you have to look at it the right way. Modern top-tier SSDs can handle I/O operations per second into the six digits; matching the performance of even one such drive with 15K RPM disks would take well over a hundred spindles. More mid-grade SSDs can do 30-50K IOPS, which is still over a hundred 15K disks' worth. Modern disk I/O systems can't keep up with speeds like this, which is why the big array vendors only allow a few SSDs per array relative to disks; they simply can't eke enough performance out of the entire system to keep those things fed.
So in reality, we're comparing a brace of (for example) 8 mid-grade SSDs against 250 15K drives. Since this is enterprise storage, give them an 80% duty cycle. In the first year, a couple of those 15K drives will definitely fail and require replacement, possibly up to 20. Anecdotally, half of the SSDs will fail. Looked at this way - failure rate for a given level of performance - SSDs still aren't up to HDs. Looked at from an economic point of view, each SSD is doing the work of 31.25 HDs, so SSDs are markedly cheaper for the performance given, and the increased failure rate is more acceptable since the replacement rate is still probably cheaper in the long run.
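The spindle arithmetic behind that comparison is worth making explicit. The figures here are the rough anecdotal numbers from the discussion above, not measurements: 40K IOPS stands in for a mid-grade SSD (the 30-50K range), and ~180 IOPS is an assumed ballpark for a single 15K RPM drive.

```python
SSD_IOPS = 40_000   # mid-grade SSD, middle of the 30-50K range above
HDD_IOPS = 180      # rough assumption for one 15K RPM drive

# Spindles needed to match one mid-grade SSD on IOPS alone
spindles_per_ssd = SSD_IOPS / HDD_IOPS
print(round(spindles_per_ssd))   # 222 -> "over a hundred 15K disks"

# The 8-SSD vs 250-HDD example from the text
ssds, hdds = 8, 250
print(hdds / ssds)               # 31.25 HDs of performance per SSD
```

This is why "half the SSDs failed" isn't as damning as it sounds: each failed SSD took the workload of dozens of spindles with it.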
Looking at it another way - a direct apples-to-apples comparison, where you subject the same two devices to identical I/O loads over a period of time - SSDs are more reliable these days. Take a 15K drive and a mid-grade SSD (50K IOPS) and give them both a steady diet of 180 I/O ops per second, and it is more likely that the SSD will make it to 5 years without fault than the HD. It's a statistical dance to be sure, but that's where things are going now.
Hard drives still have the edge in drive-unit failure rate per GB of storage provided. However, this is not a market segment in which SSDs are intended to be competitive.