They haven't been around long enough in large enough quantities to develop an earned reputation. Flash wear is the really big one everyone is concerned about, which is why enterprise SSDs allocate so many blocks to the bad-block store. Anandtech has run several articles about SSDs over the last couple of months and they go into a lot of detail. From what I've read, stability problems are primarily in the consumer market, where corners are being cut to bring prices down out of orbit. The SSDs you can buy to put in your fibre channel arrays are a completely different class than the OCZ drives. There is perhaps a much larger stability divide between consumer-grade and enterprise SSDs than there is between consumer and enterprise SATA drives.
For more information about enterprise SSDs like the Intel X25, Anandtech has several articles on the subject. Their introductory article about the X25 practically gushed. On the desktop side, a recent article about the OCZ Vertex went into some detail about how bad the consumer side of the SSD market really was, and linked to another article where the problem was originally identified in the tech media. In short, consumer-grade SSDs were tweaked to provide massive sequential I/O numbers with little regard for actual usage patterns. The OCZ Vertex is a consumer-grade SSD that can approach the Intel for performance, but it requires babying to get there. Again, none of these have been on the market long enough for outright failure rates to really emerge. It has only been in the last, oh, 6-8 months that consumer SSDs have gotten cheap enough for mass adoption.
Update 6/2011
Two years later, and we do have some feel for this now. However, how they're used has evolved. SSDs are used in areas where outright performance can't be economically met with disks, so comparing reliability is something of an apples-to-pears comparison. Servers that only need a small amount of storage usually don't need high performance from it either, so rotational magnetic media is still used most of the time.
That said, some comparisons can be drawn. SSDs are typically used in large storage arrays as the highest tier of performance. In this role I've heard anecdotal reports that SSDs last a lot less time than the disks in those same arrays; on the order of 10-18 months. This is reflected in the warranties the big storage vendors allow on SSDs.
This may look like "a lot less reliable", but in reality you have to look at it the right way. Modern top-tier SSDs can handle I/O operations per second into the six digits these days; matching the performance of even one such drive with 15K RPM disks would take well over a hundred spindles. More mid-grade SSDs can do 30-50K IOPS, which is still the equivalent of over a hundred 15K disks. Modern disk I/O systems can't keep up with speeds like this, which is why the big array vendors only allow a few SSDs per array relative to disks; they simply can't eke enough performance out of the entire system to keep those things fed.
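As a rough sketch of that spindle math (the per-device IOPS figures below are ballpark assumptions, not measured numbers):

```python
# Back-of-envelope spindle math; every figure here is an assumed ballpark.
HDD_15K_IOPS = 200        # roughly what one 15K RPM spindle sustains on random I/O
MID_SSD_IOPS = 40_000     # mid-grade SSD, somewhere in the 30-50K range
TOP_SSD_IOPS = 120_000    # top-tier SSD, six-digit IOPS

print(MID_SSD_IOPS / HDD_15K_IOPS)  # ~200 spindles to match one mid-grade SSD
print(TOP_SSD_IOPS / HDD_15K_IOPS)  # ~600 spindles to match one top-tier SSD
```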
So in reality, we're comparing a brace of (for example) 8 mid-grade SSDs versus 250 15K drives. Since this is enterprise storage, give them an 80% duty cycle. In the first year a couple of those 15K drives will almost certainly fail and need replacement, possibly as many as 20. Anecdotally, half of the SSDs will fail. Looked at that way, as failure rate for the performance given, SSDs still aren't up to HDs. Looked at from an economic point of view, each SSD is doing the work of 31.25 HDs, so SSDs are markedly cheaper for the performance given and the increased failure rate is more acceptable; the replacement rate is still probably cheaper in the long run.
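Putting rough numbers on that first-year comparison (the failure counts are the anecdotal estimates above, not vendor statistics):

```python
# Hypothetical first-year numbers for the 8-SSD vs 250-spindle example above.
ssds, hdds = 8, 250
ssd_failures = 4           # anecdotal: roughly half the SSDs fail
hdd_failures = 20          # upper end of the estimate for the 15K drives

print(hdds / ssds)               # 31.25 -> spindles each SSD stands in for
print(ssd_failures / ssds)       # 0.50  -> SSD annual failure rate
print(hdd_failures / hdds)       # 0.08  -> HD annual failure rate
```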
Looking at it another way, as a direct apples-to-apples comparison where you subject the same two devices to identical I/O loads over a period of time, SSDs are more reliable these days. Take a 15K drive and a mid-grade SSD (50K IOPS) and give them both a steady diet of 180 I/O ops per second, and it is more likely that the SSD will make it to 5 years without fault than the HD. It's a statistical dance to be sure, but that's where things are going now.
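The intuition behind that is utilization: under the same 180 IOPS workload (per-device capabilities below are assumptions), the disk is running close to flat out while the SSD is barely loaded.

```python
# Same offered load to both devices; capability figures are assumptions.
workload = 180             # steady I/O ops per second
hdd_capability = 200       # roughly one 15K spindle's random-I/O ceiling
ssd_capability = 50_000    # the mid-grade SSD from the example above

print(workload / hdd_capability)   # 0.90   -> the disk runs near saturation
print(workload / ssd_capability)   # 0.0036 -> the SSD barely notices the load
```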
Hard drives still have the edge in drive-unit failure rate per GB of storage provided. However, this is not a market segment in which SSDs are intended to be competitive.
Unless your web server is much busier or larger than is typical, the performance difference shouldn't be significant. One of the few areas where an SSD may outperform is when you're serving tens to hundreds of millions of files through the web and they're all on the same file-system. In that case the low latency of an SSD will shine through.
In the more typical "tens to hundreds of thousands" serving case the increased latency of the SAN should be markedly less than the latency already incurred by serving over the WAN. If you are concerned about latency, make sure your SAN contains lots of spindles instead of lots of space.
Best Answer
I can speak to the specifics of what you're trying to accomplish. Honestly, I would not consider an entry-level HP P2000/MSA2000 for your purpose.
These devices have many limitations and, from a SAN feature-set perspective, are nothing more than a box of disks: no tiering, no intelligent caching, a maximum of 16 disks in a Virtual Disk group, low IOPS capabilities, and poor SSD support (especially on the unit you selected).
You would need to step up to the HP MSA2040 to see any performance benefit or official support with SSDs. Plus, do you really want to use iSCSI?
DAS may be your best option if you can tolerate local storage. PCIe flash storage will come in under your budget, but capacity will need to be planned carefully.
Can you elaborate on the specifications of your actual servers? Make/model, etc.
If clustering is a must-have, another option is the HP MSA2040 unit, but with SAS connectivity instead of iSCSI. This is less costly than the other models, allows you to connect 4-8 servers, offers low latency and great throughput, and can still support SSDs. Even with the Fibre or iSCSI models, this unit would give you more flexibility than the one you linked.