That depends on a number of things, I would think. Striping two drives into one volume does increase the odds of losing data in a failure (if either drive dies, the whole array is gone), so a backup is a must.
It also depends on what data is written where and in what order. It will probably feel fast at first, because the OS and early-install stuff is striped across the SSD. Eventually larger data files will just hit the slower drive and the benefit will drop off a bit.
If you're just using it as a general-purpose home computer and aren't looking to increase reliability, then yeah, you'd get a hybrid machine that won't match a pure SSD in performance but will beat two 7200 RPM disks.
Have I done it before? No. I wouldn't want to decrease the reliability of my system and increase the odds of data loss for the sake of faster drive performance. Soon enough, using the system will just become the new performance baseline and I'll forget that it's "faster", and by the time I notice, newer drives and bus technologies will still leave me drumming my fingers, wishing my word processor launched a fraction of a second quicker.
Okay. This is an interesting question, as there are a number of options available to you.
Some concepts to clarify and understand, as they relate to this situation:
- Perceptions of "speed" or "fast".
- RAID controller performance.
- SAS topology.
- Benchmarking a system and/or identifying bottlenecks.
"In order to get the maximum performance, we really need each logical drive to run as fast as possible."
Storage performance is not always about bandwidth!! Latency, I/O read and write patterns, queuing, application behavior, caching, etc. are all factors. Given what you've described, you're nowhere near saturating the link to your storage.
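To make that concrete with rough, illustrative numbers: a small-block random workload can leave a fast link nearly idle.

    8 KB random reads at ~200 IOPS (typical of a single 10k spinning disk):
    200 IOPS × 8 KB ≈ 1.6 MB/s ≈ 13 Mbps, a fraction of a percent of one 6 Gbps SAS lane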
"The current HP server has a fairly low-end array card"
No it doesn't. The Smart Array P410i controller is the onboard controller on the G6 and G7 ProLiant servers. It performs just fine, as long as a battery-backed (BBWC) or flash-backed (FBWC) cache module is installed. It's limited to the internal bays of the server and has no SAS oversubscription. There are two SFF-8087 4-lane SAS connectors linking the motherboard to the backplane, each lane providing 6Gbps of full-duplex bandwidth.
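For a rough sense of the headroom that gives you:

    2 connectors × 4 lanes × 6 Gbps per lane = 48 Gbps aggregate between the controller and the backplane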
"Currently, we're looking at something like a D2600 with a high-end Smart Array card."
The other RAID controllers in HP's portfolio for that server generation perform similarly (Smart Array P411 and P812). They differ in that they provide more flexible or external connectivity. The D2600 enclosure would potentially be a step-down in raw throughput, depending on its configuration. However, it's absolutely the wrong choice for this setup, as it only accommodates large-form-factor 3.5" disks. The D2700 enclosure is the variant that houses small-form-factor 2.5" disks.
"SSMS Activity Monitor and Perfmon show that most of the time the server is waiting for the disk."
This is an issue with the single 120GB SATA SSD you're using. I have one sitting here. It's a low-end, slow-ass SSD. That's all. It maxes out at ~180 MB/s sequential and is just an overall poor performer. HP should not sell it! It's relatively low-latency compared to spinning disks, but it's terrible for what you're trying to do. It's made worse by the fact that you only have one drive; four of them would be acceptable.
I would recommend a pair of 400GB MLC HP Enterprise SSDs (made by Pliant/SanDisk) if you are not planning much growth beyond the 200GB you're using now; otherwise, four disks would be better. Unfortunately, they are not cost-effective ($2,800US+ each).
When I don't use the HP Enterprise SSDs and need to consider cost, I purchase the SandForce-based OWC Mercury Extreme Pro drives and place them in HP drive carriers. They work great, are inexpensive, and are a much better deal for the generation of hardware you're using. Use RAID 1+0 and follow HP's P410 SSD configuration guidelines (see the command sketch after the config listing below). I spend a lot of time with SSDs...
array B (Solid State SATA, Unused Space: 1012121 MB)

   logicaldrive 3 (400.0 GB, RAID 1+0, OK)

   physicaldrive 1I:1:3 (port 1I:box 1:bay 3, Solid State SATA, 480.1 GB, OK)
   physicaldrive 1I:1:4 (port 1I:box 1:bay 4, Solid State SATA, 480.1 GB, OK)
   physicaldrive 2I:1:7 (port 2I:box 1:bay 7, Solid State SATA, 480.1 GB, OK)
   physicaldrive 2I:1:8 (port 2I:box 1:bay 8, Solid State SATA, 480.1 GB, OK)

SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 500143802335E8FF)
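For reference, here's a minimal sketch of how an array like that is built with hpacucli. The slot number and drive addresses are assumptions; substitute your own from a show config run:

    # Confirm the controller slot and drive addresses first
    hpacucli ctrl slot=0 show config

    # Create a RAID 1+0 logical drive across the four SSDs
    hpacucli ctrl slot=0 create type=ld drives=1I:1:3,1I:1:4,2I:1:7,2I:1:8 raid=1+0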
I have a few of these drives sitting here as I type...
[Photo, left to right: 400GB SAS MLC Enterprise SSD, 200GB SAS SLC Enterprise SSD, 120GB SATA MLC crap SSD]
The rest of the items in your question are not an issue...
- You don't need external storage. External storage actually shares a 4-lane SAS connection (24Gbps == 4 x 6Gbps) back to the controller. The "multiple channels" you refer to are the same as "dual domain" or simply multipath SAS links. This is more of a resiliency feature than a performance feature in this context. See: Using both expanders in HP D2700
- Internal disks are fine, as they each have dedicated 6Gbps links back to the P410i RAID controller.
- Your problem here is the SSD you're using. Even four 300GB 10k RPM SAS drives will run better than the one HP SATA SSD you have now.
Further reading:
- HP D2700 enclosure and SSDs. Will any SSD work?
- Third-party SSD in Proliant g8?
- Why are enterprise SAS disk enclosures seemingly so expensive?
Best Answer
I think that "The LSI card does not have a BBU (yet) so write-back is disabled" is the bottleneck.
If you have a UPS, enable write-back.
If not, try to get the BBU.
If you can't, you have two options: enable write-back and risk the data consistency of the virtual drive (the cached data is lost if the power fails), or stick to these speeds with write-through cache.
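If the card is a MegaRAID-family controller managed with MegaCli, the cache policy switches look something like this. This is a sketch from memory of LSI's CLI, so verify the exact property names against the MegaCli documentation for your firmware:

    # Show the current cache policy of all logical drives on adapter 0
    MegaCli -LDGetProp -Cache -LAll -a0

    # Enable write-back (only safe with a working BBU, or at least a UPS)
    MegaCli -LDSetProp WB -LAll -a0

    # Revert to write-through
    MegaCli -LDSetProp WT -LAll -a0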
Even if you align the partition to the logical volume (which is normally done automatically by most modern OSes) and format the volume with a cluster/block size big enough for a single I/O request to span all the drives (I think it should be 2 MB in your case), I don't think you will see a very big difference in write performance. That's because RAID 5 writes carry a lot of overhead, and since the cache is write-through, the XOR processor doesn't have the whole stripe in cache to perform the parity calculations in real time, I think.
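The overhead is easy to see in the classic small-write penalty: anything smaller than a full stripe turns one application write into four disk operations.

    read old data + read old parity + write new data + write new parity = 4 I/Os per write
    new parity = old parity XOR old data XOR new data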
With write-back cache enabled on four 320GB HDDs in a RAID 5 with a 512 KB stripe size, I get an average of 250-350 MB/s writing big sequential files, or around 150 MB/s copying big files within the virtual volume. (I still don't have a BBU, but I do have an old APC 700VA Smart-UPS, which I think is enough to make a power loss, and the resulting cache loss, a lot less likely.)
Are we discussing 100% random, 100% sequential, or some mixed pattern? I mostly see high speeds when I read, write, or copy big files on/from/to my array. On the other hand, as already said, random writes (and reads) are much lower, varying from less than 1 MB/s up to 190 MB/s average depending on the file sizes and/or request sizes; mostly under 20 MB/s in everyday small-file use. So in real life it depends a lot on the applications behind the random transfers. Since I'm using a Windows OS, my volumes stay pretty much defragmented, and big operations like copying large files from/to the array are pretty fast.
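If you want to pin down your own mix, a synthetic tester like fio (it runs on Windows as well as Linux) can reproduce both patterns. The file name and sizes below are just placeholders:

    # Big sequential writes, 1 MB blocks
    fio --name=seq --rw=write --bs=1M --size=4G --direct=1 --filename=testfile

    # Small random writes, 8 KB blocks, deeper queue
    fio --name=rand --rw=randwrite --bs=8k --size=4G --iodepth=16 --direct=1 --filename=testfile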
And one suggestion as a solution to the slow random read/write speeds of normal HDDs: if you get to the point of reconfiguring your whole controller anyway, why not consider CacheCade, using one or two of the SSDs as a power-loss-safe RAID cache (something like the Adaptec hybrid RAIDs) and the rest as your OS/app drive, as you are using them now? This way you should be able to boost the speed of your RAID 5 volume even with write-through, I think, because the actual writes to the physical HDDs take place in the background, and since you are using write-through (no onboard controller cache) with the SSDs as the cache instead, you should be worry-free about system resets. But for actual and concrete information on how CacheCade works, please read LSI's documentation and ask LSI's technical support, as I haven't had the chance to use it yet.