This all boils down to risk management. Doing a proper cost/risk analysis of your IT systems will help you figure out where to spend the money and which risks you can, or have to, live with. There's a cost associated with everything, and that includes both HA and downtime.
I work at a small place, so I understand this struggle. The IT geek in me wants no single points of failure anywhere, but the cost of doing that at every level is not realistic. Here are a few things I've been able to do without a huge budget. Note that these don't always remove the single point of failure; sometimes they just reduce the impact.
Network Edge: We have two internet connections, a T1 and Comcast Business. We're planning to move our firewall to a pair of old computers running pfSense, using CARP for HA.
Network: A couple of managed switches at the network core, with the critical servers' NICs bonded across both switches, prevents a single switch failure from taking out the entire data closet.
Servers: All servers have RAID and redundant power supplies.
Backup Server: An older system that isn't as powerful as the main file server, but has a few large SATA drives in RAID 5, takes hourly snapshots of the main file server. I have scripts set up so it can switch roles and become the primary file server should the main one go down.
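The hourly-snapshot idea can be sketched with rsync's `--link-dest` hard-link trick, run from cron on the backup box. This is a hypothetical minimal version, not the author's actual script; the paths and the demo payload are made up, and in production the source would be the file server itself:

```shell
#!/bin/sh
# Hypothetical hourly snapshot sketch (not the actual script from the post).
# rsync --link-dest makes each snapshot look like a full copy, while
# unchanged files are hard links into the previous snapshot, so only
# changed files consume new disk space.
SRC=/tmp/snapdemo/src            # in production: fileserver:/srv/data/
DEST=/tmp/snapdemo/snapshots
STAMP=$(date +%Y%m%d-%H%M%S)

mkdir -p "$SRC" "$DEST"
echo "hello" > "$SRC/important.txt"   # demo payload for this sketch

PREV=$(ls -1 "$DEST" | tail -n 1)
if [ -n "$PREV" ]; then
    # Incremental snapshot: hard-link unchanged files from the last one
    rsync -a --link-dest="$DEST/$PREV" "$SRC/" "$DEST/$STAMP/"
else
    # First snapshot: plain full copy
    rsync -a "$SRC/" "$DEST/$STAMP/"
fi
echo "snapshot $STAMP done"
```

Rotating old snapshots and promoting the box to primary are separate concerns the real scripts would also have to handle.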
Offsite Backup Server: Similar to the onsite backup, we do nightly backups over a VPN tunnel to a server at one of the owners' houses.
Virtual Machines: A pair of physical servers run a number of services inside virtual machines using Xen. The VM images live on an NFS share on the main file server, so I can live-migrate guests between the physical servers if the need arises.
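For reference, a live migration between two Xen hosts boils down to something like the following with the `xl` toolstack. The guest and host names here are made up, and both hosts must be configured to accept migrations and mount the same NFS export:

```shell
# List the guests running on the current host
xl list

# Push the running guest "web01" to the other physical server.
# This works because the disk image lives on the shared NFS export,
# so only the guest's memory state has to move over the wire.
xl migrate web01 xen-host2
```

With shared storage the guest keeps running during the transfer, with only a brief pause at the final switchover.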
I think that "The LSI card does not have a BBU (yet) so write-back is disabled" is the bottleneck.
If you have a UPS, enable write-back.
If not, try to get the BBU.
If you can't do either, you have two options: enable write-back and risk the data consistency of the virtual drive (the cached data is lost if power fails), or stick with these speeds using write-through cache.
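If it's an LSI MegaRAID card, the cache policy can be inspected and changed with MegaCLI. These are standard MegaCLI invocations, but check them against your controller's documentation first (adapter and logical-drive numbering varies, and some distros ship the binary as `MegaCli64`):

```shell
# Show the current cache policy of all logical drives on all adapters
MegaCli -LDGetProp Cache -LAll -aAll

# Enable write-back; the controller drops back to write-through
# on its own if the BBU is missing or bad
MegaCli -LDSetProp WB -LAll -aAll

# Force write-back even with no/bad BBU -- only sane behind a good UPS,
# since any cached writes are lost if power fails
MegaCli -LDSetProp CachedBadBBU -LAll -aAll
```

The last command is exactly the "risk it on a UPS" option described above; `NoCachedBadBBU` reverts to the safe behavior.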
Even if you align the partition to the logical volume (which most modern OSes do automatically) and format the volume with a cluster/block size big enough to spread a single IO request across all the drives (I think it would be about 2 MB in your case), I don't think you will see a very big difference in write performance. RAID 5 writes carry a lot of overhead, and since you're running write-through, the XOR processor doesn't have the whole stripe in cache to perform the parity calculations in real time, I think.
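As a concrete illustration of the alignment math: ext4 can be told the array geometry at mkfs time via `stride` and `stripe-width`. The chunk size and disk count below are assumptions taken from the follow-up post's own array, not from the original question:

```shell
#!/bin/sh
# Illustrative ext4 alignment math for a hardware RAID 5.
# CHUNK_KB and DISKS are example values, not from the question.
CHUNK_KB=512          # controller chunk (per-disk stripe) size
BLOCK_KB=4            # ext4 block size
DISKS=4
DATA_DISKS=$((DISKS - 1))              # RAID 5: one disk's worth of parity
STRIDE=$((CHUNK_KB / BLOCK_KB))        # filesystem blocks per chunk
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))  # filesystem blocks per full stripe
# A full data stripe here is 512 KB x 3 = 1536 KB
echo "mkfs.ext4 -E stride=$STRIDE,stripe-width=$STRIPE_WIDTH /dev/sdX1"
```

Matching the filesystem layout to the full stripe helps the controller do full-stripe writes, which avoids the RAID 5 read-modify-write penalty, though as noted above it won't rescue write-through performance by itself.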
With write-back cache enabled on a RAID 5 of 4x 320 GB HDDs with a 512 KB stripe size, I get an average of 250-350 MB/s writing big sequential files, and around 150 MB/s on average copying big files within the virtual volume.
(I still don't have a BBU, but I do have an old APC 700 VA Smart-UPS, so I think that's enough to ride out most power failures and greatly reduce the chance of losing the cache.)
Are we discussing 100% random, 100% sequential, or some mixed pattern? I mostly see high speeds when I fully read, write, or copy big files on/from/to my array. Random writes (and reads), as already said, are much lower, varying from under 1 MB/s up to an average of 190 MB/s depending on file sizes and/or request sizes, and mostly under 20 MB/s in everyday small-file use. So real-life random transfer speed depends a lot on the application. Since I'm using a Windows OS, my volumes stay pretty much defragmented, and big operations like copying large files from/to the array are pretty fast.
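To pin down which pattern you're actually measuring, a tool like fio makes the distinction explicit. This is a generic sketch (the mount point, file name, and sizes are arbitrary; fio also runs on Windows, where the filename syntax differs):

```shell
# Sequential writes with a big block size -- the pattern behind the
# 250-350 MB/s figures quoted above
fio --name=seq-write --filename=/mnt/raid5/fio.test --size=4G \
    --rw=write --bs=1M --direct=1 --ioengine=libaio

# Random 4 KB writes -- the pattern where write-through RAID 5 hurts most
fio --name=rand-write --filename=/mnt/raid5/fio.test --size=4G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio
```

Comparing the two runs on the same volume shows how much of the "slow array" is really just the random-write penalty.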
And one suggestion for the slow random read/write speeds of ordinary HDDs: if you get to the point of reconfiguring the whole controller, why not consider CacheCade, using one or two of the SSDs as a non-power-dependent RAID cache (something like Adaptec's hybrid RAID) and keeping the rest for your OS/app drive as you use them now? That way you should be able to boost the speed of your RAID 5 volume even with write-through, I think, because the actual writes to the physical HDDs take place in the background; and since you'd be using write-through (no on-board controller cache) with the SSDs as the cache instead, you should be worry-free about system resets. But for actual, concrete information on how CacheCade works, please read LSI's documentation and perhaps ask LSI's technical support, as I haven't had the chance to use it yet.
You have a few PCIe form factors:
You have different card and/or bracket heights:
And different card lengths:
"Low-profile" is synonymous with half-height, half-length (HHHL).
An HBA like the HP H240 is a half-length card that ships with both full-height and half-height brackets, so it can fit into either type of server slot. With a server like an HP DL360p Gen8, this gives you options on card placement.