1 I'm pretty sure there are BIOS checks on the drives; there are on other, older IBM disk arrays
2 They don't supply 2TB disks for it yet, but they likely will, and it will almost certainly support them
3 Yes, you can put in as many or as few disks as you like, though with fewer than 5-6 disks it doesn't make much sense to use the array
4 The DS3400 is the identical array with a Fibre Channel interface; it has been benchmarked using SPC-1 and SPC-2 here, and the performance will be very similar
5 The server's SAS card sees the RAID controller in the DS3200 rather than the raw disks. Presentation is controlled via the IBM Storage Manager client software, which connects over TCP/IP. You can build multiple RAID arrays on the DS3200, each of which appears to the server as a single disk.
6 The full configuration guide is available as an IBM Redbook and includes multiple screenshots
7 The DS3200 internal controller handles the RAID
The IBM DS3000 series arrays are pretty good at what they do; they're also pretty dumb compared to most other arrays out there, but they are cheap. The design is based on an LSI model, and Dell sells an essentially identical MD3000 disk array.
Hope that all helps
I think that "The LSI card does not have a BBU (yet) so write-back is disabled" is the bottleneck.
If you have a UPS, enable write-back.
If not, try to get the BBU.
If you can't, you can either enable write-back and risk data consistency on the virtual drive by losing the cached data in the event of a power failure, or stick to these speeds using write-through cache.
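If it helps, on LSI cards managed with the MegaCLI utility the cache policy can usually be switched from the command line; the lines below are a sketch from memory, so check your MegaCLI version's help output before running them:

MegaCli -LDSetProp WB -LAll -aAll
MegaCli -LDSetProp CachedBadBBU -LAll -aAll

The first sets write-back (honored only while the BBU is healthy); the second keeps write-back active even with a bad or missing BBU, which carries exactly the data-loss risk described above.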
Even if you align the partition to the logical volume (which most modern OSes do automatically) and format the volume with an optimized cluster/block size big enough to get all the drives processing a single IO request (I think it should be 2 MB in your case), I don't think you will see a very big write performance difference.
That's because RAID 5 writes are a high-overhead process: any write smaller than a full stripe forces the controller to read the old data and parity, compute the new parity, and write both back. And since the cache is write-through, the XOR processor doesn't have the whole stripe in cache to perform the parity calculations in real time, I think.
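For what it's worth, on Linux with ext4 the stripe alignment hints can be passed at format time; this is just a sketch assuming a 4-disk RAID 5 with a 512 KB per-disk stripe and 4 KB filesystem blocks (stride = 512/4 = 128 blocks, stripe-width = 128 x 3 data disks = 384):

mkfs.ext4 -b 4096 -E stride=128,stripe-width=384 /dev/sdX1

On Windows the equivalent knob is the allocation unit size chosen when formatting the volume.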
With write-back cache enabled on 4x320 GB HDDs in a 512 KB stripe-size RAID 5, I get an average of 250-350 MB/s writing big sequential files, or around 150 MB/s copying big files within the virtual volume.
(I still don't have a BBU, but I have an old APC 700VA Smart-UPS, so I think that's enough to greatly reduce the risk of power loss and the resulting cache loss.)
Are we discussing 100% random, 100% sequential, or some mixed pattern? I mostly see high speeds when I read, write, or copy big files on/from/to my array. On the other hand, as already said, random writes (and reads) are much lower, varying from under 1 MB/s up to 190 MB/s average depending on the file sizes and/or request sizes, and mostly under 20 MB/s in everyday small-file use. So real-life random transfer speeds depend a lot on the application. As I am using a Windows OS, my volumes are pretty much defragmented, and big operations on big files, like copying from/to the array, are pretty fast.
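If you want to measure this rather than infer it from file copies, fio (available for Linux and Windows) can compare the two patterns directly; the file name and sizes here are only placeholders:

fio --name=seqwrite --filename=testfile --size=4G --bs=1M --rw=write --direct=1
fio --name=randwrite --filename=testfile --size=4G --bs=4k --rw=randwrite --direct=1

The first job measures large sequential writes, the second small random writes, both bypassing the OS cache via --direct=1.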
And one suggestion as a solution to the slow random read/write speeds of normal HDDs: if you get to the point of reconfiguring your whole controller, why not consider CacheCade, using 1 or 2 of the SSDs as a power-loss-tolerant RAID cache (something like Adaptec's hybrid RAID) and the rest as your OS/app drive, as you are using them now? This way you should be able to boost the speed of your RAID 5 volume even with write-through, I think, because the actual writes to the physical HDDs should take place in the background; and since you would be using write-through cache (no onboard controller cache) with the SSDs as cache instead, I think you should be worry-free about system resets. But for actual and concrete information on how CacheCade works, please read LSI's documentation and ask LSI's technical support, as I haven't had the chance to use it yet.
Best Answer
I had the same question 2 months ago. After I sent in a failed disk, the replacement disk failed in my NAS after 3 days. So I decided I would now test new replacement disks before putting them in production. I do not test every new disk I buy, only 'refurbished' disks, which I do not completely trust.
If you decide you want to test these disks, I would recommend running a badblocks scan and an extended SMART test on the brand new hard disk.
On a 2TB disk this takes up to 48 hours. The badblocks command writes a pattern across the whole disk, then reads the blocks back to check that the pattern is actually there, and repeats this with 4 different patterns.
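The destructive write-mode scan I ran looks like this (see the warning below; replace sdX with your device):

badblocks -wsv /dev/sdX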
This command will probably not actually report any bad blocks on a new disk, since modern disks reallocate bad sectors transparently.
So before and after this I run a SMART test and check the reallocated and current pending sector counts. If either of these has gone up, your disk already has some bad blocks and might prove untrustworthy.
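Checking those two attributes with smartctl looks like this:

smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'

(Attribute names can vary slightly between vendors, so eyeball the full smartctl -A output if the grep comes back empty.)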
After this I run an extended SMART test again.
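Starting the extended test and reading the result back afterwards:

smartctl -t long /dev/sdX
smartctl -a /dev/sdX

The -t long command returns immediately and the test runs inside the drive; smartctl -a shows the self-test log once it has finished.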
You might want to install smartmontools (which provides smartctl) first.
Warning: the badblocks -w flag will overwrite all data on your disk. If you just want to do a read-only check, without overwriting the disk, use
badblocks -vs /dev/sdX
If after this your SMART values seem OK, I would trust the disk.
To know what each SMART value means, you can start looking here:
http://en.wikipedia.org/wiki/Self-Monitoring,_Analysis,_and_Reporting_Technology