Your system is definitely underperforming based on your hardware specifications. I loaded the sysbench
utility on a couple of idle HP ProLiant DL380 G6/G7 servers running CentOS 5/6 to check their performance. These use normal fixed partitions rather than LVM. (I don't typically use LVM, given the flexibility offered by HP Smart Array controllers.)
The DL380 G6 has a 6-disk RAID 1+0 array on a Smart Array P410 controller with 512MB of battery-backed cache. The DL380 G7 has a 2-disk enterprise SLC SSD array. The filesystems are XFS. I used the same sysbench command line as you did:
sysbench --init-rng=on --test=fileio --num-threads=16 --file-num=128 --file-block-size=4K --file-total-size=54G --file-test-mode=rndrd --file-fsync-freq=0 --file-fsync-end=off --max-requests=30000 run
My result was 1,595 random reads per second on the 6-disk array.
On the SSD array, the result was 39,047 random reads per second. Full results are at the end of this post...
As for your setup, the first thing that jumps out at me is the size of your test partition. You're nearly filling the 60GB partition with 54GB of test files. I'm not sure whether ext4 has trouble performing at 90%+ utilization, but that's the quickest thing for you to modify and retest (or use a smaller set of test data).
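For example, a retest with a smaller working set might look like the following (a sketch only; the 16G figure is an arbitrary smaller size, adjust it for your partition, and re-run the prepare step with the new size before running):

sysbench --init-rng=on --test=fileio --num-threads=16 --file-num=128 --file-block-size=4K --file-total-size=16G --file-test-mode=rndrd --file-fsync-freq=0 --file-fsync-end=off --max-requests=30000 prepare

sysbench --init-rng=on --test=fileio --num-threads=16 --file-num=128 --file-block-size=4K --file-total-size=16G --file-test-mode=rndrd --file-fsync-freq=0 --file-fsync-end=off --max-requests=30000 run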
Even with LVM, there are some tuning options available on this controller/disk setup. Checking the read-ahead value and changing the I/O scheduler from the default cfq to deadline or noop are both helpful. Please see the question and answers at: Linux - real-world hardware RAID controller tuning (scsi and cciss)
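Something along these lines should let you check and adjust both, assuming the logical drive shows up as /dev/sda (on the older cciss driver the device and sysfs names will differ):

# Read-ahead, in 512-byte sectors -- check the current value, then raise it
blockdev --getra /dev/sda
blockdev --setra 4096 /dev/sda

# I/O scheduler -- check the current setting, then switch to deadline
cat /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sda/queue/scheduler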
What is your RAID controller cache ratio? I typically use a 75%/25% write/read balance. This should also be a quick test: the 6-disk array completed the run in 18 seconds, while yours took over 2 minutes.
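You can check and change the ratio with hpacucli; roughly like this, assuming the controller is in slot 0 (adjust the slot number and the split to taste):

# Show the current cache ratio
hpacucli ctrl slot=0 show | grep -i 'cache ratio'

# Set a 25% read / 75% write split
hpacucli ctrl slot=0 modify cacheratio=25/75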
Can you run a bonnie++ or iozone test on the partition/array in question? It would be helpful to see if there are any other bottlenecks on the system. I wasn't familiar with sysbench, but I think these other tools will give you a better overview of the system's capabilities.
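As a rough sketch, a bonnie++ run against the array could look like this (the directory and size are placeholders; the -s value should be at least twice your RAM so the page cache doesn't mask disk performance, and -n 0 skips the small-file creation tests):

bonnie++ -d /path/to/test/dir -s 16g -n 0 -u root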
Filesystem mount options may make a small difference, but I think the problem could be deeper than that...
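If you do experiment there, the usual suspects for ext4 on an array with battery-backed write cache are noatime and nobarrier; for example (mount point is a placeholder, and disable barriers only if the controller cache really is battery-backed):

mount -o remount,noatime,nobarrier /data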
hpacucli output...
Smart Array P410i in Slot 0 (Embedded)    (sn: 50123456789ABCDE)

   array A (SAS, Unused Space: 0 MB)

      logicaldrive 1 (838.1 GB, RAID 1+0, OK)

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 300 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 300 GB, OK)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 300 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 300 GB, OK)
      physicaldrive 2I:1:5 (port 2I:box 1:bay 5, SAS, 300 GB, OK)
      physicaldrive 2I:1:6 (port 2I:box 1:bay 6, SAS, 300 GB, OK)

   SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 50123456789ABCED)
sysbench DL380 G6 6-disk results...
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
Initializing random number generator from timer.
Extra file open flags: 0
128 files, 432Mb each
54Gb total file size
Block size 4Kb
Number of random requests for random IO: 30000
Read/Write ratio for combined random IO test: 1.50
Using synchronous I/O mode
Doing random read test
Threads started!
Done.
Operations performed: 30001 Read, 0 Write, 0 Other = 30001 Total
Read 117.19Mb Written 0b Total transferred 117.19Mb (6.2292Mb/sec)
1594.67 Requests/sec executed
Test execution summary:
    total time:                          18.8133s
    total number of events:              30001
    total time taken by event execution: 300.7545
    per-request statistics:
         min:                                  0.00ms
         avg:                                 10.02ms
         max:                                277.41ms
         approx.  95 percentile:              25.58ms

Threads fairness:
    events (avg/stddev):           1875.0625/41.46
    execution time (avg/stddev):   18.7972/0.01
sysbench DL380 G7 SSD results...
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
Initializing random number generator from timer.
Extra file open flags: 0
128 files, 432Mb each
54Gb total file size
Block size 4Kb
Number of random requests for random IO: 30000
Read/Write ratio for combined random IO test: 1.50
Using synchronous I/O mode
Doing random read test
Threads started!
Done.
Operations performed: 30038 Read, 0 Write, 0 Other = 30038 Total
Read 117.34Mb Written 0b Total transferred 117.34Mb (152.53Mb/sec)
39046.89 Requests/sec executed
Test execution summary:
    total time:                          0.7693s
    total number of events:              30038
    total time taken by event execution: 12.2631
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.41ms
         max:                                  1.89ms
         approx.  95 percentile:               0.57ms

Threads fairness:
    events (avg/stddev):           1877.3750/15.59
    execution time (avg/stddev):   0.7664/0.00
I swear, when I read questions like this, I wonder why people are even using 4TB disks in inexpensive servers with low-end controllers. Do you really need 8TB-12TB usable in a four-disk setup? What are you storing?!?
Either way, the Smart Array B110i controller is not compatible with drives larger than 2TB in RAID. It's not well documented or widely noted online, but you've run into a product limitation. Remember, this controller predates the introduction of 4TB disks by a bit.
If you need to use those specific third-party disks, you're going to have to swap controllers or use software RAID.
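If you go the software RAID route, a minimal mdadm sketch might look like the following, with the B110i switched to plain SATA/AHCI mode first (the device names are assumptions; verify them with fdisk -l before running anything destructive):

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf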
FreeNAS is pretty crappy when it comes to hardware support.
HP ProLiant systems aren't really well suited to FreeBSD and its variants, so I tend to steer people toward Linux or other supported operating systems when using ProLiant server hardware. This is mainly due to hardware, driver, and management agent support.
The HP array driver in FreeNAS is the likely cause of your issues with the Smart Carrier LED animation.
This works fine under Linux and in HBA mode.