Disk performance below expectations

Tags: hard-drive, io, performance, windows-server-2008-r2

This is a follow-up to a previous question that I asked (Two servers with inconsistent disk speed).

I have a PowerEdge R510 server with an integrated PERC H700 RAID controller (call this Server B) that was built using eight 3 Gbps disks, which I was comparing with an almost identical server (call this Server A) built using four 6 Gbps disks. Server A had much better I/O rates than Server B.

Once I discovered the difference in the disks, I had Server B rebuilt with faster 6 Gbps disks. Unfortunately this resulted in no increase in disk performance. Expecting that there must be some other configuration difference between the servers, we took the 6 Gbps disks out of Server A and put them in Server B. This also resulted in no increase in disk performance.

We now have two identical servers, except that one is built with six 6 Gbps disks and the other with eight 3 Gbps disks, and the I/O rates of the disks are pretty much identical. This suggests that there is some bottleneck other than the disks, but I cannot understand how Server A originally had better I/O that has subsequently been 'lost'.

Comparative I/O figures, as measured by SQLIO, are below. The same parameters were used for each test. It is not the absolute numbers that are significant but rather the variation between systems. In each case D: is a two-disk RAID 1 volume and E: is a four-disk RAID 10 volume (except on the original Server A, where E: was a two-disk RAID 0 volume).

Server A (original setup with 6 Gbps disks)

D: Read     63 MB/s
D: Write    170 MB/s
E: Read     68 MB/s
E: Write    320 MB/s

Server B (original setup with 3 Gbps disks)

D: Read     52 MB/s
D: Write    88 MB/s
E: Read     112 MB/s
E: Write    130 MB/s

Server A (new setup with 3 Gbps disks)

D: Read     55 MB/s
D: Write    85 MB/s
E: Read     67 MB/s
E: Write    180 MB/s

Server B (new setup with 6 Gbps disks)

D: Read     61 MB/s
D: Write    95 MB/s
E: Read     69 MB/s
E: Write    180 MB/s

Can anybody suggest what is going on here?

The drives in use are as follows:

Best Answer

You need to put less focus on the interface's maximum speed and look more at the physical disk's performance characteristics, as these are typically the bottleneck. This is described on the spec sheet for the Hitachi HUS153030VLS300 300 GB SAS server disk you linked.

In terms of performance, the important figures listed in the Hitachi PDF are:

  • Data buffer (MB) 16
  • Rotational speed (RPM) 15,000
  • Latency average (ms) 2.0
  • Media transfer rate (Mbits/sec, max) 1441
  • Sustained transfer rate (MB/sec, typ.) 123-72 (zone 0-19)
  • Seek time (read, ms, typical) 3.6 / 3.4 / 3.4

Since none of these figures allow the disk to saturate even a 3 Gbps channel, there is no point in it having a 6 Gbps channel.
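As a rough sanity check (assuming SAS uses 8b/10b encoding, so each data byte costs 10 bits on the wire), the usable bandwidth of each link speed can be compared against the drive's sustained transfer rate from the spec sheet:

```python
# Rough sanity check: SAS 1.x/2.x use 8b/10b encoding, so each data byte
# costs 10 bits on the wire. Usable bandwidth ~= line_rate / 10.
def usable_mb_per_s(line_rate_gbps):
    return line_rate_gbps * 1e9 / 10 / 1e6  # bytes/s converted to MB/s

sustained_max = 123  # MB/s, the drive's outer-zone sustained rate from the PDF

for gbps in (3, 6):
    link = usable_mb_per_s(gbps)
    print(f"{gbps} Gbps link: ~{link:.0f} MB/s usable; "
          f"drive sustains at most {sustained_max} MB/s "
          f"({sustained_max / link:.0%} of the link)")
```

Even on the slower 3 Gbps link, the drive can use well under half the available bandwidth, so the interface speed is not the limiting factor.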

I cannot imagine a RAID controller that can utilise each disk's maximum performance in the same array at the same time. Assuming you have a two-disk RAID 1, the first capable of 60 MB/s sustained sequential reads and writes and the second of only 50 MB/s, writes to the array will be limited to 50 MB/s, while a decent RAID card can serve two simultaneous read streams, one at 60 MB/s and the other at 50 MB/s. The more complex the array, the more complicated these figures become.
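The mirror example above can be sketched as a toy model (an idealisation that ignores controller cache, command queueing and bus contention):

```python
# Toy throughput model for a two-way mirror (RAID 1), ignoring caches,
# command queueing and bus contention.
def raid1_write(disk_speeds):
    # Every write must go to all mirror members, so the slowest disk gates it.
    return min(disk_speeds)

def raid1_read(disk_speeds):
    # A good controller can direct independent read streams to different
    # members, so aggregate reads can approach the sum of the disks.
    return sum(disk_speeds)

disks = [60, 50]  # MB/s sustained, as in the example above
print("write:", raid1_write(disks), "MB/s")  # gated by the 50 MB/s disk
print("read: ", raid1_read(disks), "MB/s")   # up to 110 MB/s aggregate
```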

Some other notes:

  • The maximum transfer rate of a disk differs across the platter; it is typically fastest at the start (the outer zones) of the disk.
  • Sequential reads are the fastest sustained operations a disk can do; random reads and writes are significantly slower.
  • Typically a RAID controller will disable a disk's onboard write cache and will only use its own cache for writes if it has a healthy battery, or if you override the default.
  • I have read of instances where certain disk/RAID firmware combinations falsely detect a bad battery and disable all write caching, so update the firmware for both the disks and the RAID controller.
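To illustrate the sequential-versus-random point with the figures from the spec sheet above (a back-of-the-envelope estimate; real drives reorder requests and cache aggressively):

```python
# Back-of-the-envelope: small random I/O is dominated by seek plus rotational
# latency, so throughput collapses compared with sequential transfers.
def random_throughput_mb_s(seek_ms, latency_ms, block_kb):
    ios_per_sec = 1000.0 / (seek_ms + latency_ms)  # one mechanical trip per I/O
    return ios_per_sec * block_kb / 1024.0         # MB/s

# Figures from the Hitachi PDF quoted above:
seek_ms, latency_ms = 3.6, 2.0  # typical read seek, average rotational latency
for block_kb in (8, 64):
    mb_s = random_throughput_mb_s(seek_ms, latency_ms, block_kb)
    print(f"{block_kb} KB random reads: ~{mb_s:.1f} MB/s "
          f"(vs ~123 MB/s sequential)")
```

Even with 64 KB blocks, purely random reads on this drive would deliver around a tenth of its sequential rate, which is why access pattern matters far more than interface speed.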

There are some disks advertised as 6 Gbps high-performance disks that are in fact not that fast; they just have the 6 Gbps interface, and couldn't even saturate a 3 Gbps link anyway (about 357 MiB/s raw, or roughly 286 MiB/s of actual data after 8b/10b encoding overhead).

The main benefit of 6 Gbps SAS/SATA is for SSDs and port multipliers/expanders (i.e. attaching multiple disks to one SAS/SATA port).
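For example (a hypothetical configuration, assuming roughly 300 MB/s usable on a 3 Gbps link and 600 MB/s on a 6 Gbps link after 8b/10b encoding):

```python
# Hypothetical: four drives behind one expander/port multiplier, each
# sustaining the 123 MB/s outer-zone rate from the spec sheet above.
drives, per_drive = 4, 123          # MB/s each
aggregate = drives * per_drive      # MB/s demanded of the shared link

for label, usable in (("3 Gbps", 300), ("6 Gbps", 600)):
    verdict = "fits within" if aggregate <= usable else "exceeds"
    print(f"{label} link (~{usable} MB/s usable): "
          f"{aggregate} MB/s aggregate {verdict} the link")
```

The aggregate demand of several mechanical disks can exceed a shared 3 Gbps link even though no single disk comes close to saturating it, which is where the faster interface actually pays off.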
