RAID0 (both hardware and md) slower than LVM

Tags: benchmark, lvm, raid

I'm doing some benchmarking of nodes as we build out a distributed filesystem. Since files are going to be distributed and replicated across many nodes, we're using RAID0 on the nodes themselves. However, I'm getting some odd performance numbers, and I'm curious if the StackOverflow community can help figure out why. I'm using fio as my benchmarking tool; note that version 1.38 did not work for me, and I needed to use version 1.59 or 1.60.

Here is my fio configuration file:

[global]
directory=/mnt/gluster
lockfile=readwrite
ioengine=libaio
iodepth=8
rw=randrw
nrfiles=200
openfiles=30
runtime=900

[file]
filesize=16-8k/8M-512M
size=1GiB
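
To run the job, save it to a file and point fio at it; the filename below is just a placeholder:

  fio randrw-test.fio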

For a RAID0 using software RAID (md), I got the following results (snipped to the essentials):

  read : io=285240KB, bw=324535 B/s, iops=79 , runt=900011msec
  write: io=283532KB, bw=322592 B/s, iops=78 , runt=900011msec
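
For reference, an md RAID0 like this is typically assembled with something along these lines; the device names and 64k chunk size below are placeholders, not necessarily my exact setup:

  mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  cat /proc/mdstat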

On a RAID1 using software RAID, I got the following results:

  read : io=683808KB, bw=778021 B/s, iops=189 , runt=900000msec
  write: io=488184KB, bw=628122 B/s, iops=153 , runt=795864msec

Single-disk performance still beat the RAID0 performance:

  read : io=546848KB, bw=622179 B/s, iops=151 , runt=900018msec
  write: io=486736KB, bw=591126 B/s, iops=144 , runt=843166msec

LVM striped across the four disks with 4k extents:

  read : io=727036KB, bw=827198 B/s, iops=201 , runt=900007msec
  write: io=489424KB, bw=604693 B/s, iops=147 , runt=828800msec
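
For reference, a striped LV like this would be created with roughly the following; the VG/LV names and size are placeholders, and -I 4 assumes the "4k" above refers to the stripe size (-i 4 stripes across the four disks):

  pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
  vgcreate vg_bench /dev/sdb /dev/sdc /dev/sdd /dev/sde
  lvcreate -n lv_bench -i 4 -I 4 -L 500G vg_bench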

Hardware RAID0 (HighPoint RocketRAID 2740):

  read : io=326884KB, bw=371918 B/s, iops=90 , runt=900008msec
  write: io=328824KB, bw=374125 B/s, iops=91 , runt=900008msec

Note that the first four results above were run on just the motherboard's SATA controllers; however, I reproduced the software RAID results after moving to the RocketRAID card. These are 1 TB SATA drives. Running multithreaded tests delivered about the same results. Is there any reason RAID0 would run this slowly? I thought its random-I/O performance would be superior to a single drive or RAID1.


Follow-up: Based on some suggestions from Joe at Scalable Informatics (nice guy, buy his stuff!), I changed my test to use a deeper queue and a randomized block size range.

[global]
directory=/mnt/glusterfs
lockfile=readwrite
ioengine=libaio
iodepth=32
rw=randrw
numjobs=8

[file]
filesize=16-8k/8M-512M
blocksize_range=64k-1M
size=1GiB
nrfiles=50
openfiles=8
runtime=900
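
Side note for anyone reproducing this: with numjobs=8, fio reports each job separately unless aggregation is turned on; adding the line below to the [global] section gives a single combined read/write result instead.

  group_reporting=1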

And the end result is that the HighPoint RocketRaid 2740 card sucks.

  • Single-disk performance (SuperMicro motherboard SATA): 43.2 MB/s read, 42.6 MB/s write
  • MD RAID0 (RocketRAID used as HBA, with or without drivers loaded): 53.1 MB/s read, 54.1 MB/s write
  • RocketRAID RAID0: 29.4 MB/s read, 29.2 MB/s write
  • MD RAID0 (motherboard SATA): 58.0 MB/s read, 58.3 MB/s write

Everything connected to the RocketRAID card was slower across the board.

I'm going to leave this question open. We're getting a new slate of RAID cards in the next week or two to test, and I'm still looking for tuning tips on getting more than single-disk performance, since that part hasn't been thoroughly answered.

Best Answer

You may be running into a stripe-size issue where the data being written or read is hotspotting a single disk; see the software RAID HOWTO for details. You can check whether this is the case by looking at the output of iostat.
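
For example, something like this (extended per-device stats every five seconds) will show whether one member disk is sitting near 100% utilization while the others idle:

  iostat -dx 5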

If you would like to check linear (sequential) access performance, try "hdparm -t" or dd for benchmarking; these should show numbers that are approximately double a single disk's performance.
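
For example (the md device, path, and sizes below are just illustrative; the direct flags bypass the page cache so you measure the disks rather than RAM):

  hdparm -t /dev/md0
  dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=4096 oflag=direct
  dd if=/mnt/gluster/ddtest of=/dev/null bs=1M iflag=direct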
