iSCSI Transfer Rate Using Standard Gigabit Networking

gigabit-ethernet, iscsi, storage, storage-area-network

We just purchased a Dell storage server with (12) 7200 RPM SATA drives in RAID 10 and (4) gigabit network interface cards. It also has a PERC H700 controller card with 512 MB of on-board cache. We will be attaching hypervisors to the Dell server for block-level storage, and it will hold their virtual machines.

Our question is: using iSCSI, with a single gigabit connection between each hypervisor and the Dell storage server, is it true that the maximum theoretical transfer rate per hypervisor would be 1000 megabits / 8 = 125 megabytes per second? Or am I completely wrong, and iSCSI does some sort of compression that lets it achieve higher I/O throughput rates?
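Just to show our working, this is the raw conversion we did (a quick Python sketch, assuming the nominal 1 Gbit/s line rate and decimal megabytes, with no protocol overhead counted):

```python
# Raw line-rate conversion for a single gigabit link (no protocol overhead).
LINK_BITS_PER_SEC = 1_000_000_000  # 1 Gbit/s

bytes_per_sec = LINK_BITS_PER_SEC / 8          # 125,000,000 B/s
megabytes_per_sec = bytes_per_sec / 1_000_000  # 125 MB/s (decimal megabytes)

print(f"Theoretical maximum: {megabytes_per_sec:.0f} MB/s")
```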

125 megabytes per second actually seems somewhat slow, seeing as we have 12 spindles running RAID 10. What are some alternatives, besides Fibre Channel, for removing the network bottleneck? We are aware of enabling jumbo frames and will give that a try; anything else? What sort of performance should we expect using a single gigabit connection per hypervisor?
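For a rough sense of why the disks shouldn't be the limit for sequential work, here's a back-of-the-envelope sketch in Python; the 100 MB/s per-drive sequential figure is just an assumed typical number for a 7200 RPM SATA drive, not something we've measured:

```python
# Rough estimate of array sequential throughput vs. a single gigabit link.
DRIVES = 12
PER_DRIVE_SEQ_MBPS = 100   # assumed sequential MB/s for a 7200 RPM SATA drive

# RAID 10: reads can be serviced by all spindles; each write lands on a
# mirrored pair, so only half the spindles' worth of unique writes occur.
seq_read_mbps = DRIVES * PER_DRIVE_SEQ_MBPS          # ~1200 MB/s
seq_write_mbps = (DRIVES // 2) * PER_DRIVE_SEQ_MBPS  # ~600 MB/s

GIGABIT_LINK_MBPS = 125  # theoretical ceiling of one 1 Gbit/s link

print(f"Estimated sequential read : {seq_read_mbps} MB/s")
print(f"Estimated sequential write: {seq_write_mbps} MB/s")
print(f"Single gigabit link ceiling: {GIGABIT_LINK_MBPS} MB/s")
# For sequential workloads the single gigabit link, not the disks,
# is clearly the bottleneck.
```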

Best Answer

I'd say you'd be lucky to break 100 MB/s. In theory, yes, you could transfer 1000 Mbps, or 125 MB/s, but between the various layers of overhead (Ethernet, IP, and TCP headers, iSCSI itself, and the fact that some time has to be spent between packets) you will never actually see that.
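To put rough numbers on that overhead, here's a quick sketch of per-frame payload efficiency for a TCP stream (which is what iSCSI rides on) at standard and jumbo MTU; the header sizes are the usual Ethernet/IPv4/TCP values, and the iSCSI headers themselves are ignored here since they amortise over large data segments:

```python
# Payload efficiency of a TCP stream over Ethernet, at standard vs. jumbo MTU.
LINE_RATE_MBPS = 125.0          # 1 Gbit/s expressed in MB/s

ETH_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20        # IPv4 + TCP headers (no options)

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HEADERS   # TCP payload bytes per frame
    on_wire = mtu + ETH_OVERHEAD     # bytes actually occupying the wire
    efficiency = payload / on_wire
    print(f"MTU {mtu}: {efficiency:.1%} efficient, "
          f"~{LINE_RATE_MBPS * efficiency:.0f} MB/s of payload")
# MTU 1500: ~94.9% -> ~119 MB/s; MTU 9000: ~99.1% -> ~124 MB/s,
# before iSCSI headers, ACK traffic, and storage-side latency are counted.
```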

Also, don't forget that 125 MB/s (or less) is what you'll see flying out of the storage box in total; that has to be shared between all of the VM servers, so don't expect to see that rate going to each VM server.

To speed things up, either go to 10 Gbps networking (not cheap), or use EtherChannel / channel bonding / LACP / whatever your particular vendor likes to call it, and glue multiple 1 Gbps links together to form a larger pipe. If that isn't an option (the 1RU servers I've seen have only one expansion slot), then you might want to consider an alternate protocol -- personally, I think ATA over Ethernet is a sadly neglected option if you're looking for a good SAN (as opposed to NAS) protocol.
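As a rough illustration of the sharing and bonding arithmetic, here's a sketch using the numbers from the question; the hypervisor count is an assumption (the question doesn't say how many there are):

```python
# Back-of-the-envelope view of the storage server's bonded uplink capacity
# shared across several hypervisors (NIC count taken from the question).
STORAGE_NICS = 4          # gigabit ports on the storage server
LINK_MBPS = 125           # theoretical ceiling per gigabit link
HYPERVISORS = 4           # assumed count; one gigabit link each

aggregate_mbps = STORAGE_NICS * LINK_MBPS
per_hypervisor_if_all_busy = aggregate_mbps / HYPERVISORS

print(f"Aggregate storage-side bandwidth: {aggregate_mbps} MB/s")
print(f"Per hypervisor when all are busy: {per_hypervisor_if_all_busy:.0f} MB/s")
# Note: with LACP-style bonding a single TCP/iSCSI session normally hashes
# onto one member link, so any one hypervisor still tops out near 125 MB/s
# unless it uses multiple sessions (e.g. MPIO).
```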

Also, note that 7200 RPM SATA drives have quite poor performance, especially on random I/O, and, possibly more importantly, tend to have fairly annoying issues around error handling (even the so-called "Enterprise" drives aren't what I'd call adequate in a high-performance SAN environment). I've managed a SAN that used this sort of drive, and quite frankly I'd spend the extra money on decent drives if I were doing it again.
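To make the random-I/O point concrete, here's a rough per-drive IOPS estimate for a 7200 RPM drive; the 8.5 ms average seek time is an assumed typical figure, not a spec for these particular drives:

```python
# Rough random-I/O ceiling for a single 7200 RPM SATA drive.
RPM = 7200
AVG_SEEK_MS = 8.5                           # assumed typical average seek time

rotational_latency_ms = (60_000 / RPM) / 2  # half a revolution on average (~4.2 ms)
service_time_ms = AVG_SEEK_MS + rotational_latency_ms
iops_per_drive = 1000 / service_time_ms     # ~79 IOPS

print(f"~{iops_per_drive:.0f} random IOPS per drive")
print(f"~{iops_per_drive * 12:.0f} IOPS across 12 spindles (best case, reads)")
# Compare with 10k/15k drives (very roughly 130-180 IOPS each) to see why
# 7200 RPM SATA struggles under random virtual-machine workloads.
```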