8Gb Fiber Channel HBA vs. 10Gb SFP+ Converged HBA

10gbethernet · fiber · hba · iops · sharepoint-2010

I am putting together a Dell server, specifically an R720, and I have to select the correct Host Bus Adapter. The HBA on the R720 will connect to a storage device. I am confused between these two:

  • QLogic 2562, Dual Port 8Gb Optical Fiber Channel HBA (price $2,045)
  • QLogic 8262, Dual Port 10Gb SFP+, Converged Network Adapter (price $1,618)

I thought that since the QLogic 2562 is a Fibre Channel adapter and is more expensive, it would be faster in terms of IOPS. But it is an 8Gb adapter, as opposed to the 10Gb of the SFP+ one.

My questions:

  • Which one is better (IOPS performance, etc.)?
  • Why should I choose one over another?

Best Answer

FC and 10GE use different bit-encoding mechanisms, which dictate the maximum theoretical throughput of each. FC uses 8b/10b encoding while 10GE uses 64b/66b. What this means is that on an FC link, 10 bits are sent for each byte of actual data. Applied to the 8.5 Gbps underlying line rate of 8G FC, this comes out to 8.5 × 0.8 = 6.8 gigabits per second of payload. For 10GE this number ends up at about 9.7 Gbps, or roughly 42% faster. There's some nominal amount lost in FCoE to Ethernet headers, of course, but it's a very small amount when compared to a ~2.3 KB frame.
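The arithmetic above can be sketched in a few lines; the line rates and encoding ratios are the ones quoted in the answer (frame-header overhead ignored):

```python
# Effective (post-encoding) payload throughput for 8G FC vs. 10GE.

def effective_gbps(line_rate_gbps, data_bits, total_bits):
    """Payload rate left after the line-encoding overhead."""
    return line_rate_gbps * data_bits / total_bits

fc_8g = effective_gbps(8.5, 8, 10)    # 8b/10b: 8 data bits per 10 on the wire
ge_10 = effective_gbps(10.0, 64, 66)  # 64b/66b: 64 data bits per 66 on the wire

print(f"8G FC: {fc_8g:.2f} Gbps")                  # 6.80 Gbps
print(f"10GE:  {ge_10:.2f} Gbps")                  # 9.70 Gbps
print(f"10GE is {ge_10 / fc_8g - 1:.1%} faster")   # ~42.6% faster
```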

That said, the useful bandwidth of the 10GE FCoE can be shared with other network data, although there are environments that dedicate 10GE FCoE to -just- storage traffic. There are a few things to consider when looking at converged fabric, including:

1.) What's the actual amount of data crossing the notional FC link? Very, very few of the SANs that I've seen (in some very large networks) have even a handful of consistently busy 4G ports, much less 8G. Most of the world would probably operate fine on 2G (..and much of it does).

2.) There are mechanisms in various implementations of DCB to guarantee lossless bandwidth to FCoE traffic. This means that if you set aside 4 Gbps for storage traffic, that bandwidth will be available between the CNA and the switch under all circumstances - but in instances where the additional 6 Gbps is not otherwise in use, it will also be made available to storage. By the same token, all 10 Gbps is potentially available for normal data if said bandwidth isn't otherwise in use. The specifics of how these allocations are accomplished will be somewhat vendor-dependent, but the overall behavior should be similar.
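A minimal sketch of that sharing behavior (a hypothetical helper, not any vendor's API - real ETS implementations also share spare bandwidth proportionally rather than first-come-first-served): each traffic class is guaranteed its configured floor, and idle bandwidth is lent to whichever class can use it.

```python
# Simplified model of DCB/ETS-style bandwidth sharing on one 10G link.

def allocate(link_gbps, guarantees, demands):
    """guarantees/demands: dicts of traffic class -> Gbps.
    Returns the Gbps each class actually gets."""
    # Phase 1: every class gets min(demand, guarantee) -- the lossless floor.
    alloc = {c: min(demands.get(c, 0.0), g) for c, g in guarantees.items()}
    # Phase 2: spare bandwidth is lent to classes that still want more.
    spare = link_gbps - sum(alloc.values())
    for c in guarantees:
        take = min(demands.get(c, 0.0) - alloc[c], spare)
        alloc[c] += take
        spare -= take
    return alloc

# Storage guaranteed 4 Gbps; the data class is idle, so storage borrows beyond it:
print(allocate(10.0, {"fcoe": 4.0, "data": 6.0}, {"fcoe": 7.0, "data": 1.0}))
# -> {'fcoe': 7.0, 'data': 1.0}
```

Under full contention the guarantees hold exactly: with both classes demanding the whole link, FCoE still gets its 4 Gbps floor and data its 6.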

3.) Where do you break out the actual FC traffic to connect to the storage target (assuming said target isn't FCoE itself)? The design of the intervening sections of your network will vary based on where the FC itself is broken out, requirements for multi-hop, etc.

Overall the speed king at the moment is 10G FCoE. This may change with the introduction of 16G FC - and, again, when 40G FCoE shows up. There's often a big win in terms of cabling, manageability, etc for FCoE - one connection to one port on one switch (x2 for redundancy) vs a completely separate infrastructure for traditional FC. FCoE is also generally managed just as normal FC is (same WWN setup, targets, zones, masking, etc).

As to IOPS - as mentioned above, this will likely be driven far more by the type of storage in use than the link in question.