What are the best Cisco 6500 Ethernet modules for iSCSI?


I have a SAN environment built on FC (Fibre Channel), which is a standalone network separate from the rest of the DC Ethernet network, and I have only Cisco 6509s with Sup720s available for this project.

Most SAN traffic would be from VMware, with web server VMs connecting to their back-end DBs.

iSCSI traffic would be kept within a given module, since the SAN (HP 3PAR and HP EVA) best practice is to have isolated "switches" for the redundant paths. So I don't need to worry about traffic hitting the Sup (at least not yet; I guess that means a DFC is needed to keep forwarding local to the module) or about traffic needing to flow off the switch and over any uplinks.
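As a sanity check on the DFC point, these are the CLI commands I'd run on the 6509s (just a sketch; output omitted) to confirm which cards carry DFC daughtercards and what the switch fabric is doing:

```
show module              ! DFC-equipped line cards list a "Distributed
                         ! Forwarding Card" sub-module
show fabric utilization  ! per-slot fabric channel utilization
```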

My questions are as follows:

  1. Which modules (1Gb or 10Gb) are preferred to handle iSCSI if we migrate away from FC?
  2. Are Jumbo Frames an unquestionable requirement?
  3. Is it better for these modules to sit in the aggregation layer or the access layer, in case there's a future need to tie them together? (Our L2 access layer isn't directly interconnected; L2 adjacency between pairs of access switches traverses the agg switches.)
  4. Is iSCSI a better choice than FCoE?

The most important question is the first: which modules are preferred? Please give specific Cisco modules or model numbers in your answers.

Best Answer

Answering some of the generic questions:

Are Jumbo Frames an unquestionable requirement?

For FCoE it's somewhat irrelevant, as FCoE frames are handled differently; their default MTU is 2148 bytes (the full frame size). You can go larger, but, as with jumbo frames in general, there's minimal benefit. An FCoE-capable switch will just handle this.
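For iSCSI, if you do run jumbo frames, they have to match end to end (initiator, every switch in the path, array). As a minimal sketch, enabling them on a Catalyst 6500 running IOS looks something like this (the interface and MTU values are illustrative):

```
! Raise the chassis-wide jumbo MTU (applies to jumbo-capable line cards)
system jumbomtu 9216
!
! Raise the MTU on a port facing an iSCSI initiator or target
interface TenGigabitEthernet1/1
 mtu 9216
```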

Is it better for these modules to sit in the aggregation layer or the access layer, in case there's a future need to tie them together? (Our L2 access layer isn't directly interconnected; L2 adjacency between pairs of access switches traverses the agg switches.)

Access really is the main place to use FCoE, replacing HBAs and edge FC switches with CNAs and 10G "converged Ethernet" switches. Replacing distribution and core with FCoE can make sense when bandwidth needs justify it (or in a new build that has to run FC anyway).

This is probably your sticking point, actually: if you're not doing 10G to the machine, you're almost certainly going to be very underwhelmed by the performance you get.
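For concreteness: the 6500 itself isn't an FCoE platform, so access-layer FCoE in Cisco terms means something like a Nexus 5000-class switch in front of the CNAs. A rough sketch of the relevant NX-OS configuration (the VLAN/VSAN numbers and interface names are placeholders, not recommendations):

```
feature fcoe                 ! enable the FCoE feature set
!
vsan database
  vsan 100                   ! the FC fabric this converged port will join
!
vlan 100
  fcoe vsan 100              ! map an Ethernet VLAN to carry that VSAN
!
interface vfc1               ! virtual Fibre Channel interface for the CNA
  bind interface Ethernet1/1
  no shutdown
!
vsan database
  vsan 100 interface vfc1    ! place the vFC interface into the VSAN
```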

(Extra credit) Is FCoE a better choice than iSCSI?

Depends on what you're doing, but almost certainly not, and there may be better options than either.

At a previous job we had a nice design using NFS (the storage was NetApp's): OS volumes were stored on a shared volume between the VM hosts (as normal), but data lived on individual NFS volumes, shared or separate as appropriate for each host. It worked well in our environment because one group of admins managed ops from the app down to the hardware (well, the hardware's config; rack-and-stack, swaps, etc. were smart hands). Combined with NetApp's snapshots and SnapMirror, this made backups and restores trivial.