Building a cheap SAN

Tags: fibre-channel, storage-area-network

Note: Some of you may ask why I don't just buy a cheap hardware solution. It isn't so cheap in the country where this has to be installed, where import tax on high-end technology runs as high as 100%. It's much cheaper to bring in a couple of new components and reuse an existing server than to try to import a whole new server.

Note 2: Please excuse my mixing of FC and iSCSI; from an OS perspective I see them as the same thing, so either solution works.

I intend to build myself an "as cheap as possible" SAN with Fibre Channel. I'm less concerned about HA at this point, but may need to add HA down the road. The idea is that I need a decent iSCSI solution for VMware vSphere to run virtual machines from.

I've been running vSphere from local storage, and some applications (like SQL Server) just work better with native drives than running on VMFS.

My thought was that I could take a server I already have, drop in an FC HBA, and run software that presents the drives as iSCSI targets (there are several choices there). If I use standard SATA drives, or possibly SAS "near-line" drives, I can get a lot of storage for a lot less money, and by running the slower drives in RAID10 I can maybe get close to 15K SAS drives in RAID5 while still saving money.
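To put rough numbers behind that idea, here is a back-of-the-envelope sketch using common rules of thumb (roughly 80 random IOPS per 7.2K SATA drive, ~175 per 15K SAS drive, write penalties of 2 for RAID10 and 4 for RAID5); the drive counts and per-drive figures are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope random-write IOPS for two hypothetical arrays.
# Per-drive IOPS and RAID write penalties are rules of thumb, not measurements.

def array_write_iops(drives, iops_per_drive, write_penalty):
    """Usable random-write IOPS ~= total raw IOPS / RAID write penalty."""
    return drives * iops_per_drive / write_penalty

# Assumed figures: ~80 IOPS per 7.2K SATA drive, ~175 IOPS per 15K SAS drive.
sata_raid10 = array_write_iops(drives=8, iops_per_drive=80, write_penalty=2)   # RAID10
sas_raid5 = array_write_iops(drives=4, iops_per_drive=175, write_penalty=4)    # RAID5

print(f"8x 7.2K SATA in RAID10: ~{sata_raid10:.0f} random-write IOPS")
print(f"4x 15K SAS in RAID5:    ~{sas_raid5:.0f} random-write IOPS")
```

Under those assumptions, eight slow drives in RAID10 can indeed beat a small 15K RAID5 set on random writes, although sequential throughput and capacity overhead work out differently.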

However, I do have a few questions that I hope someone can answer, as I'm not an FC guru:

1) Do I need an FC switch if I'm just running between two servers? Can I do point-to-point with any two FC HBAs, or does the HBA have to support that mode explicitly?

2) If the HBAs have two ports, can I run two cables and get 2x performance seamlessly (teaming)? Or do I have to assign some LUNs to one port and some to another?

3) How much CPU load does handling SAN requests generate (obviously it depends on the OS, but in general)? Can I take a lower-spec server, put in a decent SATA controller with room for drives plus an FC HBA, and not worry much about CPU load (assuming the box does nothing else)?

4) Is this even a good idea?

5) Would GigE with NIC teaming make more sense? Could I put in a couple of 4- or 8-port E1000s and approach a dual-port 2Gb or 4Gb FC setup in terms of performance?

Best Answer

You should probably just use an FC SAN; if you have FC storage already, you can grab an FC switch and FC HBAs for the host servers, and that will be enough to get a simple FC SAN going.

You can use a server to act as an FC-iSCSI gateway; software like Starwind or SanMelody runs on Windows, or most Linux distros come with iSCSI target software.
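As a rough illustration of the Linux option, here is a minimal sketch that writes a one-LUN stanza in the format used by the tgt (scsi-target-utils) target; the IQN, backing device, and initiator subnet are made-up placeholders, and other targets (IET, LIO) use different configuration formats.

```python
# Minimal sketch: generate a one-LUN stanza in /etc/tgt/targets.conf format for
# the tgt iSCSI target. IQN, backing device, and initiator subnet are placeholders.

iqn = "iqn.2010-01.local.sanbox:vmware-lun0"   # made-up IQN
device = "/dev/sdb"                            # the RAID volume you want to export
subnet = "192.168.10.0/24"                     # assumed vSphere iSCSI subnet

stanza = (
    f"<target {iqn}>\n"
    f"    backing-store {device}\n"
    f"    initiator-address {subnet}\n"
    f"</target>\n"
)

# Write to a scratch file for review before merging into /etc/tgt/targets.conf.
with open("targets.conf.example", "w") as f:
    f.write(stanza)
print(stanza)
```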

CPU load for handling SAN I/O depends on whether it's IOPS or MB/s you're after. Higher IOPS means more CPU time spent processing each request; either way, you want as much cache and the fastest buses you can find.

GigE is in the same speed class as FC, but there's usually more overhead with GigE: iSCSI runs on TCP/IP, and that adds a lot of overhead. The Ethernet frame itself is very comparable to the FC frame, though, so technologies like ATAoE and HyperSCSI can keep up, with the caveat that a single 4Gbps FC link can carry one stream at that speed, whereas GigE would need four links and four connections to saturate the equivalent fabric (theoretically; real life is always a little worse).
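To put rough numbers on that comparison, here's a sketch of theoretical per-link throughput, assuming 8b/10b encoding on 4Gb FC and a ballpark ~12% TCP/iSCSI overhead on GigE (the overhead figure is an assumption, not a benchmark).

```python
# Rough theoretical throughput: a single 4Gb FC link vs GigE links carrying iSCSI.
# The ~12% TCP/IP + iSCSI overhead is an assumed ballpark, not a measured value.

fc_4g_mb_s = 4000 * (8 / 10) / 8        # 4 Gbit/s line rate, 8b/10b encoding -> ~400 MB/s
gige_raw_mb_s = 1000 / 8                # 1 Gbit/s -> 125 MB/s raw
gige_iscsi_mb_s = gige_raw_mb_s * 0.88  # after assumed protocol overhead

print(f"Single 4Gb FC link : ~{fc_4g_mb_s:.0f} MB/s, usable by a single stream")
print(f"Single GigE link   : ~{gige_iscsi_mb_s:.0f} MB/s after protocol overhead")
print(f"GigE links needed to match 4Gb FC: ~{fc_4g_mb_s / gige_iscsi_mb_s:.1f}")
```

Even that aggregate only appears with multiple streams or MPIO sessions spread across the links; a single stream is still capped at one GigE link's worth.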

You mention SATA and SAS drives... do you have an FC-to-SAS controller that you're planning to plug into an FC-iSCSI gateway server, which then relays to the host servers? That's a lot of layers of overhead and will pretty much kill performance.

If you want to use SAS hardware, you can build a SAS SAN even more cheaply (no FC or iSCSI at all): the drives plug into the enclosure, the servers get SAS HBAs, and the enclosure carves the drives into LUNs. Products like the HP MSA2000sa will do this. It's more limited than FC or iSCSI, but for simple environments it's very cost effective.
