Switch – Virtualization – Ten 1Gbps links or one 10Gbps link? (Performance)

Tags: networking, performance, performance-tuning, switch

I have a machine running five VMs, with three physical network cards (each with two ports), for a total of six 1 Gbps Ethernet ports.

I have an SFP-capable switch with 48 Gbps of total switching capacity and a 10 Gbps SFP link. The server also has one SFP port (10 Gbps).

I'm curious what the best setup would be performance-wise (getting the most out of every bit with the least CPU usage), and why.

Would it be better to connect all the VMs through the single SFP port to the SFP port on the switch, or should I get five Ethernet cables and connect them to five ports on the switch?

If it's still a bit unclear, imagine this scenario:

Two PCs on the switch each want to download a large file: PC A from VM A, and PC B from VM B. If the VMs are connected over separate Ethernet ports, each one has its own link, so traffic from VM A is switched to PC A while traffic from VM B is simultaneously switched to PC B. Is that right? And if you connect both VMs through the SFP link instead, the single SFP port has to carry the traffic for both PC A and PC B.

So which scenario would perform the best at maximum load? Why?
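For rough intuition, here is a minimal sketch (in Python, with made-up numbers) of per-flow throughput in each topology. It assumes every client PC has a single 1 Gbps port and that concurrent flows split the server's uplink evenly; `flow_rate` is a hypothetical helper for illustration, not a real benchmark:

```python
# Hypothetical throughput model for the two topologies at maximum load.
# Assumes each client PC has a single 1 Gbps port and that concurrent
# flows share the server's uplink fairly.

def flow_rate(server_uplink_gbps: float, concurrent_flows: int,
              client_port_gbps: float = 1.0) -> float:
    """Per-flow rate: limited by the client's own port speed and by an
    equal share of the server's uplink."""
    fair_share = server_uplink_gbps / concurrent_flows
    return min(client_port_gbps, fair_share)

# Scenario 1: both VMs share the single 10 Gbps SFP uplink.
print(flow_rate(10.0, concurrent_flows=2))   # 1.0 -> each PC saturates its own port

# Scenario 2: each VM is pinned to its own 1 Gbps copper port,
# serving one client each.
print(flow_rate(1.0, concurrent_flows=1))    # 1.0 -> same result for two flows...

# ...but with six simultaneous clients, the 10 Gbps uplink still gives
# every flow its full 1 Gbps, while a single 1 Gbps port per VM caps
# all of that VM's clients at a combined 1 Gbps.
print(flow_rate(10.0, concurrent_flows=6))   # 1.0
print(flow_rate(1.0, concurrent_flows=6))    # ~0.167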

Edit:
I wanted to keep this fairly generic so it could apply to a broader scenario, but details about the setup have been asked for, so here they are:

Server: PowerEdge T620
SFP card: PEX10000SFP 10 Gigabit
NICs: 3x NetXtreme BCM5720
OS: XenServer 6.2
CPU: Xeon E5-2609
Switch: T1600G-28TS
Guest OSes: Debian Wheezy (PV)

Best Answer

1 x 10 Gbps link for performance.

Otherwise, if a single server needs 1.1 Gbps to another server, it can't get it: most load-balancing schemes hash on destination MAC or IP (which would be the same for every flow between that pair of hosts), so all of that traffic is pinned to one 1 Gbps member link.

A single 10 Gbps link also eliminates the problem of some links being busier than others, for the same reason: if the hash maps several flows to the same member link, they all end up on that link, except in special dynamic load-balancing configurations such as VMware's.
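To make that concrete, here is a minimal sketch of a layer-2 transmit hash of the kind most static link-aggregation policies use (balance-xor style). The MAC addresses and link count are made up for illustration, and real implementations vary in which bytes they hash:

```python
# Sketch of a balance-xor style layer-2 hash: XOR the MAC addresses,
# then take the result modulo the number of member links. Addresses
# and link count below are hypothetical.

def pick_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Map a (src, dst) MAC pair to a member-link index."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links

server_a = "00:1b:21:aa:bb:01"
server_b = "00:1b:21:aa:bb:02"

# Every flow between server_a and server_b hashes to the same index,
# no matter how many parallel TCP connections they open, so their
# combined throughput is capped at one 1 Gbps member link.
print(pick_link(server_a, server_b, num_links=6))  # always the same index

# A different peer may hash to a different link - or collide with an
# already busy one; the policy has no notion of per-link load.
print(pick_link(server_a, "00:1b:21:cc:dd:03", num_links=6))
```

The hash is deterministic per host pair, which is exactly why adding more 1 Gbps links to a bond doesn't help a single heavy server-to-server transfer, while one 10 Gbps link does.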