I think it would depend on which mode you picked (switch dependent or switch independent) and on the direction of your traffic (client->server vs. server->client). In switch-independent mode the server only has one port for inbound traffic, while all ports can do outbound. So it depends on how your test is running: is the traffic going from the server or to the server?
First question:
What are the measured metrics used to determine your utilization (i.e., what counts as "high" and "clogged")? I'm assuming this was measured on the server only, as you can't pull stats from unmanaged switches.
Second:
- I would stay away from multiple nics on different subnets on the server.
- I would stay far away from the "introduce some small switches" option
- I would definitely upgrade the switches, and I would also recommend against Netgear. Most unmanaged switches, and the Netgear "enterprise" switches, have limited buffers and still suffer from performance issues.
- Most modern NICs in the server space can do some form of network teaming/bonding through the driver's software suite, and almost all of them work fine in Windows. BACS (Broadcom Advanced Control Suite) and Intel ANS (Advanced Network Services) are two of the more common ones.
edit to reply to your comment below as my reply was too long for a comment:
My question was trying to determine why you assume your network utilization is "already high" and "clogged". If your real problem is a design flaw (network loops, etc), a configuration issue (speed/duplex mismatches, etc), or an issue with the switches (buffer overruns/dropped packets/etc) then providing more bandwidth to the server won't help you. With your current hardware, it's hard to narrow down specifically where your issue is.
Where did you run your trace from? A Wireshark trace that you say shows nothing unusual isn't proof that your network traffic is already high or getting "clogged". And not to be too harsh, but do you know what you are looking for in the trace? Based on your question and your current line of thinking, I can't tell what your knowledge level is for troubleshooting problems in a network trace.
Running perfmon counters on the server NIC would give you a better picture of the utilization on the server NIC and would be only the first indication that more bandwidth to the server might be helpful. But you haven't said if you have run those counters or not.
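As a sketch of what I mean, you could sample the NIC throughput counters from PowerShell (the counter path below is the standard Network Interface set; adjust the instance wildcard to your adapter name):

```powershell
# Sample total bytes/sec on all network interfaces every 2 seconds, 30 samples.
# Compare the peak against the NIC's line rate (e.g. ~125 MB/s for 1 GbE)
# to see whether the server NIC is actually saturated.
Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' `
    -SampleInterval 2 -MaxSamples 30
```

If the sustained figure is nowhere near line rate, the bottleneck is probably elsewhere (switches, disks, protocol) and adding bandwidth to the server won't fix it.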
Lastly, most driver software can do some form of network teaming with just about any switch (including unmanaged switches). Generally, you are limited to straight failover or transmit load balancing only. Failover is just as it sounds: only one NIC is used until a failure is detected, then it fails over to the other NIC. Transmit load balancing will only load balance outbound traffic from the system; incoming traffic is still generally limited to a single NIC. I believe Broadcom can do SLB (Smart Load Balancing), where it attempts limited receive load balancing through gratuitous ARPs, but I've never used it much. Full LACP aggregation will require the switch to support it. There's more to it than all this, but this isn't a question on NIC teaming types and support.
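For reference, on Windows Server 2012 and later you don't even need the vendor suites; the built-in LBFO teaming covers the modes above. A minimal sketch (the adapter names "NIC1"/"NIC2" are placeholders, check yours with `Get-NetAdapter`):

```powershell
# Switch-independent team: works with unmanaged switches,
# outbound load balancing only, inbound lands on one NIC.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# LACP mode instead requires a managed switch with a matching
# LACP port-channel configured:
#   -TeamingMode LACP
```

Either way, remember that a single TCP flow still rides one physical link.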
Best Answer
LACP does not necessarily increase bandwidth! If there's only one TCP connection you'll get fault tolerance but no performance boost. See a good write-up here:
http://www.hp.com/rnd/library/pdf/59692372.pdf
In your case you should configure SMB Multichannel; the SMB redirector will then push data over multiple independent "pipes" (TCP connections) on a round-robin basis. That gives you both fault tolerance and a bandwidth increase. See here:
https://technet.microsoft.com/en-us/library/dn610980.aspx
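To check whether Multichannel is actually in play on your boxes, a quick sketch using the built-in SMB cmdlets (Windows 8 / Server 2012 and later; run against an active SMB transfer):

```powershell
# Is Multichannel enabled on the client? (it is by default)
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# Which NICs does the SMB client consider usable for Multichannel?
Get-SmbClientNetworkInterface

# While a file copy is running, show the connections actually opened -
# you should see multiple entries if Multichannel kicked in.
Get-SmbMultichannelConnection
```

If `Get-SmbMultichannelConnection` only ever shows one connection, check that both NICs are up, on routable subnets, and that the server side also has Multichannel enabled.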