Switch Bandwidth – Managing Bandwidth on a Switch

Tags: bandwidth, management, switch

How is bandwidth managed in an unmanaged network switch?

Suppose that port 1 is connected to a very busy server. If the machines connected to ports 2, 3 and 4 suddenly decide to send a massive amount of traffic (in an extreme scenario: at their maximum theoretical throughput) to the server on port 1, how will the switch handle this?

Will the switch perform any sort of flow control, ensuring that each sending port gets a fair share of the available bandwidth to port 1, or will frames simply be handled on a "first come, first served" basis (dropping packets as necessary)?

Best Answer

In your scenario, most Ethernet frames will be dropped, and it is up to the upper-layer protocols, e.g. TCP, to handle that. There is a rudimentary Ethernet flow control mechanism (802.3x pause frames), but it is poorly supported. Switches have only tiny buffers, so a sustained 3:1 bandwidth over-subscription means that, once those buffers fill, roughly two out of every three frames destined for port 1 will be dropped.
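
To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The port speeds and sender count are illustrative assumptions, not a model of any particular switch; it just shows why sustained 3:1 over-subscription works out to roughly a two-thirds drop rate once buffering is exhausted.

    # Back-of-the-envelope estimate of the steady-state drop rate when
    # several senders oversubscribe a single egress port.
    def estimated_drop_rate(num_senders: int, sender_rate_gbps: float,
                            egress_rate_gbps: float) -> float:
        """Fraction of offered traffic that cannot fit through the egress port."""
        offered = num_senders * sender_rate_gbps
        if offered <= egress_rate_gbps:
            return 0.0  # no over-subscription: nothing dropped (ignoring bursts)
        return (offered - egress_rate_gbps) / offered

    # Three 1 Gbps senders all blasting at one 1 Gbps port: 3:1 over-subscription,
    # so roughly two thirds of the offered frames are dropped once buffers fill.
    print(estimated_drop_rate(3, 1.0, 1.0))  # ~0.667

In practice the switch buffers absorb short bursts, so this figure only applies to sustained overload.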

Ethernet (layer-2 frames) and IP (layer-3 packets) play no real part in this. TCP (layer-4 segments) guarantees delivery by requesting that dropped segments be resent and by shrinking its window. UDP, and other connectionless protocols, rely on the application itself to detect lost data and request that it be resent.
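
As an illustration of that last point, here is a minimal sketch of application-level retransmission over UDP. The server address, payload and timeout are assumptions made up for the example, and a real protocol would also need sequence numbers to detect duplicate replies; the point is simply that, unlike TCP, the application has to notice the loss and resend on its own.

    import socket

    SERVER = ("127.0.0.1", 9999)   # assumed server address, for illustration only
    MAX_RETRIES = 5

    def send_with_retry(payload: bytes) -> bytes:
        """Send a datagram and resend it until a reply arrives or retries run out."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(1.0)  # treat a missing reply within 1 s as a lost datagram
        try:
            for attempt in range(MAX_RETRIES):
                sock.sendto(payload, SERVER)
                try:
                    reply, _ = sock.recvfrom(4096)
                    return reply            # got an answer, done
                except socket.timeout:
                    continue                # assume the datagram was dropped; resend
            raise TimeoutError("no reply after %d attempts" % MAX_RETRIES)
        finally:
            sock.close()

    # Example use (assumes some echo-style service is listening on SERVER):
    # print(send_with_retry(b"GET /status"))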
