Ethernet Collisions – How Ethernet Collisions Occur Despite Separate Tx and Rx Circuits

autonegotiation, ethernet, ieee-802.3x, layer1, utp

I am trying to understand how a collision occurs in Ethernet, especially when a duplex mismatch exists or when two nodes on a legacy Ethernet network transmit simultaneously.

Everyone explains collisions at a high level (two frames collide when one is being sent while another is being received). However, the diagram below shows that there are separate circuits for Rx and Tx. How can a collision happen when there are dedicated circuits for sending and receiving frames?

Different circuits are used for transmission and reception

EDIT: Maybe the label "Hub MDI-X" causes some confusion regarding the point of my question. I am not asking how the functionality of a hub can cause a collision. My focus is on the communication between two nodes with either MDI or MDI-X interfaces (hubs and switches have MDI-X interfaces). In either case, how can a collision happen between two nodes with a duplex mismatch, when Rx and Tx still have their dedicated circuits even under the mismatch?

Best Answer

To understand this you need to understand the historical context.

Originally Ethernet used a shared coaxial cable. Only one device could successfully transmit on this at a time. If two devices transmitted at the same time it was considered a collision.
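
To make the contention rule concrete, here is a minimal Python sketch of the shared-medium behaviour; the station names and the backoff helper are illustrative simplifications, not the actual 802.3 state machine:

```python
import random

# A minimal sketch of contention on a shared medium; station names and
# the backoff helper are illustrative, not the real 802.3 state machine.

def backoff_slots(attempt):
    """Truncated binary exponential backoff: wait 0..2^k - 1 slot times."""
    k = min(attempt, 10)
    return random.randrange(2 ** k)

def contention_round(transmitters):
    """More than one station driving the cable at once is a collision:
    every transmitter sees the overlapping, corrupted signal."""
    return "collision" if len(transmitters) > 1 else "success"

print(contention_round({"A", "B"}))  # two simultaneous senders -> collision
print(contention_round({"A"}))       # single sender -> success
print(backoff_slots(1))              # a loser retries after 0 or 1 slots
```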

Then repeaters came along to extend the distance and increase the number of nodes. A repeater would detect which port was transmitting and repeat that signal out on the other ports. To keep collision detection working, repeaters also had to ensure that all nodes detected a collision. The first repeaters had only two ports, but later ones could have many, and these became known as hubs, especially when used with twisted-pair wiring. Repeaters were pretty dumb devices: they regenerated the electrical signals but did little more.
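
A toy model of that repeater behaviour might look like the following; the port names and return values are invented for illustration:

```python
# Toy model of a repeater hub: it regenerates whatever arrives on one
# port out of every other port, and when two ports carry signal at once
# it jams all ports so every attached node detects the collision.

def repeat(active_ports, all_ports):
    if len(active_ports) > 1:
        return {p: "JAM" for p in all_ports}          # enforce collision
    if len(active_ports) == 1:
        src = next(iter(active_ports))
        return {p: "repeat of " + src for p in all_ports if p != src}
    return {}                                          # idle network

ports = {"p1", "p2", "p3", "p4"}
print(repeat({"p1"}, ports))        # normal: p1's signal goes to the rest
print(repeat({"p1", "p3"}, ports))  # two talkers: everyone gets JAM
```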

Then 10BASE-T came along, which, as you have noticed, has dedicated data channels for each direction. Nevertheless it still needed to fit into the existing model, so by default it operated in a "half-duplex" mode that emulated a coaxial cable: a transceiver declared a collision whenever it sensed incoming signal on its Rx pair while it was transmitting on its Tx pair. The signals did not in fact collide on the wire, but the transceivers acted as if they did, and the repeaters took the same steps as before to ensure the collision was seen across the network.
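
That emulation is the direct answer to the question, and it fits in a few lines. The class and attribute names below are hypothetical stand-ins for what is really dedicated PHY hardware:

```python
# Hedged sketch of the half-duplex trick on twisted pair; the class and
# attribute names are invented, the real work happens in PHY hardware.

class HalfDuplexPhy:
    def __init__(self):
        self.tx_active = False   # we are currently driving the Tx pair
        self.rx_energy = False   # squelch circuit senses signal on Rx

    def collision_detected(self):
        # Nothing collides electrically on the wire. The transceiver
        # simply declares a collision whenever it sees incoming signal
        # on its Rx pair while it is itself transmitting on Tx, which
        # reproduces the shared-coax behaviour exactly.
        return self.tx_active and self.rx_energy

phy = HalfDuplexPhy()
phy.tx_active = True
phy.rx_energy = True             # far end started talking at the same time
print(phy.collision_detected())  # -> True
```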

Twisted-pair Ethernet can also support a "full-duplex" mode. In this mode all of the collision-related hardware is disabled and both ends can transmit at any time. However, this mode brought a couple of major downsides:

  • It was incompatible with repeater hubs. Without the collision-detection mechanisms, a hub would have no way of handling two devices transmitting at the same time.
  • Both ends of a link have to be set up for the same duplex mode. If they are not, the half-duplex end keeps detecting (late) collisions and aborting frames while the full-duplex end logs FCS errors, and throughput collapses: this is the duplex mismatch, as sketched below.
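
A rough simulation shows why a mismatch hurts so badly; the traffic probabilities and labels are invented for illustration:

```python
import random

# Sketch of a duplex mismatch. The half-duplex end aborts its frame
# with a (late) collision whenever the full-duplex end happens to be
# sending at the same time; the full-duplex end, with collision logic
# disabled, just receives the truncated frame as an FCS error.

random.seed(0)

def mismatch_round(load=0.5):
    half_tx = random.random() < load   # half-duplex end is sending
    full_tx = random.random() < load   # full-duplex end is sending
    return "damaged" if (half_tx and full_tx) else "ok"

rounds = [mismatch_round() for _ in range(10_000)]
print(f"{rounds.count('damaged') / len(rounds):.0%} of rounds damaged")
# With both ends busy half the time, roughly a quarter of frames suffer.
```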

These issues meant that in practice 10BASE-T systems nearly always operated in half-duplex mode.

For 100BASE-TX the situation improved dramatically. Ethernet switches (technically, fast multi-port bridges) came down in price to the point that dumb repeater hubs could be eliminated. Auto-negotiation allowed network cards to establish full-duplex connections without error-prone manual configuration. If you connect two 100BASE-TX NICs with a crossover cable, or connect a 100BASE-TX NIC to a switch, and don't manually override anything, they will almost certainly negotiate full-duplex mode.
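
The resolution step of auto-negotiation is easy to sketch; the priority list below is trimmed to the common BASE-T modes and the function is an illustrative simplification of the 802.3 rules:

```python
# Sketch of auto-negotiation's priority resolution. Both ends advertise
# their abilities and each independently selects the highest-priority
# mode they have in common; the ordering follows the 802.3 rule of thumb
# (higher speed first, full duplex before half duplex at each speed).

PRIORITY = [
    "1000BASE-T FD", "1000BASE-T HD",
    "100BASE-TX FD", "100BASE-TX HD",
    "10BASE-T FD", "10BASE-T HD",
]

def resolve(local, peer):
    for mode in PRIORITY:                  # highest common mode wins
        if mode in local and mode in peer:
            return mode
    return None                            # nothing in common: no link

nic    = {"100BASE-TX FD", "100BASE-TX HD", "10BASE-T FD", "10BASE-T HD"}
switch = {"1000BASE-T FD", "1000BASE-T HD", "100BASE-TX FD", "100BASE-TX HD"}
print(resolve(nic, switch))                # -> 100BASE-TX FD
```

This also hints at the classic failure mode: if one end is manually forced to full duplex and stops negotiating, the auto-negotiating end can only parallel-detect the speed and must fall back to half duplex, recreating exactly the mismatch described above.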

1000BASE-T theoretically has a half-duplex mode, which some NICs claim to support, and there was a specification for gigabit multiport repeaters, but I have never seen any evidence that anyone ever sold one. In practice a gigabit link will almost certainly be running in full-duplex mode.

Faster speeds abandoned the half-duplex mode entirely.