Assume you have the following situation...
- PC with the NIC hard-coded to 100Mbps, Full-duplex
- RJ45 cable, pinned EIA-568B (not that the colors in the pinout matter)
- Cisco Catalyst Switch, using autonegotiation (at 100Mbps)
Since the PC's NIC is locked at 100/full, autonegotiation from the Cisco's Ethernet port fails, and the switch falls back to 100/half. Now there is a duplex mismatch on the link, and the half-duplex Cisco port uses CSMA/CD to access it.
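The fallback above can be sketched as follows. `resolve_link` is a hypothetical helper, not actual Cisco IOS logic, but the rule it encodes is the standard one: a port that receives no autonegotiation pulses can sense the partner's speed from the line signaling ("parallel detection") yet cannot learn the duplex, so it must default to half duplex.

```python
def resolve_link(partner_autonegotiates: bool, detected_speed: int):
    """Sketch of duplex resolution on an autonegotiating port
    (illustrative helper, not vendor code).

    If the partner autonegotiates, duplex is agreed normally.
    If the partner is hard-coded, the port falls back to parallel
    detection: speed is observable on the wire, duplex is not,
    so half duplex is the mandatory default."""
    if partner_autonegotiates:
        return (detected_speed, "full")  # negotiated normally
    return (detected_speed, "half")      # parallel-detection fallback

# PC hard-coded to 100/full -> it sends no autonegotiation pulses:
print(resolve_link(False, 100))  # -> (100, 'half')  duplex mismatch
```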
Let's assume the PC and the Cisco both transmit at exactly the same instant. The logical diagram and the physical layer diagram show the same behavior from two different perspectives, but the physical layer diagram is the most relevant to your question.
LOGICAL DIAGRAM
===============

             Tx                       Tx
 100/full   ----->                  <-----   100/half
PC ----------------------------------- Cisco Catalyst Switch
PHYSICAL LAYER PIN DIAGRAM
==========================

PC                                      Cisco Catalyst Switch
100/full                                100/half

   Tx D1
  ----->
568B                                    568B
Pin  Signal                             Pin  Signal
1    TX+ D1 --------------------------- 3    RX+ D2
2    TX- D1 --------------------------- 6    RX- D2
3    RX+ D2 --------------------------- 1    TX+ D1
6    RX- D2 --------------------------- 2    TX- D1
                                         <------
                                          Tx D1
In the diagrams above, the PC (full duplex) is on the left and the Cisco switch (half duplex) is on the right. Both sides transmit (Tx) simultaneously on pins 1 and 2; this pair of pins is called D1.
When the NIC on the switch receives the PC's frame on the D2 pair while the switch is simultaneously transmitting on the D1 pair, the switch registers a collision. The collision is registered only on the switch, because only the switch is in half-duplex mode.
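The asymmetry can be reduced to one condition. This is an illustrative sketch, not NIC firmware: a half-duplex (CSMA/CD) port treats simultaneous transmit and receive as a collision, while a full-duplex port uses both pairs at once and never flags one.

```python
def registers_collision(duplex: str, transmitting: bool, receiving: bool) -> bool:
    """Whether a port flags a collision (illustrative sketch).

    Only a half-duplex port interprets 'transmitting while
    receiving' as a collision; a full-duplex port expects it."""
    return duplex == "half" and transmitting and receiving

# Both ends transmit at the same instant over the mismatched link:
print(registers_collision("full", True, True))  # PC     -> False
print(registers_collision("half", True, True))  # switch -> True
```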
Notes about GigabitEthernet:
- Half-duplex is called out in the standard (see Note 1); however, nobody actually uses half-duplex GigE, which means GE won't use CSMA/CD in practice.
- GE uses all 8 pins in the RJ45 modular plug, and the specific TX / RX pin roles are allocated dynamically.
End Notes:
Note 1: Quoting IEEE 802.3-2012, Clause 4.1 (italic emphasis mine):
4.1.2.1.2 Reception without contention
In half duplex mode, at an operating speed of 1000 Mb/s, frames may be extended by the transmitting station under the conditions described in 4.2.3.4. The extension is discarded by the MAC sublayer of the receiving station, as defined in the procedural model in 4.2.9.
You're missing a historical perspective. WAN technologies such as frame relay and ATM were created to use existing telecommunications circuitry, at a time when everything was based on (and needed to be compatible with) telephony technology. While these technologies are significantly slower than Ethernet, they provided data prioritization at a time when QoS was immature. Also, at the time, Ethernet was usually limited to 10 Mb/s.
As speeds and technologies increased, long range Ethernet became possible and gradually replaced other WAN technologies.
ATM and frame relay are essentially obsolete technologies, although they are still used in some parts of the world that have been slower to upgrade to the latest speeds.
Best Answer
Ethernet started out as a protocol for coaxial shared medium networks. In the event of a collision the voltage levels on the cable would be wrong and the transceivers would detect this and tell the MACs. Any data received at this point would be garbage.
The receiving MAC would ignore any data coming in and possibly increment a collision counter; it has no reason to pass the garbage coming in from the PHY up to the host computer. The sending MACs would also see the collision. They would keep transmitting for the minimum frame time to ensure the collision was seen throughout the network, then wait for a random backoff before trying to send the frame again. Hopefully, the random backoff times chosen by the two sending MACs would be sufficiently different that one of them would successfully send its frame while the other would detect the line as busy and wait its turn.
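The random wait described above is Ethernet's truncated binary exponential backoff. A minimal sketch, with illustrative names (the 16-attempt limit and the cap of 10 doublings are the values in IEEE 802.3; one slot time is 512 bit times for 10/100 Mb/s Ethernet):

```python
import random

SLOT_TIME_BITS = 512  # one slot time = 512 bit times at 10/100 Mb/s

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential backoff (illustrative sketch).

    After the n-th consecutive collision, the MAC waits a random
    number of slot times drawn from [0, 2**k - 1] with k = min(n, 10);
    after 16 failed attempts the frame is dropped."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)
    return random.randrange(2 ** k)

# After the first collision each sender picks 0 or 1 slot times; the
# window doubles each round, so the odds of picking different waits
# (letting one sender win the line) improve quickly.
print([backoff_slots(n) for n in range(1, 5)])
```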
Now enter repeaters (aka hubs). A repeater has a number of PHYs but no MACs. When a repeater detects a collision, it outputs a "jam signal", which ensures the collision is seen across the whole network.
So what about twisted pair? All common variants of twisted-pair Ethernet are full duplex at the physical layer, but for backwards compatibility, and to support the use of hubs, most of them have a "half duplex" mode. In half-duplex mode, simultaneous activation of the transmitter and receiver is treated as a collision.
Putting that all together, let's imagine you had a device plugged into one of the spare ports on your hub that could capture and analyse low-level Ethernet signals. What would it see?
You would see the start of a frame as normal. Then, when the collision happened, you would see a jam signal from the hub. Once the jam signal was over, you would see a period of idle line until one of the senders' random back-off timers expired and a retransmission began.