What does "the waiting NIC" refer to? Is it the NIC mentioned in the
previous sentence, or another NIC that was involved in the collision?
That refers to any NIC waiting to send a frame, especially one involved in a collision.
The NIC is "requested to send" by the upper OSI layer (in other words,
by the node that it is attached to) ?
The NIC is at OSI layer 1. OSI layer 2 for Ethernet is the MAC layer, layer 3 would be IP, and layer 4 is the transport protocol, e.g. TCP or UDP. Above layer 4 are the application layers (off-topic here). An application starts sending data to the transport layer, which sends it through the network layer to the MAC layer, which in turn hands it to the NIC.
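As a minimal sketch of that flow (the header strings are placeholders, not real protocol formats):

```python
# Toy sketch of layer encapsulation - header contents are placeholders,
# not real protocol formats:
app_data  = b"GET / HTTP/1.1\r\n\r\n"      # application data (above layer 4)
tcp_seg   = b"<TCP header>" + app_data     # layer 4: transport wraps the data
ip_packet = b"<IP header>"  + tcp_seg      # layer 3: network wraps the segment
frame     = b"<MAC header>" + ip_packet    # layer 2: MAC builds the frame
# Layer 1: the NIC serializes the finished frame as bits on the wire.
```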
Why does the exponential back-off algorithm use multiples of the slot
time?
That makes sure that any nonzero back-off is never less than the slot time (the worst-case round-trip time across the collision domain), and whole multiples of the slot time are quick and easy to calculate.
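As a rough sketch of the truncated binary exponential back-off that IEEE 802.3 describes (the constants match the standard; the function itself is illustrative, not a reference implementation):

```python
import random

SLOT_TIME_BITS = 512  # slot time for 10/100 Mbit/s Ethernet, in bit times

def backoff_slots(collisions: int) -> int:
    """Truncated binary exponential back-off, sketched from IEEE 802.3.

    After the n-th consecutive collision, pick a random whole number of
    slot times k with 0 <= k < 2**min(n, 10); give up after 16 attempts.
    """
    if collisions >= 16:
        raise RuntimeError("excessive collisions: the frame is discarded")
    exponent = min(collisions, 10)       # window is capped at 1023 slots
    return random.randrange(2 ** exponent)

# Example: the wait after the 3rd collision is 0..7 whole slot times
delay_bit_times = backoff_slots(3) * SLOT_TIME_BITS
```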
I understand that the slot time is computed such that a node is still
transmitting when the signal announcing the collision (jamming signal)
arrives at it. But what does this have to do with how much time nodes
should wait after a collision occurs?
That makes sure that all stations have heard the jamming signal and that no station transmits while the signal is still traversing the link. A host immediately next to the jamming host hears the signal before the other hosts on the link, and it stops hearing it while the signal is still traveling toward the far end.
Why should there be a maximum jamming time?
Jamming when it is no longer needed slows collision recovery and therefore unnecessarily reduces throughput on the link.
A sender should be transmitting for at least 2*PT, where PT is the end-to-end propagation delay.
The worst case is between senders located at opposite ends of a collision domain.
Imagine sender 1 transmitting at one end. Its signal propagates to the far end, where sender 2 has just started transmitting as well. Sender 2 detects the collision, aborts its transmission, and generates a jam signal.
Now the jam signal needs to propagate all the way back to sender 1, which must still be sending in order to detect the collision. If it had already finished sending the frame, it would regard the transmission as successful - the collided frame wouldn't be resent, it would simply be lost.
So, the minimum frame size (in bits) must be at least twice the maximum propagation time multiplied by the link speed.
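Plugging in the numbers for classic 10 Mbit/s Ethernet shows where the familiar 64-byte minimum comes from (the one-way delay below is an illustrative worst-case budget, half of the 51.2 µs slot time):

```python
LINK_SPEED_BPS = 10_000_000   # classic 10 Mbit/s Ethernet
ONE_WAY_DELAY  = 25.6e-6      # assumed worst-case end-to-end delay PT, in s

# The sender must still be transmitting when the collision comes back,
# so the frame has to last at least one round trip, 2 * PT:
min_frame_bits = 2 * ONE_WAY_DELAY * LINK_SPEED_BPS
print(min_frame_bits)         # 512.0 bits = 64 bytes, the Ethernet minimum
```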
Best Answer
Due to collision detection requirements, the collision domain for Fast Ethernet was significantly reduced: only two Class II repeaters (96 bit times of delay or less) or a single Class I repeater are allowed between any two nodes. Also, all Fast Ethernet variants are of the link-segment type (using separate transmit and receive paths over twisted pair or fiber), allowing for faster collision detection.
For Gigabit Ethernet, reducing the collision domain according to the single-repeater rule is not enough, so frames need to be extended to the full slot time (4096 bit times) as a minimum - this is carrier extension. Alternatively, multiple frames can be sent back-to-back without releasing the carrier (frame bursting).
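A back-of-the-envelope calculation shows why the extension is needed; the propagation speed is an assumed typical value for copper or fiber:

```python
SLOT_TIME_BITS = 4096           # Gigabit Ethernet slot time
RATE_BPS       = 1_000_000_000  # 1 Gbit/s
PROP_SPEED     = 2e8            # assumed signal speed in the medium, m/s

slot_time_s  = SLOT_TIME_BITS / RATE_BPS    # 4.096 microseconds
round_trip_m = slot_time_s * PROP_SPEED     # cable covered in one slot time
print(round_trip_m / 2)  # ~409.6 m diameter budget, before repeater delays
```

Without extension, an unmodified 64-byte (512-bit) frame lasts only 0.512 µs at gigabit speed, leaving a diameter budget of roughly 50 m - impractically small for a shared segment.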
While half-duplex operation was defined for Gigabit Ethernet (with a single repeater), Gigabit repeaters and hubs never actually came to market - switches had become so cheap that the restrictions of half-duplex operation no longer made sense.
To understand the underlying mechanisms, it might help to look at an earlier question and read the IEEE 802.3 specification, especially Clauses 2, 4, 13, 29, and 42.