Delays and throughput

Tags: bandwidth, throughput

I have the following questions regarding delays and throughput:

  1. Is the transmission rate the same as bandwidth?
  2. What is throughput, and while calculating throughput why do we
    consider transmission delay? Shouldn't we also consider the
    propagation delay?

I know the questions might sound silly, but please help.

Best Answer

Is the transmission rate the same as bandwidth?

In essence, yes, with some small adjustments for the difference between the clocking of the media ("bandwidth") and the actual rate at which packets are accepted onto the media (i.e., minus minor overheads such as Ethernet headers, SDH segmentation, forward-error correction, and tunnelling like VLANs, MPLS and GRE).

We can do "traffic shaping" in the router so that the packet transmission rate presented to the media is substantially less than the available transmission rate of the media. This is how you can order a 200 Mbps service over a gigabit Ethernet fibre.
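As a rough illustration (not any particular vendor's implementation), a shaper can be modelled as a token bucket: tokens accrue at the contracted rate, and a packet is released onto the media only when the bucket holds enough tokens for it. The rates and sizes below are made-up examples.

```python
# Minimal token-bucket shaper sketch: tokens accrue at `rate_bps`,
# and a packet may be sent only when the bucket holds enough tokens.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps      # contracted (shaped) rate, bits/sec
        self.burst = burst_bits   # maximum bucket depth, bits
        self.tokens = burst_bits  # start with a full bucket
        self.last = 0.0           # time of the last refill

    def allow(self, packet_bits, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True           # packet conforms: transmit now
        return False              # packet exceeds the shape: queue/delay it

# Shape a gigabit interface down to a 200 Mbps service (illustrative numbers).
shaper = TokenBucket(rate_bps=200e6, burst_bits=1_500_000)
print(shaper.allow(12_000, now=0.0))   # a 1500-byte packet -> True
```

A real shaper would queue the non-conforming packet rather than drop it; this sketch only shows the conform/exceed decision.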

What is throughput

Throughput is the data rate actually seen by the application.

Often we aren't so much interested in the throughput of a single link as in the "end-to-end throughput" between two applications communicating over a path of multiple links and routers. End-to-end throughput can be limited by:

  - the lowest transmission rate of the links on the path (the "bottleneck" link);
  - the path's error rate, packet re-ordering, latency and jitter, all of which affect the TCP congestion-control algorithm;
  - the path MTU and packet re-ordering, which affect operating-system efficiency;
  - the choice of TCP congestion-control algorithm used.
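As a first-order sketch (ignoring losses and protocol overheads), end-to-end throughput cannot exceed the slowest link on the path, and with TCP it is also capped by window size divided by round-trip time. The link rates, window size and RTT below are illustrative numbers, not measurements.

```python
# End-to-end throughput is bounded by the slowest ("bottleneck") link.
link_rates_bps = [1e9, 100e6, 10e9]   # illustrative rates along the path
bottleneck_bps = min(link_rates_bps)
print(f"Path throughput ceiling: {bottleneck_bps / 1e6:.0f} Mbps")

# With TCP, throughput is also capped by window size / round-trip time,
# which is why latency matters even on fast links.
window_bits = 64 * 1024 * 8           # classic 64 KB receive window
rtt_s = 0.1                           # 100 ms round trip, illustrative
tcp_cap_bps = window_bits / rtt_s
print(min(bottleneck_bps, tcp_cap_bps))  # here the TCP window is the limit
```

On this made-up path, the ~5 Mbps TCP-window cap binds long before the 100 Mbps bottleneck link does, illustrating why a single number for "throughput" hides several interacting limits.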

why do we consider transmission delay

We usually don't. We used to, since getting a 1.5 KB packet onto the wire took appreciable time at 9600 bps. But at the high data rates of modern networks the transmission delay is so small it can safely be ignored outside of unusual situations.
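The arithmetic behind this is simply transmission delay = packet size ÷ link rate, which shows why the term mattered on a modem but vanishes on a gigabit link:

```python
# Transmission delay: the time to clock a packet's bits onto the wire.
def transmission_delay(packet_bytes, rate_bps):
    return packet_bytes * 8 / rate_bps

# A 1500-byte packet on an old 9600 bps modem vs. a modern 1 Gbps link.
print(transmission_delay(1500, 9600))  # 1.25 (seconds)
print(transmission_delay(1500, 1e9))   # 1.2e-05 (12 microseconds)
```

Five orders of magnitude separate the two results, which is why the term dropped out of everyday throughput calculations.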

Shouldn't we also consider the propagation delay?

Also called "latency". Latency remains important because it is the one performance factor in a global network which isn't improving rapidly. As everything else improves, avoiding latency becomes increasingly important to improving performance. This has effects ranging from avoiding round-trip packet exchanges in application and protocol designs, to fielding new protocols which move data closer to their endpoints, to architectural responses such as content-distribution networks.
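Unlike transmission delay, propagation delay is set by physics: distance divided by the signal speed in the medium (roughly two-thirds of the speed of light in optical fibre). The transatlantic distance below is an illustrative round figure.

```python
# Propagation delay = distance / signal speed; no faster link rate changes it.
SPEED_IN_FIBRE_M_S = 2e8   # roughly 2/3 of c in optical fibre

def propagation_delay(distance_m):
    return distance_m / SPEED_IN_FIBRE_M_S

# Illustrative transatlantic path of ~6000 km.
one_way = propagation_delay(6_000_000)
print(f"one-way: {one_way * 1000:.0f} ms, "
      f"round trip: {2 * one_way * 1000:.0f} ms")  # 30 ms / 60 ms
```

That 60 ms round trip is irreducible for that path length, which is why round-trip avoidance and moving content closer to users (as CDNs do) are the main remedies.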