Bandwidth vs Throughput vs Data Rate – Key Differences


I understand the difference between bandwidth and throughput. While bandwidth is the maximum amount of data that can be transmitted from a sender to a receiver, throughput is the actual amount of data that gets transmitted, since factors such as latency can reduce it.

Bit rate is the amount of data (number of bits) that can be transmitted per second, which sounds the same as throughput to me. So what is the key difference?

Best Answer

Some of these terms are used differently by different people, but below is what is generally accepted.

Bandwidth is the number of bits per second that a link can send or receive, including all flows. For example, the bandwidth of a 100 Mbps connection is 100 Mbps. That doesn't mean it is always sending or receiving 100 Mbps; it is simply the maximum possible on that link. Contrary to what many people mean by bandwidth, it does not mean data usage. I see people say that they have a bandwidth limitation (every link does) and that they have used all their bandwidth for the month. This is an incorrect use of the term. What they mean is that they have a data usage limit, and they have used it up for the month.
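To see why the two are different kinds of quantities, here is a minimal sketch (plain Python, with arbitrary illustrative numbers) contrasting a rate limit with a volume limit: a 100 Mbps link could in theory move tens of terabytes in a month, and a data cap is a cap on that volume, not on the rate.

```python
# Illustrative only: bandwidth is a rate (bits per second),
# while a data cap is a volume (total bytes per billing period).
bandwidth_bps = 100 * 10**6          # 100 Mbps link speed (rate)
seconds_per_month = 30 * 24 * 3600   # ~30-day billing period

# Maximum volume that rate could move if the link ran flat out all month
max_bytes = bandwidth_bps / 8 * seconds_per_month
print(f"Theoretical monthly maximum: {max_bytes / 1e12:.1f} TB")   # ~32.4 TB

# A hypothetical 1 TB cap limits volume, not the link's speed.
data_cap_bytes = 1 * 10**12
print(f"Cap as a fraction of the maximum: {data_cap_bytes / max_bytes:.1%}")
```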

Throughput is the amount of data per unit time that a flow (process to process) can actually send or receive. It accounts for all the host overhead and for contention on the link (multiple flows on a link each use some percentage of the bandwidth, reducing the throughput of each).
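To make that concrete, here is a rough sketch (Python) of how contention and overhead pull per-flow throughput below the raw bandwidth. The equal split among flows and the fixed 5% overhead fraction are assumptions for illustration, not measured values.

```python
def estimated_throughput_mbps(bandwidth_mbps: float,
                              num_flows: int,
                              overhead_fraction: float = 0.05) -> float:
    """Rough per-flow throughput estimate.

    Assumes the flows share the link equally and that a fixed fraction
    of the bits carried are protocol/host overhead rather than payload.
    Both assumptions are simplifications for illustration.
    """
    share = bandwidth_mbps / num_flows          # contention: equal split
    return share * (1.0 - overhead_fraction)    # subtract overhead

# Example: a 100 Mbps link carrying 4 concurrent flows
print(estimated_throughput_mbps(100, 4))        # ~23.75 Mbps per flow
```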

Bit rate is closer to bandwidth, but it is often measured per host, or from a source device to a destination device. You may have a bit rate of 100 Mbps from a host to a switch, but the bit rate from host to host may be lower. This usually includes multiple flows.
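One way to picture the host-to-host case is that the end-to-end bit rate cannot exceed the slowest link along the path. The sketch below uses hypothetical link speeds to show that bound.

```python
# Hypothetical path: host -> access switch -> uplink -> remote switch -> host.
# The end-to-end bit rate is bounded by the slowest link on the path.
link_rates_mbps = [100, 1000, 50, 1000, 100]
print(f"End-to-end bit rate is at most {min(link_rates_mbps)} Mbps")  # 50 Mbps
```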
