How can the Nyquist theorem for the maximum bit rate of a noiseless channel be derived?

bit-rate, communication, data, formula-derivation, noise

I've been given the formula:
$$ I = 2 H \log_2(L) $$
where:

\$ I \$ = Maximum data rate in bits per second for a noiseless channel

\$H\$ = Bandwidth that the channel will carry (that is, the range of frequencies, not the bit rate)

\$L\$ = Number of discrete levels in the signal

Could you explain to me where this formula comes from? How does it model the noiseless channel, and how is the formula derived?

I understand that it takes \$\log_2(x)\$ bits to distinguish between \$x\$ things.

(Edited to correct the formula)

Best Answer

Note that the correct formula has a 2 in front of the \$H\$. A heuristic argument for the formula is that the \$2H\$ term is the sampling rate needed to recover all of the information in a channel of bandwidth \$H\$ (the Nyquist sampling rate). The other term, \$\log_2(L)\$, is the information carried by each sample, since each sample can take one of \$L\$ discrete levels. The product is therefore the total information rate of the channel.

In this formula there is no limit on the number of levels: because the channel is assumed noiseless, a level of any size, no matter how small, can be distinguished at the receiver, so an arbitrarily high data rate is allowed. If noise is present, this is no longer the case, and the Shannon formula, which includes the channel's signal-to-noise ratio, must be used instead. That formula does not allow an infinite data rate unless the signal-to-noise ratio is also infinite, i.e. the channel is noiseless.
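To make the contrast concrete, here is a small sketch that evaluates both formulas numerically. The bandwidth of 3 kHz and the 30 dB signal-to-noise ratio are just illustrative numbers I picked (roughly a voice-grade line), not values from the question:

```python
import math

def nyquist_rate(bandwidth_hz: float, levels: int) -> float:
    """Maximum data rate (bit/s) of a noiseless channel: I = 2 H log2(L)."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """Shannon capacity (bit/s) of a noisy channel: C = H log2(1 + S/N).
    snr is the linear power ratio S/N, not decibels."""
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative channel: H = 3 kHz bandwidth (assumed, not from the question).
H = 3000.0

# Noiseless case: the Nyquist rate grows without bound as L increases.
for L in (2, 4, 16, 256):
    print(f"L={L:3d}: {nyquist_rate(H, L):.0f} bit/s")
# L=2 gives 6000 bit/s, L=256 gives 48000 bit/s, and so on indefinitely.

# Noisy case: a 30 dB SNR (S/N = 1000) caps the rate no matter how many
# levels the transmitter tries to use.
print(f"Shannon limit: {shannon_capacity(H, 1000.0):.0f} bit/s")
```

Doubling the number of levels only adds one bit per sample (\$\log_2\$ grows slowly), which is why \$L\$ must grow exponentially to keep raising the noiseless rate, while the Shannon limit stays fixed for a given bandwidth and SNR.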