Electronic – Why is the Nyquist data rate lower than the Shannon data rate

samplingtheory

In the book Computer Networks, the author discusses the maximum data rate of a channel. He presents the Nyquist formula:

C = 2H log₂ V (bits/sec)

And gives an example for a telephone line :

a noiseless 3-kHz channel cannot transmit binary (i.e., two-level)
signals at a rate exceeding 6000 bps.
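The Nyquist figure can be reproduced with a one-line calculation (a minimal sketch; the function name is just illustrative):

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """Nyquist's maximum data rate for a noiseless channel: C = 2H * log2(V)."""
    return 2 * bandwidth_hz * math.log2(levels)

# 3-kHz channel, binary (two-level) signalling
print(nyquist_capacity(3000, 2))  # 6000.0 bps
```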

He then explains the Shannon equation:

C = H log₂ (1 + S/N) (bits/sec)

And gives (again) an example for a telephone line :

a channel of 3000-Hz bandwidth with a signal to thermal noise ratio of 30 dB
(typical parameters of the analog part of the telephone system) can
never transmit much more than 30,000 bps
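The Shannon number follows the same way, keeping in mind that 30 dB must first be converted from decibels to a linear power ratio of 1000 (again a sketch, with illustrative names):

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon capacity: C = H * log2(1 + S/N), with S/N given in dB."""
    snr_linear = 10 ** (snr_db / 10)  # 30 dB -> S/N = 1000
    return bandwidth_hz * math.log2(1 + snr_linear)

# 3000-Hz channel with 30 dB signal-to-noise ratio
print(round(shannon_capacity(3000, 30)))  # about 29,902 bps
```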

I don't understand why the Nyquist rate is so much lower than the Shannon rate, since the Shannon rate is the one that takes noise into account. I'm guessing they don't represent the same data rate, but the book doesn't explain the difference.

Best Answer

To understand this, you first have to realize that transmitted symbols don't have to be purely binary, as they are in the example for the Nyquist capacity. Let's say you have a signal that ranges between 0 V and 1 V. You could map 0 V to [00], 0.33 V to [01], 0.66 V to [10], and 1 V to [11]. To account for this in Nyquist's formula, you would change V from 2 discrete levels to 4, raising the capacity from 6000 bps to 12,000 bps. The same can be done for any number of discrete levels.
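The pattern above is easy to see numerically: each doubling of the number of levels adds one bit per symbol, so the Nyquist capacity grows by a fixed 6000 bps per doubling on this channel (a sketch using the example's 3-kHz bandwidth):

```python
import math

bandwidth_hz = 3000  # the telephone-line bandwidth from the example

# Each doubling of the level count adds log2(2) = 1 bit per symbol.
for levels in (2, 4, 8, 16):
    capacity = 2 * bandwidth_hz * math.log2(levels)
    print(f"{levels:2d} levels -> {capacity:6.0f} bps")
```

The output runs 6000, 12000, 18000, 24000 bps: capacity grows only logarithmically in the number of levels.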

There is a problem with Nyquist's formula, though: since it doesn't account for noise, there is no way to know how many discrete levels are actually distinguishable. So Shannon came along with a method that essentially places a theoretical maximum on the number of discrete levels you can read error-free.

So in their example of getting roughly 30,000 bps, you would need 32 discrete levels that the receiver can reliably read as different symbols, since 2 × 3000 × log₂ 32 = 30,000 bps.
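That 32-level figure comes from inverting Nyquist's formula, V = 2^(C / 2H), with C set to the Shannon limit (a sketch under the example's parameters):

```python
import math

bandwidth_hz = 3000
shannon_bps = bandwidth_hz * math.log2(1 + 1000)  # ~29,902 bps at 30 dB SNR

# Invert Nyquist's C = 2H * log2(V) to find the level count needed:
levels_needed = 2 ** (shannon_bps / (2 * bandwidth_hz))
print(math.ceil(levels_needed))  # 32 distinguishable levels
```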