Electronics – Does the fundamental frequency affect the bit rate on a wire?

bandwidth, bit rate, frequency, signal

This is from the book Computer Networks by Tanenbaum:

The bandwidth is still the width of the band of frequencies that are
passed, and the information that can be carried depends only on this
width and not on the starting and ending frequencies.

So what I understand from this is: if I have a bandwidth of, say, 500 kHz, then whether my base frequency is 1 MHz or 1 GHz, the bit rate on the same wire will be the same.

However, Tanenbaum then gives an example in which he calculates the bit rate he can get from an ordinary phone line. First he states:

An ordinary telephone line, often called a voice-grade line, has an
artificially introduced cutoff frequency just above 3000 Hz.

And then he gives this table:

Header definitions:

  • Bps = given bit rate [bits per second]
  • T (msec) = time needed to send 8 bits
  • First harmonic (Hz) = the lowest-frequency signal that could be produced; it corresponds to the byte 11110000, for example
  • # Harmonics sent = the highest multiple of the first harmonic frequency that is lower than 3,000 Hz

[Table from the book: bit rate (bps), T (msec), first harmonic (Hz), and number of harmonics sent]
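For reference, here is a minimal sketch of the arithmetic behind that table, assuming the usual bit rates from the book's version (300 to 38,400 bps); the exact rows in the image may differ:

```python
# Sketch of the arithmetic behind the table (assumed bit rates; the
# exact rows in the book's image may differ).
CUTOFF_HZ = 3000  # artificial cutoff of a voice-grade line

for bps in (300, 600, 1200, 2400, 4800, 9600, 19200, 38400):
    t_msec = 8 / bps * 1000           # time to send one 8-bit byte
    first_harmonic = bps / 8          # fundamental of the 8-bit pattern, e.g. 11110000
    harmonics_sent = int(CUTOFF_HZ // first_harmonic)  # harmonics that fit below 3 kHz
    print(f"{bps:6d} bps  T = {t_msec:7.3f} ms  "
          f"f1 = {first_harmonic:7.1f} Hz  harmonics passed = {harmonics_sent}")
```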

Well, it is clear that there is a relationship between the data rate and the harmonics. From the table it seems that if you want a higher bit rate, you need a higher first harmonic (the fundamental frequency), and on top of that you can build the second, third, … harmonics (i.e. use the frequency range, the bandwidth)…

So what am I missing here?

The table shows that with a higher fundamental frequency I can send more data in less time, yet Tanenbaum also states that the information that can be carried depends only on the bandwidth?

Best Answer

From a theoretical point of view, the maximum capacity of a channel affected by AWGN (Additive White Gaussian Noise) is given by the Shannon–Hartley theorem:

$$ C \leq B \log_2\!\left(1 + \frac{S}{N_0 B}\right) $$

This means you can't put more information than that through a channel of bandwidth \$B = f_{MAX} - f_{MIN}\$ without making the communication unreliable.
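As a rough numerical sketch, assuming a 3000 Hz voice-grade band and a typical 30 dB SNR (illustrative numbers, not taken from the question):

```python
import math

# Shannon-Hartley capacity for an assumed voice-grade line:
# B = 3000 Hz, SNR = 30 dB (typical textbook numbers).
B = 3000.0                        # bandwidth in Hz
snr_db = 30.0                     # signal-to-noise ratio in dB
snr_linear = 10 ** (snr_db / 10)  # = S / (N0 * B) = 1000

C = B * math.log2(1 + snr_linear)
print(f"Capacity ≈ {C / 1000:.1f} kbit/s")   # ≈ 29.9 kbit/s
```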

Then we come to modulations: every modulation has a particular spectral efficiency and a bit error probability. The more levels you use (QPSK vs. 16-QAM, for example), the more bits you carry per symbol (= more efficiency), but also the more symbol errors you get (closely tied to the bit error rate when a Gray code is used).
The spectrum is directly related to the pulse shaping used by the modulation. A very common choice is the raised-cosine pulse (because it introduces no inter-symbol interference), which reduces the efficiency by a factor of \$(1+\alpha)\$.
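A rough sketch of how bandwidth, modulation order and roll-off combine into a bit rate (the roll-off value and the modulation orders below are just assumptions for illustration):

```python
import math

# Rough achievable bit rate for an assumed M-ary modulation with
# raised-cosine pulse shaping of roll-off alpha (illustrative numbers only).
def bit_rate(bandwidth_hz, levels_m, alpha):
    symbol_rate = bandwidth_hz / (1 + alpha)   # raised cosine occupies Rs * (1 + alpha)
    return symbol_rate * math.log2(levels_m)   # log2(M) bits per symbol

B = 3000.0      # Hz, voice-grade band
alpha = 0.35    # assumed roll-off factor
for m, name in ((4, "QPSK"), (16, "16-QAM"), (64, "64-QAM")):
    print(f"{name:7s}: ≈ {bit_rate(B, m, alpha) / 1000:.1f} kbit/s")
```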

Then there are channel codes, which can give a huge coding gain, especially concatenated codes such as Reed-Solomon + Viterbi-decoded convolutional codes, turbo codes, or LDPC codes.
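The price of that gain is redundancy: the net information rate is the raw bit rate multiplied by the code rates. A small sketch, assuming the classic RS(255,223) + rate-1/2 convolutional concatenation and an example raw rate:

```python
# Net information rate after forward error correction, assuming a classic
# concatenation: RS(255,223) outer code + rate-1/2 convolutional inner code.
# The raw channel rate below is just an example value.
raw_bit_rate = 9600.0     # bit/s on the channel
rs_rate = 223 / 255       # Reed-Solomon outer code rate
conv_rate = 1 / 2         # convolutional inner code rate

net_rate = raw_bit_rate * rs_rate * conv_rate
print(f"Net information rate ≈ {net_rate:.0f} bit/s")  # coding trades rate for reliability
```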

Every effort is made to approach the Shannon capacity limit.