Signal rate, data rate and bandwidth in digital signals


I'm trying to understand the concept of signal rate and the relation between signal (baud) rate and bandwidth of digital signals from a book about data communication.

First, the book distinguishes between data element and signal element: a data element is the smallest entity that can represent a piece of information (the bit), and a signal element is the shortest unit (timewise) of a digital signal.

It then says that the relationship between data rate and signal rate depends on the number of data elements carried by each signal element and on the data pattern: with a data pattern of all 1s or all 0s, the signal rate may differ from that of a pattern of alternating 1s and 0s. The relationship between data rate and signal rate is formulated as:

$$ S = \frac{cN}{r} \text{ baud} $$

where S is the signal rate (the number of signal elements sent per second, in baud); N is the data rate (bps); c is the case factor, which varies with the data pattern; and r is the number of data elements carried by each signal element.
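As a minimal sketch of how this formula is applied (the numbers below are illustrative, not from the book, and the function name is mine):

```python
def signal_rate(N, c, r):
    """Signal rate S in baud, from S = c*N/r.

    N = data rate (bps), c = case factor,
    r = data elements per signal element.
    """
    return c * N / r

# Example: 100 kbps of data, average case (c = 1/2), one data element
# per signal element (r = 1) -> 50 kbaud.
print(signal_rate(100_000, 0.5, 1))  # 50000.0
```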

Then it states that the minimum bandwidth (range of frequencies) required for a digital signal can be given by:

$$ B_{min} = \frac{cN}{r} $$

My questions are:

1) Could anyone explain in more detail the meaning of the case factor c? In practice, what does it mean to say that the signal rate depends on the data pattern?

2) Why does the minimum bandwidth for a digital signal equal the signal rate?

3) If we set c to 1/2 in the formula for the minimum bandwidth to find Nmax (the maximum data rate for a channel with bandwidth B), and consider r to be log2(L) (where L is the number of signal levels), we get the Nyquist formula. Why? What is the meaning of setting c to 1/2?

Here is a link to the portion of the book where the term c is defined.

Best Answer

Could anyone explain in more detail the meaning of the case factor c? In practice, what does it mean to say that the signal rate depends on the data pattern?

The explanation in the text isn't very clear, and this term is not used in other texts I know of. I think what it's saying is that different messages might produce different signal spectra. For example, in a 2-level FSK system, a message composed of all 1's or all 0's would just be a single tone and have a very narrow bandwidth, while a message composed of alternating 1's and 0's would contain both the one-level tone and the zero-level tone (as well as a spread of frequency content related to switching between them) and would produce a broader spectrum if measured on a spectrum analyzer.
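Here is a rough numerical sketch of that effect (my own illustration, with arbitrary tone frequencies and rates, assuming continuous-phase 2-level FSK):

```python
import numpy as np

fs = 100_000            # sample rate, Hz (arbitrary)
baud = 1_000            # symbol rate
f0, f1 = 5_000, 10_000  # tones for bit 0 and bit 1 (illustrative)
sps = fs // baud        # samples per symbol

def fsk(bits):
    """Continuous-phase 2-level FSK waveform for a bit sequence."""
    freqs = np.repeat([f1 if b else f0 for b in bits], sps)
    phase = 2 * np.pi * np.cumsum(freqs) / fs  # integrate frequency
    return np.sin(phase)

for name, bits in [("all ones", [1] * 64), ("alternating", [1, 0] * 32)]:
    sig = fsk(bits)
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    # crude "occupied bandwidth": span of bins holding significant power
    strong = freqs[spectrum > 0.01 * spectrum.max()]
    print(f"{name}: power spans {strong.min():.0f} to {strong.max():.0f} Hz")
```

The all-ones message concentrates essentially all its power at the single 10 kHz tone, while the alternating message spreads power across both tones and the switching products between them.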

Why does the minimum bandwidth for a digital signal equal the signal rate?

This is not correct. The minimum bandwidth for a digital signal is given by the Shannon-Hartley theorem,

\$ C = B\log_2\left(1+\frac{S}{N}\right)\$

(here \$S\$ and \$N\$ are the signal and noise powers, not the signal rate and data rate defined above).

Turned around,

\$B = \frac{C}{\log_2\left(1+{S}/{N}\right)}\$.
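As a quick numerical check (the values here are my own, not from the answer):

```python
import math

def min_bandwidth(C, snr_db):
    """Bandwidth (Hz) needed to carry capacity C (bps) at the given SNR."""
    snr = 10 ** (snr_db / 10)   # convert dB to a power ratio
    return C / math.log2(1 + snr)

# e.g. 10 Mb/s over a channel with 30 dB SNR needs about 1.0 MHz.
print(f"{min_bandwidth(10e6, 30):.3e} Hz")  # ~1.003e+06
```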

Approaching this bandwidth minimum depends on making engineering trade-offs between the encoding scheme (which relates to the number of bits per symbol), equalization, and error-correcting codes (sending extra symbols with redundant information that allow the data to be recovered even if a transmission error occurs).

A typical rule of thumb used for on-off coding in my industry (fiber optics) is that the channel bandwidth in Hz should be at least 1/2 of the baud rate. For example, a 10 Gb/s on-off-keyed transmission requires at least 5 GHz of channel bandwidth. But that is specific to the very simple coding and equalization methods used in fiber optics.
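Expressed as a tiny helper (the rule itself is the one quoted above; extending it to more than one bit per symbol is my own extrapolation):

```python
def min_channel_bw(bit_rate, bits_per_symbol=1):
    """Rule-of-thumb minimum channel bandwidth: half the baud rate."""
    baud = bit_rate / bits_per_symbol  # symbol (baud) rate
    return baud / 2

print(min_channel_bw(10e9))     # 10 Gb/s on-off keying -> 5.0e9 Hz
print(min_channel_bw(10e9, 2))  # 10 Gb/s at 2 bits/symbol -> 2.5e9 Hz
```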

If we set c to 1/2 in the formula for the minimum bandwidth to find Nmax (the maximum data rate for a channel with bandwidth B), and consider r to be log2(L) (where L is the number of signal levels), we get Nyquist formula. Why? What is the meaning of setting c to 1/2?

Choosing between L signal levels is equivalent to a \$\log_2(L)\$-bit digital-to-analog conversion. So it's not surprising Nyquist's formula is lurking in the shadows somewhere.
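To make the connection explicit (the algebra is mine, but it follows directly from the formulas quoted above): substituting \$c = 1/2\$ and \$r = \log_2(L)\$ into the bandwidth formula and solving for \$N\$ gives

$$ B_{min} = \frac{(1/2)\,N}{\log_2(L)} \quad\Longrightarrow\quad N_{max} = 2B\log_2(L), $$

which is exactly Nyquist's bit-rate formula for a noiseless channel. As for the meaning of \$c = 1/2\$: the fastest data pattern alternates on every signal element, and one full cycle of that alternation spans two signal elements, so the required bandwidth is half the number of signal elements per second. Equivalently, a channel of bandwidth \$B\$ can carry at most \$2B\$ signal elements per second, which is Nyquist's signaling limit.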