Could anyone explain in more detail the meaning of the case factor c? In practice, what does it mean to say that the signal rate depends on the data pattern?
The explanation in the text isn't very clear, and this term isn't used in other texts I know of. I think what it's saying is that different messages can produce different signal spectra. For example, in a 2-level FSK system, a message composed of all 1's or all 0's would just be a single tone and have a very narrow bandwidth, while a message composed of alternating 1's and 0's would contain both the one-level tone and the zero-level tone (as well as a spread of frequency content related to switching between them) and produce a broader spectrum if measured on a spectrum analyzer.
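As a rough illustration, here is a minimal Python sketch of that effect; the sample rate, baud rate, and tone frequencies are all illustrative assumptions, not values from any particular system:

```python
# Minimal sketch: how an FSK signal's occupied spectrum depends on the data
# pattern. All rates and frequencies below are illustrative assumptions.
import numpy as np

fs = 100_000            # sample rate, Hz (assumed)
baud = 1_000            # symbol rate, symbols/s (assumed)
f0, f1 = 5_000, 10_000  # zero-level and one-level tone frequencies, Hz (assumed)

def fsk_waveform(bits):
    """One tone burst per bit (simple 2-level FSK)."""
    t = np.arange(fs // baud) / fs
    return np.concatenate([np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

for name, bits in [("all ones", [1] * 64), ("alternating", [0, 1] * 32)]:
    x = fsk_waveform(bits)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    # Crude occupied-bandwidth estimate: span of bins above 1% of the peak.
    occupied = freqs[spectrum > 0.01 * spectrum.max()]
    print(f"{name:11s}: energy spans ~{occupied.min():.0f} Hz to {occupied.max():.0f} Hz")
```

The all-ones message collapses to (nearly) a single spectral line at the one-level tone, while the alternating pattern spreads energy around both tones - which is the pattern dependence the case factor is trying to capture.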
Why does the minimum bandwidth for a digital signal equal the signal rate?
This is not correct. The minimum bandwidth for a digital signal is given by the Shannon-Hartley theorem,
\$ C = B\log_2\left(1+\frac{S}{N}\right)\$
Turned around,
\$B = \frac{C}{\log_2\left(1+{S}/{N}\right)}\$.
Approaching this bandwidth minimum depends on making engineering trade-offs between the encoding scheme (which relates to the number of bits per symbol), equalization, and error-correcting codes (actually sending extra symbols of redundant information that allow the signal to be recovered even if a transmission error occurs).
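As a quick numeric illustration, here is a minimal Python sketch of the rearranged formula; the SNR values are illustrative assumptions, and the 10 Gb/s target is borrowed from the fiber example below:

```python
# Minimal sketch: minimum channel bandwidth from the rearranged
# Shannon-Hartley formula, B = C / log2(1 + S/N).
import math

def min_bandwidth_hz(capacity_bps, snr_linear):
    """Smallest bandwidth that can, in principle, carry capacity_bps."""
    return capacity_bps / math.log2(1 + snr_linear)

C = 10e9                       # target capacity: 10 Gb/s
for snr_db in (10, 20, 30):    # assumed channel SNRs
    snr = 10 ** (snr_db / 10)  # convert dB to a linear power ratio
    print(f"SNR {snr_db} dB -> B >= {min_bandwidth_hz(C, snr) / 1e9:.2f} GHz")
```

Note how strongly the answer depends on SNR, which is why any single "minimum bandwidth" number only makes sense once the channel conditions are pinned down.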
A typical rule of thumb used for on-off coding in my industry (fiber optics) is that the channel bandwidth in Hz should be at least 1/2 of the baud rate. For example, a 10 Gb/s on-off-keyed transmission requires at least 5 GHz of channel bandwidth. But that is specific to the very simple coding and equalization methods used in fiber optics.
If we set c to 1/2 in the formula for the minimum bandwidth to find Nmax (the maximum data rate for a channel of bandwidth B), and take r to be log2(L) (where L is the number of signal levels), we get the Nyquist formula. Why? What is the meaning of setting c to 1/2?
Choosing between L signal levels is equivalent to a \$\log_2(L)\$-bit digital-to-analog conversion. So it's not surprising Nyquist's formula is lurking in the shadows somewhere.
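To make that connection concrete, here is a minimal sketch of the algebra: starting from the text's B = c × N × (1/r), setting c = 1/2 and r = log2(L) and solving for N gives Nmax = 2 × B × log2(L), the Nyquist rate. The 3 kHz bandwidth below is an illustrative value (roughly a voice-grade line):

```python
# Minimal sketch: with c = 1/2 and r = log2(L), B = c * N * (1/r) rearranges
# to N_max = 2 * B * log2(L), the Nyquist maximum data rate for a noiseless
# channel.
import math

def nyquist_max_rate_bps(bandwidth_hz, levels):
    """Maximum noiseless data rate N_max = 2 * B * log2(L)."""
    return 2 * bandwidth_hz * math.log2(levels)

B = 3_000  # channel bandwidth, Hz (illustrative voice-grade line)
for L in (2, 4, 16):
    print(f"L = {L:2d} levels -> N_max = {nyquist_max_rate_bps(B, L):6,.0f} b/s")
```

Setting c to 1/2 amounts to assuming the best case: one symbol per half-cycle of the channel's highest frequency, i.e. two symbols per hertz of bandwidth, which is exactly Nyquist's limit.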
Data rate is the speed at which bytes (or chunks of data) are sent down a channel.
The bandwidth, in the loose sense the term is used for serial links, is how fast the raw bits that make up that data (framing overhead included) are transmitted.
Sampling rate is the frequency at which an incoming signal is read to measure its shape.
Take for example a typical 9600 baud serial connection.
The bandwidth is 9600 bits per second. Each byte, though, has extra bits with it (start, stop, parity, etc.). So for a typical 8N1 format there are 10 bits on the wire for every 8 data bits sent.
So the data rate for 9600 baud would be 960 bytes per second.
The sampling rate would be the rate at which the receiver looks at the signal to see if it's a 1 or a 0 - typically at least 2x the bandwidth (see the Nyquist-Shannon sampling theorem), so 19.2 kHz.
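For completeness, here is the arithmetic above as a minimal Python sketch (the 2x oversampling factor is the same rule-of-thumb minimum already mentioned; real UARTs often sample at 16x):

```python
# Minimal sketch of the 9600 baud 8N1 arithmetic: 1 start + 8 data + 1 stop
# bit means 10 bits on the wire per payload byte.
baud = 9600             # line rate, bits/s (1 bit per symbol on this kind of link)
frame_bits = 1 + 8 + 1  # start + data + stop (8N1, no parity)

bytes_per_second = baud / frame_bits  # payload data rate
sampling_rate_hz = 2 * baud           # minimum 2x oversampling at the receiver

print(f"data rate:     {bytes_per_second:.0f} bytes/s")   # 960 bytes/s
print(f"sampling rate: {sampling_rate_hz / 1e3:.1f} kHz") # 19.2 kHz
```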
The detector may take some time to reach 0.11 volts from 0.10 volts - this may be nanoseconds, tens of nanoseconds, or hundreds of nanoseconds. Given that you have proposed no detailed design, you have to decide what that factor is and, if you change the data too fast, you may have a scenario where the DC output is slewing (say) between 0.104 volts and 0.106 volts for a continuous stream of ones and zeros.
At this point you ask yourself if any occurrence of noise could force 0.104 volts to be closer to 0.106 volts and produce an error in the received bit-stream. Then you ask yourself what bit error rate you can tolerate.
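To put numbers on that question, here is a minimal sketch that assumes (my assumption, not part of the answer) additive Gaussian noise on the slewed 0.104 V / 0.106 V levels, with the decision threshold midway between them; under that model the bit error rate is Q(d / 2σ), where d is the level separation and σ the RMS noise:

```python
# Minimal sketch: BER for a binary decision between two levels corrupted by
# Gaussian noise, with the threshold midway between them (assumed model).
import math

def ber_two_level(v0, v1, noise_sigma):
    """BER = Q(d / (2*sigma)) = 0.5 * erfc(d / (2 * sigma * sqrt(2)))."""
    q_arg = abs(v1 - v0) / (2 * noise_sigma)
    return 0.5 * math.erfc(q_arg / math.sqrt(2))

v0, v1 = 0.104, 0.106                   # worst-case slewed levels, volts
for sigma in (100e-6, 200e-6, 500e-6):  # assumed RMS noise values, volts
    print(f"sigma = {sigma * 1e6:3.0f} uV -> BER ~ {ber_two_level(v0, v1, sigma):.1e}")
```

Even a modest change in the noise, or in how far the detector has slewed, moves the error rate by orders of magnitude - which is why the tolerable bit error rate has to be decided up front.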