I am trying to work out what happens when you downconvert an RF signal, as an SDR device does when tuning to a given frequency. For example, if a zero-IF device is tuned to 399MHz, then whatever signal you see at 400MHz will appear at 1MHz, where it is then digitised.
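To make the setup concrete, here is a rough numerical sketch of that frequency translation (the sample rate and signal length are arbitrary choices, and a real zero-IF receiver mixes with a complex I/Q local oscillator; this real-valued sketch only shows the frequency shift itself):

```python
import numpy as np

fs = 2_000_000_000                  # 2 GS/s simulation rate (arbitrary)
n = 200_000                         # 100 microseconds of signal
t = np.arange(n) / fs

rf = np.sin(2 * np.pi * 400e6 * t)  # incoming signal at 400 MHz
lo = np.sin(2 * np.pi * 399e6 * t)  # local oscillator tuned to 399 MHz

mixed = rf * lo                     # ideal mixer: multiply the two

spec = np.abs(np.fft.rfft(mixed)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)  # bin spacing is fs / n = 10 kHz

# The product contains only the difference (1 MHz) and sum (799 MHz)
# frequencies; a low-pass filter after the mixer keeps the 1 MHz copy.
print(list(freqs[spec > 0.1]))
```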

Now imagine at 400MHz you see a signal consisting of just a carrier, which switches on and off very rapidly. On for one cycle, off for one cycle, on for another cycle, off again. If you assign a binary 1 to the 'on' cycle and a binary 0 to the 'off' cycle, I believe this would allow you to transmit 400,000,000 bits per second.

Now what happens if this signal is downconverted to 1MHz ready for the SDR to digitise? If the 1MHz carrier switches on and off at a rate of one cycle at a time, there will only be 1,000,000 transitions, although the original signal had 400,000,000 transitions in the same time period.

So what happens in this case? Does the 1MHz carrier cycle on and off at the original 400MHz frequency? Does that allow you to transmit your original 400,000,000 bits per second on a 1MHz carrier frequency? Or are the extra cycles lost somehow? What would the resulting signal at 1MHz look like?

## Best Answer

The thing to realize here is that if you take a sinusoidal carrier and switch it on and off, change its amplitude or frequency, or modulate it in any other way, then it can be shown mathematically, if somewhat counter-intuitively, that what you are doing is introducing sinusoidal components at other frequencies. In fact, *any* periodic waveform can be represented as a sum of sine waves. The mathematical tool that allows this transformation is the Fourier transform. Take, for example, a square wave: its Fourier transform shows it is made of the fundamental frequency plus all of its odd harmonics. Even if the signal we care about isn't strictly periodic (they usually aren't), we can pick some segment of the signal that *is* periodic, or mostly so, and analyze that.

Similarly, your example of switching a carrier on and off also introduces frequency components other than your carrier. In fact, any rapid departure from a perfect sine wave creates high-frequency components. This explains why the information is not lost: those high-frequency components are also down-converted, and are detectable by your SDR provided it has sufficient bandwidth to see them all.
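As a quick illustration of the square-wave decomposition (a sketch in Python; the 1 Hz fundamental and the 50-term cutoff are arbitrary choices), summing the fundamental and its odd harmonics reproduces the square wave:

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)  # one period of a 1 Hz square wave

def square_from_harmonics(t, n_terms):
    """Partial Fourier series of a square wave: the fundamental plus
    odd harmonics, each scaled by 1/n; even harmonics are absent."""
    s = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1                        # 1, 3, 5, ...
        s += (4 / np.pi) * np.sin(2 * np.pi * n * t) / n
    return s

approx = square_from_harmonics(t, 50)        # first 50 odd harmonics
ideal = np.sign(np.sin(2 * np.pi * t))

# Away from the edges the partial sum is already very close to +/-1;
# adding more harmonics sharpens the edges further.
print(approx[250], approx[750])              # mid "on" and mid "off" samples
```

The small overshoot that remains near each edge is the Gibbs phenomenon, and it never exceeds roughly 9% no matter how many harmonics are summed.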

It also explains why this exact modulation scheme is not used in practice: each hard switch on and off would create a lot of noise far away from the carrier. In fact, this might be one of the oldest modulation problems in radio: CW (the usual way to transmit Morse code, by simply switching a carrier on and off) is exactly what you describe, albeit at a much slower rate. While it would be conceptually simplest to switch the carrier hard on and hard off, doing so creates what are called "key clicks": undesirable interference on nearby frequencies, heard as audible "clicks" when those high-frequency components are converted down to audio frequencies. Consequently, the carrier is actually tapered on and off gradually to reduce the bandwidth occupied by the signal. The tapering is fast enough that the listener doesn't perceive it as a taper, but slow enough that the high-frequency components are negligible compared to the carrier.
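A small sketch of this effect (Python; the sample rate, carrier frequency, keying times, and 5 ms ramp are all made-up numbers) compares the out-of-band power of hard keying against raised-cosine tapered keying:

```python
import numpy as np

fs = 50_000                        # sample rate, Hz (arbitrary)
fc = 5_000                         # carrier frequency, Hz
t = np.arange(fs) / fs             # one second of signal
carrier = np.cos(2 * np.pi * fc * t)

# Hard keying: the carrier snaps on at 0.25 s and off at 0.75 s.
hard = ((t >= 0.25) & (t < 0.75)).astype(float)

# Tapered keying: same on/off times, but with 5 ms raised-cosine edges.
def edge(x):                       # smooth 0 -> 1 transition for x in [0, 1]
    return 0.5 * (1 - np.cos(np.pi * np.clip(x, 0, 1)))

ramp = 0.005
soft = edge((t - 0.25) / ramp) * edge((0.75 - t) / ramp)

def out_of_band_fraction(env):
    """Fraction of the keyed signal's power more than 1 kHz from the carrier."""
    spec = np.abs(np.fft.rfft(carrier * env)) ** 2
    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    return spec[np.abs(freqs - fc) > 1000].sum() / spec.sum()

print(out_of_band_fraction(hard), out_of_band_fraction(soft))
```

The tapered envelope concentrates essentially all of the power near the carrier, which is exactly why CW transmitters shape the keying waveform instead of switching hard.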