Why does radio reception consume so much energy?

On low-power radios, the current consumption for reception is similar to the consumption for transmission. For example, the Texas Instruments CC2652 System-on-Chip datasheet claims these values:

  • Active-Mode RX: 6.9 mA
  • Active-Mode TX 0 dBm: 7.3 mA

I've read an explanation that the most energy-hungry component is the local oscillator, which generates the high-frequency carrier wave and has to run for both reception and transmission. However, it's not clear to me why the generated sine wave would need a similarly high amplitude for reception as for transmission. An alternative hypothesis is that running all of the (other) analog and digital RF components is what consumes the energy. Can you clear up the confusion?

Best Answer

In short: receiving is much more complicated than transmitting.

You'll notice that whatever you measure in the real world is overlaid with noise.

The question "given all this noise with a bit of signal in it, how do I know what the transmitter meant to transmit?" is the central problem that communications engineering tries to solve.

So, to receive a couple of bits correctly, your receiver needs to:

  • receive continuously, even when there's no signal on the air, in order to notice when a signal appears. That means the whole receive chain, including an ADC sampling at a couple of megasamples per second, runs all the time.
  • Detect something like a preamble. That usually involves a correlation: for every new sample (a couple of million per second), take the most recent e.g. 2000 samples and compare them to a known sequence (see the correlation sketch after this list).
  • Once a signal actually is detected, correct all the channel impairments that are harmful to your type of transmission. Depending on the system, this involves:
    • Frequency correction (no two oscillators in this universe are identical. Your receiver runs at a slightly different frequency than your transmitter, and that breaks basically everything that isn't very basic. You need to estimate the frequency error, which typically involves tracking phase errors or doing statistics, and then multiplying with a synthesized sinusoid or adjusting a power-hungry oscillator; a sketch follows this list)
    • Timing estimation (your sampling instants are not synchronous to when the transmitter transmitted a symbol. Fixing that typically involves complex multiplications, time-shifting filters, or adjustable and power-hungry oscillators.)
    • Channel equalization (your signal doesn't only take the shortest path; multiple reflections reach the receiver. If the time difference between the shortest and the longest path is not negligibly small compared to a symbol duration, you need to remove the echoes. Typically this involves solving an equation system with many unknowns, or something similar, and then applying a filter whose complexity grows at best quadratically with the channel length; see the equalizer sketch after this list)
    • Phase correction (the channel may still rotate the phase of your received symbols. This calls for a phase-locked loop or some other control mechanism)
  • Symbol decision (great! After all these corrections, if everything went right (it almost certainly didn't, 100% of the time), you're left with only the symbol that was sent, plus noise. So, which symbol was sent? Make a guess based on a defined decision algorithm, or make a soft guess and say "I'm 89% certain"; see the decision sketch after this list)
  • Channel decoding (The transmitter didn't just transmit the data bits – it added forward error correction redundancy, which allows you to correct the errors you still make. These decoding algorithms can be very computationally intensive; a toy example follows below.)
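
To make the per-sample cost of preamble detection concrete, here is a minimal correlation sketch in Python/NumPy. The preamble contents, lengths and threshold here are illustrative assumptions, not values from any particular chip:

```python
import numpy as np

def detect_preamble(samples, preamble, threshold=0.8):
    """Slide a known preamble over the received samples and return the
    indices where the normalized correlation magnitude exceeds the
    threshold. np.correlate conjugates its second argument, so this is
    a matched filter for complex baseband samples."""
    corr = np.correlate(samples, preamble, mode="valid")
    # Normalize by preamble energy and local signal energy so the
    # threshold doesn't depend on the absolute receive level.
    pre_energy = np.sum(np.abs(preamble) ** 2)
    window = np.convolve(np.abs(samples) ** 2,
                         np.ones(len(preamble)), mode="valid")
    metric = np.abs(corr) / np.sqrt(pre_energy * window + 1e-12)
    return np.nonzero(metric > threshold)[0]
```

Note the cost: even this toy version performs on the order of len(preamble) complex multiply-accumulates per incoming sample, and it has to run at the full sample rate whether or not a packet ever arrives.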
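
As a rough illustration of the "estimate the frequency error, then multiply with a synthesized sinusoid" step, here is a hedged sketch that assumes a known reference sequence was received. The function names are mine, and real receivers use more robust estimators:

```python
import numpy as np

def estimate_cfo(rx, ref):
    """Estimate the carrier frequency offset in radians per sample.
    Stripping the known reference leaves a slowly rotating phasor;
    the average phase step between consecutive samples is the offset."""
    residual = rx * np.conj(ref)                  # remove known modulation
    steps = residual[1:] * np.conj(residual[:-1]) # per-sample phase steps
    return np.angle(np.sum(steps))                # averaged increment

def correct_cfo(samples, f_off_rad):
    """Derotate: multiply with a synthesized complex sinusoid."""
    n = np.arange(len(samples))
    return samples * np.exp(-1j * f_off_rad * n)
```

The correction alone costs one complex multiplication per sample, continuously, on top of whatever the estimator costs.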
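
For the equalization step, a minimal zero-forcing sketch: given an estimate h of the channel impulse response (obtaining h is itself an estimation problem, omitted here), design FIR equalizer taps by least squares and then filter. The tap count and delay are illustrative choices:

```python
import numpy as np

def zf_equalizer_taps(h, n_taps=16, delay=8):
    """Solve H w = e_delay in the least-squares sense, where H is the
    convolution matrix of the channel h. The overall response h * w
    then approximates a pure delay, i.e. the echoes cancel."""
    n_out = n_taps + len(h) - 1
    H = np.zeros((n_out, n_taps), dtype=complex)
    for k in range(n_taps):
        H[k:k + len(h), k] = h           # k-th shifted copy of h
    target = np.zeros(n_out, dtype=complex)
    target[delay] = 1.0                  # desired: delayed impulse
    taps, *_ = np.linalg.lstsq(H, target, rcond=None)
    return taps

# Applying it costs len(taps) multiply-accumulates per sample:
# equalized = np.convolve(received, taps, mode="same")
```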
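
The symbol decision itself is comparatively cheap: a nearest-neighbour search over the constellation. A hard-decision sketch for QPSK (the constellation and bit mapping here are illustrative):

```python
import numpy as np

# Gray-mapped QPSK constellation with unit average energy (an assumption)
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def hard_decision(symbols):
    """Return, for each received symbol, the index of the closest
    constellation point (maximum likelihood under white noise)."""
    distances = np.abs(symbols[:, None] - QPSK[None, :])
    return np.argmin(distances, axis=1)
```

A soft decision would instead output a confidence per bit (the "I'm 89% certain" case), which is what feeds the channel decoder.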
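
Finally, a deliberately tiny channel-decoding example: soft-combining a rate-1/3 repetition code. Real radios use far stronger codes (convolutional, BCH, LDPC), whose decoders cost orders of magnitude more operations per bit than this toy, and that cost is part of the receive power budget:

```python
import numpy as np

def decode_repetition(llrs, repeats=3):
    """Soft-decision decoding of a repetition code: sum the per-copy
    log-likelihood ratios, then slice. Convention here: positive LLR
    means bit 0 is more likely."""
    combined = llrs.reshape(-1, repeats).sum(axis=1)
    return (combined < 0).astype(int)    # negative total -> decide bit 1
```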