Electronic – Ideal wireless channel (coherence time/bandwidth)


Wireless channels can be characterized as either slow or fast fading, depending on how their coherence time compares with the symbol duration. Similarly, they are either flat fading or frequency selective, depending on how their coherence bandwidth compares with the signal bandwidth.
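The two comparisons above can be sketched as a small helper. This is a hypothetical function illustrating the usual rules of thumb, not a standardized API; the numbers in the example are merely plausible values for a WCDMA-like signal.

```python
def classify_channel(coherence_time, symbol_duration,
                     coherence_bandwidth, signal_bandwidth):
    """Classify a fading channel along its two axes (rule of thumb):
    slow vs fast from coherence time vs symbol duration,
    flat vs frequency selective from coherence bandwidth vs signal bandwidth."""
    time_axis = ("slow fading" if coherence_time > symbol_duration
                 else "fast fading")
    freq_axis = ("flat fading" if coherence_bandwidth > signal_bandwidth
                 else "frequency selective")
    return time_axis, freq_axis

# Illustrative numbers: a ~5 MHz signal in a channel whose coherence
# bandwidth is ~1 MHz is frequency selective; at pedestrian speeds the
# coherence time far exceeds the symbol duration, so it is slow fading.
print(classify_channel(coherence_time=10e-3, symbol_duration=0.26e-6,
                       coherence_bandwidth=1e6, signal_bandwidth=5e6))
```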

Are some of these characteristics (or combinations) generally more desirable than others? In UMTS, spreading techniques (CDMA) are used to artificially increase the signal bandwidth and make the channel frequency selective, the aim being to exploit frequency diversity.

On the other hand, LTE and Wi-Fi use OFDM(A) to create many narrowband channels/subcarriers which are flat fading. However, in these cases it seems important that the channel is also time selective (fast fading) to allow forward error correction to work.

Does this mean that frequency- and time-selective channels are generally more desirable, and that the channel should be at least frequency or time selective? Or does it depend on something else (what?), such that different characteristics are more beneficial in different scenarios?

Best Answer

It is always undesirable to have a fading channel, whether fast, slow, frequency selective, or whatever.

But we don't get to choose how the channel behaves, so we have to design coding schemes to overcome the fading. The better the channel, the less forward error correction is needed; FEC costs payload capacity, complexity, and latency, all of which are avoided if possible.

Multipath fading causes frequency selectivity. If the channel is very narrow, the receiver will only see a small width of the fade, and it will appear flat. If the channel is wide, then the receiver will see a significant variation in the signal strength across the channel.
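A toy two-ray model makes this concrete: a direct path plus one delayed echo produces a frequency response with regularly spaced fades. The function below is an illustrative sketch (the delay, echo amplitude, and bandwidths are assumed values, and real channels have many paths); it shows the gain is essentially constant across a 10 kHz channel but swings between a peak and a deep fade across a 5 MHz one.

```python
import cmath
import math

def two_ray_gain(f_hz, delay_s, a=0.8):
    """|H(f)| for a direct path plus one echo of relative amplitude a
    and excess delay delay_s. Toy model for illustration only."""
    return abs(1 + a * cmath.exp(-2j * math.pi * f_hz * delay_s))

tau = 1e-6  # 1 us excess delay -> fades spaced 1 MHz apart

# Across a narrow 10 kHz channel the gain is essentially flat...
narrow = [two_ray_gain(f, tau) for f in (0.0, 5e3, 10e3)]
# ...but across a wide 5 MHz channel it swings from 1.8 down to 0.2.
wide = [two_ray_gain(f, tau) for f in (0.0, 0.5e6, 1.0e6)]
print(narrow)
print(wide)
```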

Schemes with narrow channels struggle to track Doppler-induced frequency shifts, which limits mobility, and can suffer long drop-outs from slow fades. Hence most systems designed in the last decade or two use relatively wide channels.

If portions of a channel are known to work well, with other portions being poor, but it's not known a priori which portions, then a good coding scheme will spread redundancy across the channel, so that whatever gets lost can be reconstructed.

In the case of a CD recording channel, which is very good until there is a scratch, the coding scheme spreads the redundant data out in time, so there is enough good data left to reconstruct the hundreds of bits lost during a scratch.
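The time-spreading idea can be sketched with a minimal block interleaver: write symbols into a grid row by row, read them out column by column. A burst of consecutive erasures on the medium then lands on symbols that were far apart in the code word, so modest redundancy can reconstruct them. This is an illustrative sketch only; CDs actually use the more elaborate cross-interleaved Reed-Solomon scheme (CIRC).

```python
def interleave(symbols, rows, cols):
    """Write row-by-row into a rows x cols block, read column-by-column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # The inverse operation just swaps the roles of rows and columns.
    return interleave(symbols, cols, rows)

data = list(range(12))              # 12 code symbols
sent = interleave(data, 3, 4)
for i in (4, 5, 6):                 # a "scratch" wipes 3 consecutive symbols
    sent[i] = None
received = deinterleave(sent, 3, 4)
print(received)
# The burst is now dispersed: no two adjacent symbols are lost, so each
# part of the code word loses at most one symbol and FEC can fill the gaps.
```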

OFDM will seek to spread the redundant data apart in frequency, across the channel, and in time, to combat both impulse noise and frequency selective fading. Multiple receive antennae are used to combat slow fading with spatial diversity.
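The frequency-spreading half of that can be sketched with a stride-based mapper: place coded symbol k on subcarrier (k x stride) mod N. When the stride is coprime to N this is a permutation, so neighbours in the code word land on widely separated subcarriers. This is a toy sketch under assumed parameters (12 subcarriers, stride 7), not the standardized LTE or Wi-Fi interleaver.

```python
import math

def frequency_interleave(coded, n_subcarriers, stride):
    """Map coded symbol k to subcarrier (k * stride) % n_subcarriers.
    With gcd(stride, n_subcarriers) == 1 this is a permutation."""
    assert math.gcd(stride, n_subcarriers) == 1
    assert len(coded) == n_subcarriers
    out = [None] * n_subcarriers
    for k, symbol in enumerate(coded):
        out[(k * stride) % n_subcarriers] = symbol
    return out

mapping = frequency_interleave(list(range(12)), 12, 7)
print(mapping)
# A deep fade wiping out subcarriers 3-5 only hits code symbols 9, 4,
# and 11 -- far apart in the code word, so the FEC can recover them.
```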

Once the system is coded to cope with multipath, Single Frequency Networks (SFNs) can be deployed, to make much more efficient use of the spectrum for broadcast (DAB for instance). 4G radio can also use downlinks from multiple base stations on the same frequency to exploit the same.

Having said all that, don't get too hung up on the technology. Many (too many) of the decisions about what gets deployed are made for reasons of commercial politics.

Organisations like ETSI are made up of manufacturers who hold lots of patents on bits of the technology. The rather bizarre word for how they work is coopetition. That is, they cooperate to create the global specification (they have to, you can't make a market from 10 different non-interoperable systems), and then compete with each other to make money from it. It's a wonder the process works at all.

During the standards setting process, company A wants to use their hyperbligual multiplex patent. Company B says OK, but only if it's modified with our left-handed frequency channeliser. And both get written into the standard, even if just one of them would deliver the benefit. So the basics, channel width, OFDM, are reasonable. The details, OMG, nightmare, ice-pack on the forehead needed to read the standards. But free-market commerce hasn't come up with a better way to do it.
