Data encoding for GFSK wireless

digital-communications · rf · wireless

We are designing an RF link (2-GFSK) using the CC1125, to replace an older system.

The older system uses NRZI data encoding with bit stuffing.

Bit stuffing is used to prevent the stop flag (0x7E) from appearing in the payload data and to avoid having too many 1's in a row.

An older RF engineer told me that bit stuffing was important, not only for synchronization, but also because the receiver adjusts itself to the centre of the two frequencies to achieve optimum reception. It is therefore sensitive to a long sequence of equal bits, as it would track away from the centre and towards the frequency f0 or f1, depending on the sequence.
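For reference, the stuffing scheme the old system describes is the classic HDLC one: after five consecutive 1's a 0 is inserted, so the flag byte 0x7E (01111110, six 1's in a row) can never occur inside the payload. A minimal sketch (the function names are my own, not from any particular library):

```python
def bit_stuff(bits):
    """HDLC-style stuffing: insert a 0 after every run of five 1's,
    so six consecutive 1's (part of the 0x7E flag) never appear."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    """Receiver side: drop the 0 that follows every run of five 1's."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:            # this is the stuffed 0 - discard it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out
```

A side effect, relevant to your question, is that stuffing also bounds the run length of 1's (though not of 0's), which limits how far the receiver's tracking can drift.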

But, in the case of the CC1125, or any other RF chip, how do I calculate the effect on receiver sensitivity (BER) of a long sequence of equal bits?

Example: if we send a 24-bit preamble with a 32-byte payload, how do I calculate the maximum number of allowed equal bits before the error rate exceeds some percentage?

Best Answer

Here's a non-mathematical approach to the problem using FSK as the example. My intent is to show that several factors contribute to being able to calculate an answer (which I don't intend to do).

Think about a simple data slicer sat after a simple FM demodulator: -

[diagram: a data slicer (comparator with an RC-filtered threshold) following an FM demodulator]

Next, imagine the receiver isn't locked into any valid transmission but after a little while along comes a TX preamble: -

[diagram: demodulator output with the slicer threshold (blue line) converging during the TX preamble]

Before the TX preamble, the data slicer is just receiving random noise from the demodulator and it's trying to make sense of that random noise because it's not a very clever circuit.

The blue line is the data slicer trying to track a potential FSK signal and if the demodulator has a bandwidth of several MHz the blue line can be sat several MHz away from where it should be when a proper transmission eventually comes along. OK so far?

So, along comes the TX preamble, and that TX preamble has to be long enough to drag the data slicer's filter (blue line) from one extreme of the demodulator's output to the precise centre frequency of the transmission. That's its whole purpose in life.

Are you able to see that in the diagram?

The data slicer above uses a simple RC low pass filter that has a 3dB point at a much lower frequency than the maximum data rate. It has to be like this or, when a bunch of zeros or ones come along in the transmission, the filter will migrate towards one side of the data and eventually there will be corruption.

So there are several factors: -

  • How wide is the demodulator's frequency range compared to the bandwidth of the transmission?
  • How long is the preamble in order to align the data recovery circuit with the transmission centre frequency?
  • What type of filters (1st order, 2nd order etc.) are used to align the data recovery?
  • How much noise is there - in other words, how far from the precise centre of the FSK bitstream can the data recovery circuit's estimate be before noise causes corruption?
  • How clever is the data recovery system at adapting its filters (once locked onto a preamble) so that drift away from the precise centre frequency (due to continuous runs of zeros or ones) is slowed down? This can make a massive difference of course - intelligence in this area is fundamental to reducing preamble length whilst living with extensive runs of no data transition.
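If you accept the first-order RC model for the slicer filter, a back-of-envelope answer to the original run-length question falls out directly: the threshold drifts towards the rail as 1 − e^(−N·Tbit/τ), so the longest tolerable run is N < −(τ/Tbit)·ln(1 − margin), where the margin is the fraction of the centre-to-rail distance you can afford to lose before noise starts flipping bits. The values of τ and margin below are illustrative assumptions, not CC1125 data; this ignores frequency offset, adaptive tracking and the actual noise statistics:

```python
import math

def max_equal_bits(tau_s, bit_rate, margin_frac):
    """Longest run of equal bits before a first-order slicer filter
    (time constant tau_s) drifts past margin_frac of the way from
    the centre to the rail. Rough estimate only."""
    t_bit = 1.0 / bit_rate
    return math.floor(-(tau_s / t_bit) * math.log(1.0 - margin_frac))
```

For example, with τ = 5 ms at 9600 bit/s and a 25 % margin this allows a run of 13 equal bits; a real answer needs the actual filter order and noise figures from the factors listed above.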

This was a simple example of FSK.
