Why does the timing change within a data frame using Manchester encoding?

communication, manchester-coding, reverse-engineering, rf

I have successfully received a couple of data frames from an RF transmitter. I compared a couple of consecutive frames, and the timing deviation between them is marginal, so I am convinced the capture is reasonably accurate to a 50 µs resolution.

I think it is using Manchester encoding and one of the properties of Manchester encoding is the ability to retrieve the clock signal from it. Thus I would expect that the signal's preamble obeys the clock of the data.

[Scope capture of the received frame showing the front porch, preamble, and data]

Apart from the front porch (−5.5 to 0 ms), I would expect the preamble (−38 to −5.5 ms) to obey the same clock as the data (starting at 0 ms), but it doesn't: the preamble clock is approximately 10% faster than the data clock.

What is the reasoning behind the change in clocks? Isn't that in contradiction with the clock-recovery property of Manchester encoding?

The receiver end is a TDA-5200 combined with a protected ATmega32L; the internals of the transmitter are probably similar but currently unknown.

Best Answer

Most receivers have data slicers that adjust dynamically to the incoming signal strength. As a result, you always get a bit stream, even when there is no transmitted signal to slice. In that case, the data slicer is just slicing noise.

One way or another, the receiver has to be able to identify the start of a manchester message, even though 1s and 0s are being continuously spit out of the data slicer. There are a number of schemes, but generally you require every message to be preceded by something that is identifiable but guaranteed not to be contained anywhere within a valid message. This something is generally referred to as the preamble.

The decoder is always looking for this special preamble signature, whether in the middle of decoding a message or not. Detecting the preamble even when the decoder thinks it's deep in a message is important for two reasons. The decoder may have gotten into the message erroneously, or it could be in a weakly received message that is getting stomped on by a new, more strongly received message. In the latter case, the original message can't ever be decoded anyway. The best you can do is not let the original partial message distract you from decoding the later message that you can decode.

There are many preamble schemes. Apparently in this case they deliberately used a different clock frequency so that the preamble will be detected as an invalid message after a few cycles. That's one valid way of doing things.

I usually use the same clock but a long sequence of successive long levels, which would be 000000.... or 111111... within a real message. However, I use a bit-stuffing scheme in the body of the message so that more than a pre-determined number of consecutive long levels isn't possible. For example, if the bit-stuffing rules allow at most 7 consecutive bits of the same value, then there can be at most 14 consecutive long levels within a valid message. My preamble deliberately violates this rule. As soon as the decoder sees the 15th consecutive long level, it aborts whatever logic it was in and goes into the preamble detected state, waiting for a start bit of the correct polarity.
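As a minimal sketch of what such a preamble detector might look like in C, assume the decoder is handed each level already classified as long or short; the state names, the 15-level threshold, and the function name are illustrative only, not from any particular implementation:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical decoder states; the names are illustrative only. */
    typedef enum {
        DEC_IDLE,         /* waiting for a preamble                   */
        DEC_PREAMBLE,     /* preamble seen, waiting for the start bit */
        DEC_IN_MESSAGE    /* decoding message bits                    */
    } dec_state_t;

    /* One more consecutive long level than a valid message can contain. */
    #define PREAMBLE_LONGS 15

    static dec_state_t state = DEC_IDLE;
    static uint8_t long_run = 0;    /* consecutive long levels seen so far */

    /* Called once per decoded level; is_long is true when the level lasted
       a full bit time rather than half a bit time. */
    void level_decoded(bool is_long)
    {
        if (is_long) {
            if (long_run < PREAMBLE_LONGS) {
                long_run++;
            }
            if (long_run == PREAMBLE_LONGS) {
                /* Too many long levels for any valid message: treat it as a
                   preamble and abort whatever decoding was in progress. */
                state = DEC_PREAMBLE;
            }
        } else {
            long_run = 0;
            /* ... normal short-level handling for the current state ... */
        }
    }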

Added about data slicing

Data slicing refers to interpreting the incoming analog signal as a digital signal. Ideally, the analog signal coming out of the raw radio receiver looks like a digital signal already, but that doesn't happen in reality for a variety of reasons. Even if it did, the amplitude of that signal would be quite dependent on distance from the transmitter, just to list one variable. As a result, the raw demodulated radio signal can't be interpreted directly by a digital logic gate. The process of going from the received analog signal to a true digital signal is called data slicing.

Old analog data slicers were little more than a comparator with the received signal on one input and a low pass filtered version of that signal on the other. The low pass filter frequency was set low enough to not react much to individual bits, but to find the DC average over a few bits. This was then used as the signal's average half-way level to decide whether the instantaneous signal was high or low.

One reason manchester encoding is popular with these kinds of signals is that every bit is high half the time and low half the time, averaging to the middle level over every bit. Still, the analog data slicers needed a few bits to settle properly and start producing the correct digital signal after a large change in the level of the received signal. This is yet another reason for the preamble, which is to give the data slicer time to settle.

Nowadays with microcontrollers readily available and probably decoding the data sliced signal anyway, the raw demodulated signal can be fed directly into the micro and the data slicing done in firmware. This allows for easily employing non-linear operations in the data slicer that would be difficult in analog hardware.

One scheme that I have used a few times is to sample the analog received signal around 10x per manchester bit. I keep a buffer of just a little more than one bit's worth of samples, find their min and max, and use the average of those as the slicing threshold. Since a high or low level never lasts more than 1 bit time in a manchester stream, this guarantees both a high and low are in the buffer when it matters. One advantage of this scheme is much faster settling of the data slicer than if an analog low pass filter were used.
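Here is a rough sketch of that min/max slicing scheme in C, assuming about 10 A/D samples per manchester bit; the window length, variable names, and 16-bit reading width are assumptions for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    #define SAMPLES_PER_BIT 10
    #define WIN_LEN (SAMPLES_PER_BIT + 2)   /* a little more than one bit time */

    static uint16_t win[WIN_LEN];   /* ring buffer of recent A/D readings */
    static uint8_t  win_idx = 0;

    /* Slice one new A/D reading into a digital level: true = high, false = low. */
    bool slice_sample(uint16_t adc)
    {
        win[win_idx] = adc;
        win_idx = (uint8_t)((win_idx + 1u) % WIN_LEN);

        /* Find the min and max over the window.  A manchester level never
           lasts more than one bit time, so once a signal is present the
           window always contains both a high and a low level. */
        uint16_t lo = win[0];
        uint16_t hi = win[0];
        for (uint8_t i = 1; i < WIN_LEN; i++) {
            if (win[i] < lo) lo = win[i];
            if (win[i] > hi) hi = win[i];
        }

        uint16_t threshold = (uint16_t)(lo + (hi - lo) / 2u);
        return adc > threshold;
    }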

Often it helps to apply maybe two poles of low pass filtering to the stream of A/D readings before any other processing. This helps reduce random noise and a little bit of the quantization noise. The filter should settle to around 80-90% in 1/2 bit time.
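One simple way to get that kind of filtering in firmware is two cascaded single-pole (exponential-average) stages. The sketch below assumes a coefficient of 1/2 per stage, which with roughly 10 samples per bit settles to about 80-90% of a step within half a bit time; the coefficient and names are assumptions to be tuned for the actual sample rate:

    #include <stdint.h>

    static uint32_t stage1 = 0;
    static uint32_t stage2 = 0;

    /* Apply two cascaded single-pole low-pass stages to one A/D reading. */
    uint16_t filter_sample(uint16_t adc)
    {
        stage1 = (stage1 + adc) / 2u;       /* first pole  */
        stage2 = (stage2 + stage1) / 2u;    /* second pole */
        return (uint16_t)stage2;
    }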

The above is done in the A/D interrupt routine. After slicing, the A/D interrupt can then push the result onto a FIFO drained by the decoder running in foreground, or it can classify each level as long, short, or invalid, and push that onto a FIFO for the decoder to handle.
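As a sketch of the second option, the interrupt-side code below classifies each completed level as long, short, or invalid and pushes the result onto a small FIFO for the foreground decoder to drain; the FIFO size, sample counts, and tolerances are illustrative assumptions, not values from any specific design:

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { LEVEL_SHORT, LEVEL_LONG, LEVEL_INVALID } level_t;

    #define SAMPLES_PER_BIT 10
    #define FIFO_LEN        32

    static volatile uint8_t fifo[FIFO_LEN];
    static volatile uint8_t fifo_head = 0;
    static volatile uint8_t fifo_tail = 0;   /* advanced by the foreground decoder */

    static void fifo_push(level_t lv)
    {
        uint8_t next = (uint8_t)((fifo_head + 1u) % FIFO_LEN);
        if (next != fifo_tail) {             /* drop the event if the FIFO is full */
            fifo[fifo_head] = (uint8_t)lv;
            fifo_head = next;
        }
    }

    /* Called from the A/D interrupt with the freshly sliced digital level. */
    void isr_level_sample(bool level)
    {
        static bool     prev = false;
        static uint16_t run  = 0;

        if (level == prev) {
            run++;
            return;
        }

        /* The level just changed: classify how long the previous one lasted.
           The +/-2 sample tolerance is an illustrative assumption. */
        if (run >= SAMPLES_PER_BIT / 2 - 2 && run <= SAMPLES_PER_BIT / 2 + 2) {
            fifo_push(LEVEL_SHORT);
        } else if (run >= SAMPLES_PER_BIT - 2 && run <= SAMPLES_PER_BIT + 2) {
            fifo_push(LEVEL_LONG);
        } else {
            fifo_push(LEVEL_INVALID);
        }

        prev = level;
        run  = 1;
    }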

I have implemented this algorithm on a dsPIC with a 12 bit A/D decoding a 10 kbit/s manchester stream. It worked so well that it correctly decoded whole packets where the high/low amplitude was only a few LSB. I couldn't make out bits on the scope, but the digital data slicer picked them up anyway, and the decoder was able to decode the packet. The packet contained a 20 bit CRC checksum, which is how I know the decoder decoded the packet correctly.
