Sampling a signal using a microcontroller


I would like to decode the signal from a radio clock source using an off-the-shelf receiver and a microcontroller.

The output of the receiver is a digital level that indicates whether the transmitted carrier is on or off.

The signal itself is encoded over a period of one minute at approximately 2 bits per second, each bit being 100 ms wide. (The signal is the NPL time signal, formerly MSF Rugby.)

It occurred to me that my microcontroller's clock will not be precise, and that the signal will also be subject to noise, so my sampling approach should really take both into account.

I could…

(1) have an interrupt on falling/rising edges

By detecting edges I could then time how long the current level lasts using the local clock.

(2) sample at periodic intervals, say every 20 ms

If I sampled every 20 ms I would get 5 samples per pulse (rough sketch below).
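
For concreteness, this is roughly what I picture for option (2). It is an untested sketch; pin_read(), handle_pulse() and the 20 ms timer hook are placeholder names, not a real API:

    /* Untested sketch of option (2): sample the receiver output on a fixed
     * 20 ms tick and count how long each level persists. */

    extern int  pin_read(void);                    /* placeholder: receiver output, 0 or 1 */
    extern void handle_pulse(int level, int samples); /* placeholder: consumer of runs   */

    #define SAMPLES_PER_PULSE 5                    /* 100 ms pulse / 20 ms tick */

    static volatile int last_level = -1;
    static volatile int run_length = 0;

    void timer_tick_20ms(void)                     /* imagined periodic timer callback */
    {
        int level = pin_read();

        if (level == last_level) {
            run_length++;
            return;
        }

        /* Level changed: pass the finished run on. handle_pulse() would work
         * out how many 100 ms pulses the run represents, allowing some slop. */
        if (run_length >= SAMPLES_PER_PULSE - 1)
            handle_pulse(last_level, run_length);

        last_level = level;
        run_length = 1;
    }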

Neither of these approaches feels quite right. The first would cause a large number of interrupts if there were a burst of noise. The second seems like it could accumulate errors due to local clock drift.

Any suggestions?

Thanks in advance

Best Answer

Assuming you use an interrupt to sample every n milliseconds, and that your uC runs on a much higher clock, you can tune the interrupt time-out to the received signal.

Most non-synchronised systems use an over-sampling scheme of somewhere between 8 and 32 samples per bit, as do many hardware UART implementations.

I do not know exactly what the signal looks like, but if there is a known bit pattern, such as a start code, you can begin sampling at the first edge of that start condition. If the number of high or low samples you then count comes in within +/- 20% of what you expect, you assume it really was the start condition and adjust your time-out for the measured offset. The more measurements you make, the more accurate your tuning becomes.
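
As a rough illustration (the names here are made up), the tolerance check and the resulting correction factor could be as simple as:

    /* Illustrative only: does a measured run of samples look like the start
     * condition (within +/-20% of its expected length)? If so, the relative
     * error tells you how far off the local tick is. */
    static int matches_start(int measured, int expected, double *error_out)
    {
        double error = (double)(measured - expected) / expected;

        if (error < -0.20 || error > 0.20)
            return 0;               /* too far off: not the start condition */

        /* e.g. error = +0.05 means you took 5% too many samples, i.e. your
         * time-out is 5% short, so lengthen it by roughly that amount. */
        *error_out = error;
        return 1;
    }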

However, this only works if the start condition cannot also occur in the middle of an ongoing data stream. If the transmission always consists of bits 100 ms wide, it becomes easy again: pick one level, high or low, and start counting whenever that level begins.

Let's say you have 20 counts per bit (a 5 ms time-out). If you count 38 consecutive low samples, reading that as a single bit would require your clock to be running nearly twice as fast as nominal, which it clearly is not, so you assume you saw 2 low bits and that your tick is off by 2/40 => 5% (you got fewer samples than expected, so it is running slightly slow), and you adjust the interrupt time-out or clock prescaler accordingly. A clock usually drifts slowly, so this way you will always know how to decode the bits (since you are oversampling with a large margin) and you can keep tuning your decoder to the signal continuously.
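
Put together, the counting and continuous re-tuning could look roughly like the sketch below. It is illustrative only: the names are invented, and on the real part process_sample() would be called from the 5 ms timer interrupt with the pin state. It is written as plain C so the logic can be tried on a PC first.

    #include <stdio.h>
    #include <math.h>

    #define NOMINAL_SAMPLES_PER_BIT 20.0        /* 100 ms bit / 5 ms tick */

    static double samples_per_bit = NOMINAL_SAMPLES_PER_BIT;
    static int    last_level      = -1;
    static int    run_length      = 0;

    /* Called once per sampling tick with the receiver level (0 or 1). */
    static void process_sample(int level)
    {
        if (level == last_level || last_level < 0) {
            last_level = level;
            run_length++;
            return;
        }

        /* Level changed: decide how many whole bits the finished run was. */
        int bits = (int)lround(run_length / samples_per_bit);

        if (bits >= 1) {
            /* e.g. 38 samples -> 2 bits -> 19 samples/bit measured, i.e. the
             * sampling interval is ~5% longer than intended (tick too slow). */
            double measured = (double)run_length / bits;

            /* Gentle first-order correction: trust the signal, not the RC. */
            samples_per_bit += 0.1 * (measured - samples_per_bit);

            for (int i = 0; i < bits; i++)
                printf("%d", last_level);       /* hand decoded bits onward */
        }
        /* Runs much shorter than half a bit give bits == 0 and are dropped. */

        last_level = level;
        run_length = 1;
    }

    int main(void)
    {
        /* Fake stream whose "bits" are 19 samples wide instead of 20, i.e. a
         * ~5% mismatch between the signal and the local sampling clock. */
        int pattern[] = { 1, 0, 0, 1, 1, 1, 0, 1 };

        for (unsigned i = 0; i < sizeof pattern / sizeof pattern[0]; i++)
            for (int s = 0; s < 19; s++)
                process_sample(pattern[i]);

        process_sample(!pattern[7]);            /* final edge flushes last run */
        printf("\nestimated samples/bit: %.2f\n", samples_per_bit);
        return 0;
    }

Here the correction is applied to the software estimate of samples per bit; on the actual hardware you could just as well nudge the timer reload or prescaler instead, as described above.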

This is in effect a sort of soft PLL, relying only on the assumptions that your uC clock is accurate to within about 20% (which the factory-trimmed RC oscillators in most recent uCs manage over the full VCC range) and that the incoming signal knows best.

EDIT: If your module actively drives the line both low and high into a digital input, then at a signal this slow the noise around the transition from one bit to the next should never corrupt more than one sample. So with 5 samples per bit you statistically have 4 reliable samples; be off by 20% and that leaves 3. That should work, but it is on the edge, so you might want at least 8 samples per bit; again, the more samples, the higher your tuning accuracy. It is simply a trade-off against interrupt load, and even if your uC runs at a measly 1 MHz, an interrupt every 5 ms is still peanuts in terms of code interruption.
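
To make use of those extra samples, a simple majority vote over the samples inside one bit is enough to throw away a single bad sample near an edge; for instance (illustrative helper only):

    /* Majority vote over the n samples taken inside one bit: a single noisy
     * sample near an edge is simply outvoted. */
    static int decide_bit(const int *samples, int n)
    {
        int ones = 0;
        for (int i = 0; i < n; i++)
            ones += samples[i];
        return 2 * ones > n;        /* 1 if more than half the samples were high */
    }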