OK - I solved the problem. In the SPI setup on my microcontroller, I changed both the clock polarity and the clock phase, and that solved it. I now get very clean, smooth curves.
It makes sense now. The 16-bit words read out from the ADC were indeed incorrect: each one contained bits from both the previous word and the current one. That explains both the periodicity and the weirdness in the plots.
Thankfully, both the DAC and the ADC seem to work fine with the new clock polarity and phase, despite being from different manufacturers :-)
We can always use the Nyquist theorem to decide the sampling rate. But in the case of a bandpass signal, undersampling (a sampling rate below the Nyquist rate) can also do the job.
Let's assume we have a continuous bandpass input signal of bandwidth B, centered at \$f_c\$ Hz, whose spectrum is that shown in the Figure.
We can sample that continuous signal at a rate, say \$f_{s'}\$ Hz, such that the spectral replications of the positive and negative bands, P and Q, just butt up against each other exactly at zero Hz. This situation is depicted in Figure (a). With an arbitrary number of replications, say m, in the range of \$2f_c - B\$, we see that:
$$mf_{s'} = 2f_c-B\ \ \mathrm{or}\ \ f_{s'} = \frac{2f_c - B}{m}$$
where m can be any positive integer, so long as \$f_{s'}\$ is never less than \$2B\$.
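As a quick numerical check (illustrative values, not from the source): take \$f_c = 20\$ MHz and \$B = 5\$ MHz, so that

$$f_{s'} = \frac{2f_c - B}{m} = \frac{35\ \mathrm{MHz}}{m}$$

Since \$f_{s'}\$ must stay at or above \$2B = 10\$ MHz, only \$m = 1, 2, 3\$ qualify, giving \$f_{s'}\$ of 35 MHz, 17.5 MHz, and about 11.67 MHz respectively.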
If we increase the sample rate above \$f_{s'}\$, the original spectra (bold) do not shift, but all the replications do: at zero Hz, the P band shifts to the right and the Q band shifts to the left. These replications then overlap and aliasing occurs. Thus, for a given m, there is an upper limit the sample rate must not exceed, or
$$f_{s'} < \frac{2f_c - B}{m}\tag1$$
If we instead reduce the sample rate below \$f_{s'}\$, the spacing between replications decreases in the direction of the arrows in Figure (b). Again, the original spectra do not shift when the sample rate changes. At some new sample rate \$f_{s''}\$, where \$f_{s''} < f_{s'}\$, the replication P just butts up against the original positive spectrum centered at \$f_c\$, as shown in Figure (c). Decreasing the sampling frequency any further causes aliasing, so there is a lower limit given by
$$f_{s''} > \frac{2f_c + B}{m+1}\tag2$$
From equations (1) and (2),
$$ \frac{2f_c - B}{m} > f_{s} > \frac{2f_c + B}{m+1}\tag3$$
where m is a positive integer and \$f_s > 2B\$.
For a given m, equation (3) gives the minimum and maximum sampling frequencies between which no aliasing occurs.
Content copied from this Source.
Best Answer
Assuming you want to use an interrupt to take a sample every n milliseconds, and that your uC runs on a much higher clock, you can tune the interrupt time-out to the signal received.
Most non-synchronised systems use an over-sampling scheme of between 8 and 32 samples per bit, as many hardware UART implementations do.
I do not know what the signal looks like, but if there is a known bit pattern, such as a start code somewhere, you can start sampling at the first edge of that start condition. Then, if the number of high or low samples you expect comes in within ±20%, you assume it's the start condition and adjust your time-out for the offset. The more measurements you take, the more accurate your tuning will be.
However, this relies on the start condition not also appearing inside an ongoing datastream. If every transmission always contains bits 100 ms in size, it becomes easy again: choose high or low, and when that level starts, you start counting.
Let's say you have 20 counts per bit (a 5 ms time-out). If you count 38 counts of low (as per your choice), you know your clock won't be off by 45%, so you assume you saw 2 low bits and that your clock is running fast by 2/40 => 5%, and you adjust the interrupt or clock prescaler accordingly. Clocks usually drift slowly, so this way you will always know how to decode the bits (since you are oversampling with a large margin), and you can tune your decoder to the signal continuously.
This is, in effect, a sort of soft PLL, running only on the assumptions that your uC clock is accurate to within at least 20% (which most recent uC RC oscillators are at factory default, over the full VCC range) and that your incoming signal knows best.
EDIT: If your module drives both low and high into a digital input, the noise received in the transition from one bit to the next is technically never more than one sample at a frequency this low. So if you use 5 samples per bit, you statistically have 4 reliable samples per bit; if you're off by 20%, that leaves 3. That should be fine, but it's on the edge; you might want to choose at least 8 samples, though again, the more samples, the higher your tuning accuracy. It's simply a trade-off against interrupt load, and if your uC runs on a measly 1 MHz, an interrupt every 5 ms is still peanuts in terms of code interruption.