The problem is that you are using a MEMS digital accelerometer, and what you are reading is the SCK (serial clock) pin of its serial interface. To function, that sensor needs to be interfaced with a microcontroller, which configures the sampling frequency, the measurement range, and so forth.
So you shouldn't expect a square wave at 100 Hz, but rather a short burst (whose duration depends on the bus bitrate) corresponding to each transmission. If you zoom in on a burst and the scope is fast enough, you should see the clock square wave inside it.
Moreover, if the SPI interface is not configured correctly, the uC will not generate the clock (the sensor operates as an SPI slave), and you won't read any values.
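To get a feel for why SCK looks like sparse spikes rather than a 100 Hz square wave, here is a back-of-the-envelope sketch. The word sizes and clock rate below are assumptions for illustration, not values from your setup:

```python
# Rough estimate of how long one SPI clock burst lasts on the SCK line.
# (Transfer size and SCK frequency below are assumed, not from the post.)

def burst_duration_us(bits_per_transfer: int, sck_hz: float) -> float:
    """Duration of one SPI transaction's clock burst, in microseconds."""
    return bits_per_transfer / sck_hz * 1e6

# Example: one 8-bit address byte plus 3 axes x 16 bits, at 1 MHz SCK.
bits = 8 + 3 * 16
print(burst_duration_us(bits, 1_000_000))  # 56.0 (microseconds per burst)
```

At a 100 Hz output data rate those ~56 us bursts repeat every 10 ms, so SCK is idle more than 99% of the time, which is exactly the "fast spike" pattern described above.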
If you want to see a 100 Hz signal, you could probe the INT pin, which signals an interrupt to the microcontroller every time a new measurement is available. Then, if the microcontroller handles the interrupt properly, you will see the pulse corresponding to a transmission every 10 ms (100 Hz).
But make sure you're not using motion detection; in that mode, the interrupt is generated only when an acceleration event is detected.
To read data over the SPI port, the first thing is to configure communication with the sensor; otherwise, it won't send any data at all. Then check whether the microcontroller is receiving the interrupts and reading the data the sensor provides; you can use a timer to timestamp the values and verify the rate at which they arrive.
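The timestamp check suggested above can be sketched like this (the timestamps here are simulated; on the real microcontroller they would come from a hardware timer):

```python
# Sketch of the rate check: record a timestamp for each sample read from
# the sensor, then verify the average interval matches the expected
# 10 ms (100 Hz). Timestamps below are simulated for illustration.

def sample_rate_hz(timestamps_s):
    """Average sample rate computed from a list of timestamps in seconds."""
    intervals = [b - a for a, b in zip(timestamps_s, timestamps_s[1:])]
    return len(intervals) / sum(intervals)

# Simulated capture: one sample every 10 ms.
ts = [i * 0.010 for i in range(101)]
print(round(sample_rate_hz(ts)))  # 100
```

If the measured rate is far from the configured output data rate, that points at a configuration or interrupt-handling problem rather than a sensor fault.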
Interesting.
I don't think I've ever seen this anomaly before.
It's often convenient to think of a SAR ADC as if it samples the input analog voltage at some instant in time.
In practice, there is a narrow window of time where changes in the input analog voltage --
or noise on the analog voltage reference, or noise on the GND or other power pins of the ADC --
can affect the output digital value.
If the input voltage is slowly rising during that window, then the less-significant bits of the SAR output will be all-ones.
If the input voltage is slowly falling during that window, then the less-significant bits of the SAR output will be all-zeros.
A very narrow noise pulse at the "wrong" time during conversion can have a similar effect.
Right now my best guess is that you're using some sort of analog switches or op amps that don't work quite as well near the high and low power rails (higher resistance, or something similar) as they do near mid-scale, which somehow lets in one of the above kinds of noise and causes the less-significant bits to read all-ones or all-zeros.
I've seen some sigma-delta ADCs and sigma-delta DACs that have good resolution at mid-scale but worse resolution near the rails -- but that effect looks different from what you show.
The "plot of the difference between one sample and the next sample over the entire full scale range" is fascinating.
If I were you, I would make a similar plot that, instead of making the X value the difference between one sample and the next, makes the X value the least-significant 6 bits of the raw ADC output sample.
That would quickly show if the "stuck" values are mostly lots of 1s in the least-significant bits (maybe input is slowly rising?) or lots of 0s in the least-significant bits (maybe input is slowly falling?).
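The suggested analysis can be sketched as follows (the ADC codes here are made up for illustration; you would feed in your raw capture instead):

```python
# Sketch: instead of plotting sample-to-sample differences, tabulate the
# least-significant 6 bits of each raw ADC code and see how often they
# land on all-ones (0x3F) versus all-zeros (0x00).

from collections import Counter

def lsb6_histogram(codes):
    """Histogram of the least-significant 6 bits of each ADC code."""
    return Counter(c & 0x3F for c in codes)

# Synthetic data for illustration only.
codes = [0x1000, 0x103F, 0x2000, 0x2A3F, 0x0C15]
hist = lsb6_histogram(codes)
print(hist[0x3F], hist[0x00])  # 2 2  (all-ones count, all-zeros count)
```

If the "stuck" samples pile up at 0x3F or 0x00 in that histogram, that supports the slowly-rising / slowly-falling input hypothesis above.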
I am sampling "pulsed" DC voltages. That means that for each measurement I put a voltage on the DAC, let it settle for at least 100 times its settling time, then tell the ADC to convert; when conversion is finished, I put the DAC back to 0 V.
My understanding is that when ADC manufacturers say "no missing codes",
the test they use involves several capacitors adding up to a huge capacitance directly connected to the ADC input,
and some system driving a large resistor connected to that capacitance that very slowly charges or discharges the capacitor --
slowly enough that the ADC is expected to see exactly "the same" voltage (within 1/2 LSB) for several conversion cycles before it sees "the next" voltage (incremented by 1 going up, decremented by 1 going down).
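A "no missing codes" style check on such a slow-ramp capture can be sketched like this (the sample data is synthetic; a real test would use the captured ramp):

```python
# Sketch: given a monotonic slow-ramp capture, every ADC code between
# the minimum and maximum observed codes should appear at least once.
# Codes that never appear are candidate "missing codes".

def missing_codes(samples):
    """Return the codes between min(samples) and max(samples) that never appear."""
    present = set(samples)
    return [c for c in range(min(samples), max(samples) + 1) if c not in present]

# Synthetic ramp for illustration: code 14 is skipped.
ramp = [10, 10, 11, 11, 11, 12, 13, 13, 15, 15]
print(missing_codes(ramp))  # [14]
```

Comparing which codes go missing in the continuous-slope test versus the pulsed test might help isolate the component responsible.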
If I were you, I would see if such a "continuous slope" test gives the same weird "stuck code" symptoms as the "pulsed" test.
Perhaps that would give more clues as to exactly what component(s) are causing this problem.
Please tell us if you ever figure out what caused these symptoms.
Best Answer
I don't think there's any preference. The SPI bus has never been formally standardised, and it's been around for almost forty years, so pretty much every combination of these "modes" has been used by some vendor. Don't assume the mode numbers always mean the same thing, either.
https://www.byteparadigm.com/applications/introduction-to-i2c-and-spi-protocols/
To make things more interesting, there are "nonstandard" implementations; one popular choice is adding a "ready" signal from the slave. SPI was originally designed (in 1979!) to communicate with simple devices that had guaranteed response times. This can cause problems with more complex devices such as auxiliary microcontrollers or, say, standalone communication modules.
Throw in dual SPI (which runs half-duplex with 2 bits transferred simultaneously) or quad SPI, which adds two extra data pins. There are, naturally, different incompatible versions of both.
So that's why you have these different modes: there's no standard, and you have to support different behaviors to maximize compatibility.
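For reference, the four modes are just the combinations of clock polarity (CPOL) and clock phase (CPHA). The numbering below is the common Motorola-style convention, but as noted above, don't assume every vendor follows it:

```python
# The common (Motorola-style) SPI mode numbering: mode = CPOL*2 + CPHA.
# CPOL sets the clock's idle level; CPHA selects the sampling edge.

def spi_mode(cpol: int, cpha: int) -> int:
    """Mode number in the common convention."""
    return (cpol << 1) | cpha

for cpol in (0, 1):
    for cpha in (0, 1):
        print(f"mode {spi_mode(cpol, cpha)}: CPOL={cpol}, CPHA={cpha}")
# mode 0: CPOL=0, CPHA=0
# mode 1: CPOL=0, CPHA=1
# mode 2: CPOL=1, CPHA=0
# mode 3: CPOL=1, CPHA=1
```

When a datasheet only shows a timing diagram, reading off the idle clock level (CPOL) and the sampling edge (CPHA) is usually more reliable than trusting a bare mode number.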