The problem is that you are using a MEMS digital accelerometer, and what you are reading is the SCK (serial clock) pin of the serial interface. To function, that sensor must be interfaced with a microcontroller, which configures its sampling frequency, measurement range, and so forth.
So you shouldn't expect a square wave at 100 Hz, but rather a short burst (its width depends on the bus bitrate) corresponding to each transmission. If you zoom in on one of those bursts with a fast enough scope, you should see the clock square wave inside it.
Moreover, if you don't configure the SPI interface correctly, the uC will not generate the clock (the sensor operates in slave mode), and you won't read any values.
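As a concrete illustration, here is a sketch of the two-byte frames the SPI master would clock out to configure such a sensor. The register address, the configuration value, and the read-flag convention are modeled on ST parts like the LIS3DH; they are assumptions here, so check your sensor's datasheet:

```python
READ_FLAG = 0x80   # MSB set = read on many ST accelerometers (assumption)
CTRL_REG1 = 0x20   # hypothetical ODR / axis-enable register (LIS3DH-style)

def write_frame(reg, value):
    """Two-byte SPI frame: register address (write), then the data byte."""
    return bytes([reg & 0x7F, value & 0xFF])

def read_frame(reg):
    """Address byte with the read flag set; the master clocks a dummy byte
    while the slave shifts out the register contents."""
    return bytes([READ_FLAG | reg, 0x00])

# 0x57 would mean 100 Hz ODR, normal mode, X/Y/Z enabled on a LIS3DH.
print(write_frame(CTRL_REG1, 0x57))  # → b' W' (bytes 0x20, 0x57)
```

Until the master actually shifts frames like these out, SCK stays idle, which is why a scope on that pin shows nothing before the interface is configured.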
If you want to see a 100 Hz signal, you could probe the INT pin, which interrupts the microcontroller every time a measurement is available. Then, if the microcontroller handles the interrupt properly, you will see the pulse corresponding to a transmission every 10 ms (100 Hz).
But make sure you're not using motion detection; in that mode the sensor generates an interrupt only when an acceleration is detected.
To read the data over SPI, the first thing is to configure the communication with the sensor; otherwise it won't send any data at all. Then check whether the microcontroller is receiving the interrupts and reading the data the sensor provides; you can use a timer to timestamp the values and check the frequency at which they arrive.
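A host-side sketch of that frequency check, with simulated interrupt timestamps standing in for the real timer captures (the function name and the 10 ms spacing are illustrative):

```python
def measured_rate_hz(timestamps_ms):
    """Average data-ready rate from a list of interrupt timestamps in ms."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return 1000.0 / (sum(intervals) / len(intervals))

# Simulated capture: one data-ready interrupt every 10 ms, as expected
# from a sensor configured for a 100 Hz output data rate.
stamps = [i * 10 for i in range(101)]
print(measured_rate_hz(stamps))  # → 100.0
```

If the measured rate differs from the ODR you configured, the sensor is either misconfigured or the interrupt handler is missing events.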
If you're designing a circuit that measures some quantity, you'll ideally want to relate that measurement back to some absolute physical quantity. This is how NIST and other national standards bodies create the "standards" used to calibrate laboratory test equipment.
The details of how you might do this depend dramatically on exactly what it is your circuit is measuring.
To give an example, one of the conceptually simplest cases would be if you developed a timer circuit that is meant to accurately produce an output pulse once per second. You could relate the accuracy of your timer circuit to the actual definition of the second: "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom". That is, you could use an atomic clock built around a cesium cell to produce a 9.192631770 GHz reference signal, then divide that clock down to produce an absolute 1 Hz output signal that you could compare to your new circuit's output.
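As a side note on the arithmetic, the caesium frequency is not a convenient round number, so the divide-down chain has to realize an awkward ratio. A quick factorization (plain Python, just to show the numbers; a real standard synthesizes the frequency rather than cascading literal counters) gives the stages such a chain would need:

```python
def prime_factors(n):
    """Trial-division factorization; fine for a single 10-digit number."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

CS_HZ = 9_192_631_770  # the SI definition of the second, in Hz
print(prime_factors(CS_HZ))  # → [2, 3, 3, 5, 7, 7, 47, 44351]
```

Note the large prime factor 44351: you cannot reach 1 Hz with a chain of small binary or decade dividers alone, which is one reason frequency synthesis in such systems is built around phase-locked loops rather than simple counters.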
Most likely, you would keep your atomic clock in very highly controlled environmental conditions, while adjusting the temperature, humidity, etc., around your new circuit over the ranges you want it to operate under, to show that its performance is insensitive to those environmental influences.
Your example of a new ADC or DAC circuit is more challenging, because a "standard volt" is not easily produced; it must be derived from a frequency (i.e., time) measurement and fundamental constants using the Josephson effect.
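To make that concrete, the Josephson relation ties voltage directly to frequency: a junction driven at frequency f develops quantized voltage steps V = n·f/K_J, where K_J = 2e/h is the Josephson constant. A rough numerical sketch using the exact post-2019 SI constants; the 70 GHz drive frequency is just a typical example, not a value from the answer above:

```python
h = 6.62607015e-34    # Planck constant, J*s (exact in the SI since 2019)
e = 1.602176634e-19   # elementary charge, C (exact)
K_J = 2 * e / h       # Josephson constant, Hz/V (~483.6 THz/V)

f = 70e9              # microwave drive frequency, Hz (typical example)
n = 1                 # step index
v_step = n * f / K_J  # voltage of the first quantized step
print(v_step)         # about 1.447e-4 V, i.e. ~145 microvolts per step
```

Practical voltage standards stack thousands of junctions in series to reach useful voltages like 1 V or 10 V, so the "standard volt" really is a counted multiple of a measured frequency.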
Finally, in the worst case, where you are in fact producing a device as accurate as the national standards themselves, you would have to convince someone else to independently produce a similar circuit, and then compare the two new devices in order to put limits on the errors of either one. My understanding is that several national standards bodies are in fact doing just that to develop the watt balance as a reference standard for the measurement of mass.
This is not true. We may not have time-domain instruments that can display signals that fast, or direct digital synthesis methods capable of generating signals at these frequencies, but we have been able to generate signals in the hundreds of gigahertz for decades.
The LeCroy 100 GHz scope is, to my knowledge, still the only one with 0.1 THz bandwidth (though I've heard that might change in the next few years). I believe it was first demonstrated in 2014 and then released sometime in 2015, but don't quote me on that. In any case, that bandwidth of real-time time-domain analysis has only become available within the last decade.
But a quick Google search will show you people talking about sub-millimeter-wavelength systems and physics (sub-millimeter generally refers to signals with frequencies above 300 GHz) since the early 1900s. So people have been working with these signals for over a century.
By exploiting physical structures such as cavity resonators, we can generate signals at very high frequencies, and using non-linear devices we can now build mixers that operate at 1 THz. So if we can generate such a signal, and know it is a very pure sine, we can feed it into our new scope and start from there.
When working at these frequencies, we very often don't work with the time-domain (so what an oscilloscope displays) but with the frequency domain (what a spectrum analyzer/network analyzer displays). In fact, I have been working with systems operating significantly above 100 GHz for a few years, but I have not used a scope with a bandwidth over 50 MHz in the last decade.
The front-ends of those scopes tend to operate in a more frequency-domain way than a time domain way - they use mixers and power dividers to cut the input signal into a number of bands (in the case of the LeCroy, I believe it is 3 bands), and then mix each of these down to DC. Then we digitize all of those, and use very complex and smart DSP to stitch them all together. Using careful characterization of the system, we can allow the analog front-end to misbehave to some extent, as we can compensate for it in the DSP (provided it misbehaves in a very predictable and repeatable way).
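The band-split-and-stitch idea can be sketched numerically. This toy uses ideal FFT-mask filters and a perfect digital mixer where the real front-end has analog diplexers and microwave mixers, and it omits the per-band ADCs, so it only illustrates the signal flow (all frequencies are in arbitrary units, and the two-band split is an assumption for simplicity):

```python
import numpy as np

fs, n = 1000.0, 4096
t = np.arange(n) / fs
# Wideband input: one tone in the low band, one in the high band.
x = np.cos(2 * np.pi * 100 * t) + 0.5 * np.cos(2 * np.pi * 380 * t)

# Split at 250 with ideal filters (a real scope uses analog diplexers).
X = np.fft.rfft(x)
f = np.fft.rfftfreq(n, 1 / fs)
low = np.fft.irfft(np.where(f < 250, X, 0), n)
high = np.fft.irfft(np.where(f >= 250, X, 0), n)

# Mix the high band down near DC; each band would now feed a slower ADC.
lo = np.exp(-2j * np.pi * 250 * t)
high_bb = high * lo

# DSP "stitching": shift the high band back up and recombine the bands.
recon = low + np.real(high_bb * np.conj(lo))
print(np.max(np.abs(recon - x)))  # reconstruction error: FP roundoff only
```

In a real scope the stitching also has to correct each band's measured gain, phase, and mixer imperfections, which is where the careful characterization mentioned above comes in.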