I understand that rise/fall times are related to RC charging/discharging waveforms.
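Just to show what I mean by that relationship (this is my own derivation, so correct me if it's off): for an ideal RC step response, the 10%-90% rise time works out to roughly 2.2 RC:

$$v(t) = V\left(1 - e^{-t/RC}\right) \quad\Rightarrow\quad t_r = t_{90\%} - t_{10\%} = RC\left(\ln 10 - \ln\tfrac{10}{9}\right) = RC\ln 9 \approx 2.2\,RC$$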
What I don't understand is how the 0% and 100% points are chosen. Do they have to be defined beforehand for a signal?
My oscilloscope will automatically find the rise/fall time with 10%-90% gating (like the picture below) for an arbitrary input. How does it automatically choose a 100% point if an RC waveform only asymptotically approaches its final value? How does it know to avoid ringing at the top of the waveform?
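My best guess (and part of what I'm asking) is that the scope estimates the base and top levels statistically, e.g. from a histogram of the samples, so that overshoot and ringing don't drag the 100% level upward, and then interpolates the 10%/90% crossings from those levels. Here is a minimal sketch of that idea in Python; the histogram approach and all the parameter choices are my assumptions, not anything taken from the scope manual:

```python
import numpy as np

def rise_time(t, v, lo=0.1, hi=0.9):
    """Estimate the 10%-90% rise time of a single rising edge.

    Assumption: the base and top levels are taken as the two modes of the
    amplitude histogram (the flat regions before and after the edge),
    not the raw min/max, so overshoot/ringing is mostly ignored.
    """
    # Histogram the samples; the two tallest bins approximate the
    # settled low and high levels of the waveform.
    counts, edges = np.histogram(v, bins=100)
    centers = (edges[:-1] + edges[1:]) / 2
    top2 = centers[np.argsort(counts)[-2:]]
    base, top = min(top2), max(top2)

    amp = top - base
    v_lo = base + lo * amp   # 10% level
    v_hi = base + hi * amp   # 90% level

    # First crossing of a level on a rising edge, with linear interpolation.
    def first_crossing(level):
        idx = np.nonzero((v[:-1] < level) & (v[1:] >= level))[0][0]
        frac = (level - v[idx]) / (v[idx + 1] - v[idx])
        return t[idx] + frac * (t[idx + 1] - t[idx])

    return first_crossing(v_hi) - first_crossing(v_lo)

# Example: step into an RC with tau = 1 us, with a flat baseline before
# the edge; the result should be roughly ln(9) * tau = 2.2 us.
tau = 1e-6
t = np.linspace(-5e-6, 10e-6, 10_000)
v = 1.0 - np.exp(-np.clip(t, 0, None) / tau)
print(rise_time(t, v))
```

Is something along these lines what real scopes actually do, or do they just use the on-screen min/max?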
Update 1: As requested, I checked the oscilloscope manual. It says:
"The rise time of a signal is the time difference between the crossing of the lower threshold and the crossing of the upper threshold for a
positive- going edge. The X cursor shows the edge being measured."
My confusion is that I never set any thresholds; the measurement runs automatically as soon as I select "rise time". I'm not sure whether it simply defaults to the minimum and maximum of whatever is on screen as the 0% and 100% levels.
My question is about more than just this particular oscilloscope, though. I'm trying to understand whether there is a commonly accepted, rigorous definition, even outside of oscilloscope measurements. If I simply draw the waveform above on paper, how do you choose the 0% and 100% points?