Electrical – How is rise/fall time defined?


I understand that rise/fall times are related to RC charging/discharging waveforms.

What I don't understand is how the 0% and 100% points are chosen. Do they have to be defined beforehand for a signal?

My oscilloscope will automatically find the rise/fall time with 10-90% gating, as in the picture below, for an arbitrary input. How does it automatically choose a 100% point if an RC waveform only asymptotically approaches its final value? How does it know to avoid ringing at the top of the waveform?

[Oscilloscope screenshot: automatic rise-time measurement with 10-90% thresholds]

Update 1: In response to a request to check the oscilloscope manual:

"The rise time of a signal is the time difference between the crossing of the lower threshold and the crossing of the upper threshold for a
positive- going edge. The X cursor shows the edge being measured."

My confusion is that I never set any thresholds; the measurement runs automatically just by choosing "rise time". I'm not sure whether it simply defaults to the max and min of what's on screen as the 100% and 0% levels.
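For what it's worth, IEEE Std 181 (the pulse-measurement standard most scopes follow) defines the 0% and 100% references as the waveform's "base" and "top" state levels, usually estimated from the modes of the amplitude histogram rather than from the min and max. That is what keeps overshoot and ringing from dragging the 100% level upward. Below is a minimal Python sketch of that idea; the bin count, the histogram split, and the 10%/90% fractions are my assumptions, not any particular scope's actual algorithm:

```python
import numpy as np

def rise_time(t, v, lo_frac=0.1, hi_frac=0.9, bins=100):
    """10-90% rise time of a positive-going step captured in (t, v).

    Base and top levels are the modes of the lower and upper halves
    of the amplitude histogram (IEEE 181-style state levels), so
    ringing near the top does not pull the 100% reference up the
    way max() would.
    """
    counts, edges = np.histogram(v, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    mid = bins // 2
    base = centers[:mid][np.argmax(counts[:mid])]  # mode of lower half
    top = centers[mid:][np.argmax(counts[mid:])]   # mode of upper half
    lo = base + lo_frac * (top - base)             # 10% threshold
    hi = base + hi_frac * (top - base)             # 90% threshold
    t_lo = t[np.argmax(v >= lo)]  # first crossing of the lower threshold
    t_hi = t[np.argmax(v >= hi)]  # first crossing of the upper threshold
    return t_hi - t_lo

# Example: RC step (tau = 1 us) with some baseline before the edge.
t = np.linspace(-5e-6, 10e-6, 15001)
v = np.where(t < 0, 0.0, 3.3 * (1 - np.exp(-t / 1e-6)))
print(rise_time(t, v))  # ~2.2e-06 s (ideal: tau * ln 9)
```

Note that the capture needs to dwell at both levels for the histogram modes to be meaningful, which is roughly what a scope also needs before its automatic measurement is trustworthy.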

My question is about more than just this particular oscilloscope, though. I'm trying to understand whether there is a commonly accepted, rigorous definition, even outside of oscilloscope measurements. If I draw the waveform above, how do you choose the 0% and 100% points?

Best Answer

There is an accepted definition for calculating settling time, the time after which the waveform has reached its final value to within a desired tolerance: https://en.wikipedia.org/wiki/Settling_time

So, after the settling time, the waveform is at 100% +/- the tolerance.
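Here is a sketch of that definition in Python, assuming you already know the final value (or take it from the steady-state end of the record); the 2% tolerance is just an example, and it is taken relative to the final value, which matches the step amplitude for a step from 0:

```python
import numpy as np

def settling_time(t, v, final, tol=0.02):
    """Time after which v stays within +/- tol*final of the final value.

    Per the definition above: the settling time is set by the last
    instant at which the waveform is still outside the tolerance band.
    """
    outside = np.abs(v - final) > tol * abs(final)
    if not outside.any():
        return t[0]                      # settled for the whole record
    last_out = np.where(outside)[0][-1]
    return t[last_out + 1] if last_out + 1 < len(t) else float("inf")

t = np.linspace(0, 10e-6, 10001)
v = 3.3 * (1 - np.exp(-t / 1e-6))        # RC step, tau = 1 us
print(settling_time(t, v, final=3.3))    # ~3.9e-06 = tau * ln(1/0.02)
```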

In practice, of course, you take VDD to be 100% and 0 V to be 0% (or whatever is appropriate for the type of output you're looking at).
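As a worked example of how fixed references make the number well defined: for an ideal RC step to VDD, v(t) = VDD * (1 - e^(-t/RC)). The 10% point is crossed at t = RC * ln(10/9) and the 90% point at t = RC * ln(10), so the 10-90% rise time is RC * ln(9), about 2.2 * RC (the familiar rule of thumb), even though the waveform never actually reaches 100%.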