Electronics – oscilloscope performance at slow (<1 Hz) signals

Tags: display, oscilloscope

My question needs a little background, I apologize for this in advance.

I have slowly changing signals, typically in the 0.1–1 Hz range, which I want to observe on a scope. I'm measuring things like slowly-changing temperatures.

When I make any change on the scope's front panel while observing slow signals, here are the three types of scope responses I've seen:

  1. The scope screen freezes for 6-12 seconds before it "comes alive" again and shows the moving trace. I have observed this behavior in a Rigol 1054Z and Keysight DSO-1002A.

  2. The scope clears the screen and immediately starts a new trace from the edge of the screen, but with no 6-12 second dead time. I see this behavior on an Instek GDS-2104E and an Instek GDS-1054B.

  3. The scope immediately applies the front-panel change you made and simply carries on with the display at the new setting: no 6-12 second dead time, no blanking and restarting of the trace. It just keeps going. This is the best behavior, of course, and I see it on a Tek DPO-3034 and a Tek TDS-2012.

My question is: how do I refer to this behavior, that is, what do I call it? If I'm talking with an apps engineer at an instrument company and I want to know whether their scope handles this scenario (slow-signal display) according to option 1, 2, or 3 above, what do I ask?

I've searched here and on oscilloscope forums and I don't see this feature of scope operation addressed. Is this addressed in the spec sheet for a scope? What do I look for? I know most folks don't care much about how well a scope goes slow; we want them to go fast, right? 😉 But I would appreciate if anyone could point me to where I can find some answers…

Thanks all, and again sorry for the long-windedness.

Best Answer

What's happening in #1 is that the scope is acquiring data for the left-hand side of the screen, then switches to real-time(-ish) plotting on the right-hand side of the screen.

When you set a trigger point on the oscilloscope, you are basically saying that you want the trigger event to sit at the middle of the screen. That means the scope has to acquire half of the record before the trigger event occurs, so it has to keep a continuously updated buffer of the most recent samples in case a trigger event pops up afterwards. It uses a rotational (circular) buffer to make sure it has the pre-trigger data when it needs it.
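The circular-buffer idea above can be sketched in a few lines. This is a simplified illustration of pre-trigger capture, not any vendor's actual firmware; the function and parameter names are made up for the example:

```python
from collections import deque

def capture_with_pretrigger(samples, buffer_len, trigger_level):
    """Sketch of pre-trigger capture: keep the newest buffer_len samples
    in a ring buffer; once it is full, arm the trigger. On a rising-edge
    crossing of trigger_level, the buffer contents become the left
    (pre-trigger) half of the displayed record."""
    ring = deque(maxlen=buffer_len)  # circular buffer: oldest samples fall off
    armed = False
    prev = None
    for i, s in enumerate(samples):
        ring.append(s)
        if not armed:
            # Can't trigger yet: the pre-trigger half must fill first.
            armed = len(ring) == buffer_len
        elif prev is not None and prev < trigger_level <= s:
            return list(ring), i  # pre-trigger record + trigger index
        prev = s
    return None, None  # no valid trigger found in this sweep
```

The "must fill first" condition is exactly the dead time the questioner sees: at slow timebases, filling that buffer takes seconds.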

So the "dead time" is the duration of the left half of your screen. At 1 second/division on a Keysight DSOX1002A you'll have a delay of 5 seconds while the scope fills the pre-trigger buffer. Then (in auto trigger mode) the scope will automatically trigger and plot out the second half of the data.
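The arithmetic behind that 5-second figure is just half the screen width in time, assuming a typical 10-division horizontal grid and a centred trigger point:

```python
def pretrigger_dead_time(time_per_div_s, h_divisions=10):
    """Time needed to fill the pre-trigger (left-half) buffer after a
    settings change, assuming the trigger point sits at screen centre
    and a 10-division horizontal grid (both are assumptions here)."""
    return (h_divisions / 2) * time_per_div_s

# 1 s/div on a 10-division screen -> 5 s of dead time
```

Moving the trigger position toward the left edge of the screen shrinks the pre-trigger portion and therefore the dead time.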

2: For the Instek scopes, it sounds like they are simply saying, "Forget it, we haven't found a valid trigger; grab new data, and if a valid trigger event occurs, maybe we'll catch it next time." The Keysight (and other circular-buffer-based oscilloscopes) will give you the trigger where you want it if it happens to pop up in the first half of that data collection. The Instek won't register it as a trigger (to my knowledge).

3: Tek has an interesting philosophy on this. From a Keysight standpoint, I appreciate that what you see is what actually happened. For example, if I change the V/div setting, I don't want to see old signal on screen that was captured at a different V/div setting. It's a philosophy thing more than anything. In fact, you'll see this for acquisition modes too. For example, if you are in High-res mode and change a V/div or time/div setting, the oscilloscope will actually re-run the high-resolution plotting algorithm on the same data. It's done in hardware, so there's no noticeable delay. Tek, on the other hand, will just keep the old plot, because their plotting is a software/processor thing and can take a while under certain circumstances.

I hope that all makes sense! I highly recommend trying roll mode, no delay!