‘Time step too small’ error when simulating a D flip-flop in LTspice

Tags: digital-logic, ltspice, simulation

I'm just looking to simulate a 3-bit counter in LTspice and I'm receiving an error during simulation:

Analysis: time step too small; time = 1.1e-009, timestep = 1.125e-019: trouble with d-flop instance a3

The circuit I'm simulating looks like this:

[schematic of the 3-bit counter built from D flip-flops]

and the clock/simulation parameters can be seen here:

[clock and simulation command settings]

where the voltage source parameters are as follows:

[voltage source parameter dialog]

Could anyone give me some pointers as to why I might be receiving this error and how it can be fixed? Thanks.

Best Answer

You need to give the D flip-flop a delay, through the instance parameter td. The reason is that, with no delay, the state at the output and the state at the input coincide, and (quoting the manual, LTspice > Circuit Elements > A. Special Functions):

The gates and Schmitt trigger devices supply no timestep information to the simulation engine by default. That is, they don't look when they are about to change state and make sure there's a timestep close to either side of the state change.

What happens is this: if the output changes, it must be because the input has changed state. But when the output is tied directly back to the input, with no delay between the two, the solver sees the output and the input change simultaneously, and that cannot be, since the output can only change after the input has changed.

The solver then reduces the timestep, trying to resolve what made both the input and the output change state at seemingly the same instant. It keeps reducing it, but because the direct connection collapses the two states into one, no amount of reduction can separate them. When the timestep becomes too small to reduce any further, it gives up with the error above.

The solution is very simple: add td=1n...100n; td=10n is a good enough value. Don't hesitate to add it to the other gates, too. With a delay present, the output changes state only td seconds after the input does, which means the solver gets to see a change at each of the two nodes, but separately, in a way that makes sense. And, if you think about it, in real life there is always some amount of delay; no change happens instantaneously (thank goodness for causality).
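For reference, here is a minimal netlist sketch of a single divide-by-2 stage with the delay applied. The node names, the 1 us clock period, the .tran length and the assumed DFLOP pin order are all illustrative, not taken from your schematic; in practice you would just Ctrl+right-click the flip-flop symbol and add td=10n to one of its attribute fields (e.g. SpiceLine).

    * divide-by-2 stage, sketch only: node names and 1 us clock period are assumptions
    * DFLOP pin order assumed as D CLK PRE CLR - - Q Q\ ; unused pins tied to node 0
    V1  CLK 0  PULSE(0 1 0 1n 1n 499n 1u)
    A1  Qb CLK 0 0 0 0 Q Qb  DFLOP td=10n
    .tran 20u
    .end

With Q\ fed back to D and td=10n, the output toggles 10 ns after each clock edge, so the solver always sees the input change first and the output change later.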

Also, there's no need to set the trise/tfall of the source a million times smaller than the period; 100...1000 times smaller is enough, unless your requirements are very specific (which I doubt).
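As a sketch, assuming a 1 us clock period (substitute your actual numbers), edges of about 1 ns are plenty:

    * PULSE(Vlow Vhigh Tdelay Trise Tfall Ton Tperiod) -- values here are assumptions
    V1  CLK 0  PULSE(0 1 0 1n 1n 499n 1u)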

In addition to td, there are other temporal parameters that can only help in the long run. Two of them, tau and tripdt, are ones I warmly recommend to anyone. For this case, tau=10n tripdt=10n would suit you just fine. What they do is force the solver to reduce its timestep only when a change at the output happens in less than tripdt seconds, while tau imposes a 1st-order RC time constant of 10 ns on the output. So, for those tripdt seconds, the engine slows down, computes the output, then returns to a large timestep. This keeps the edges sharp, yet smooth enough to avoid hiccups in the derivative, while staying very fast everywhere else.
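Applied to the flip-flop from the sketch above (the node names are still just placeholders), all three parameters go on the same instance line, exactly as they would appear in the symbol's SpiceLine field:

    * delay, output time constant and timestep tolerance on the same A-device
    A1  Qb CLK 0 0 0 0 Q Qb  DFLOP td=10n tau=10n tripdt=10n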