Electronics – master-slave flip-flops

Tags: clock, flip-flop

I'm having some trouble finding an example of the time-instability that motivates the flip-flop. Wherever I look, the explanations are awfully airy-fairy (or at least my understanding of them is). For example, here they explain that there can be instability if the clock pulse is sampling from combinational circuits. I'm looking for a concrete example of such instability (I need these things explained to me like I'm a six-year-old kid), not just the claim that "things can be unstable". If I wrote in a maths exam "the given implication can be problematic" without providing a counterexample, I wouldn't get half a mark, and by the same token, if I were authoring an article that motivates flip-flops, I would consider it essential to "prove" their necessity.

My problem is this: if I have worked out the [maximum] propagation delay for a given circuit, I don't need to wait for a clock-pulse edge; I can just sample the output after that amount of time. In other words, even with a flip-flop I have to work out my clock-pulse speed, so why go to all the trouble of having two latches when I know that the desired output will exist after one pulse-length of time?

So if I had many different circuits that relied on each other's outputs, instead of flip-flopping them all, I would just activate them in the relevant order according to a clock pulse. I'm obviously missing something simple here, because as far as I can see the flip-flop's role in preventing instability isn't necessary…

The JK flip-flop is a nice way to "regulate" the output, but it solves an altogether different problem: when J=K=1, the toggle that the JK latch would otherwise perform for the entire period that the clock pulse is 1 happens only once, which means the final output is deterministic rather than undefined. That doesn't explain why a master-slave arrangement is necessary on a D latch or an RS latch, though.
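The toggle-once behaviour described above can be sketched in a few lines of Python. This is a hypothetical timing model, not from the text: it treats a level-sensitive JK latch with J=K=1 as re-evaluating once per gate delay while the clock is high, so the final value depends on the (arbitrary) pulse width, whereas an edge-triggered device toggles exactly once per cycle.

```python
# Hypothetical sketch: level-sensitive JK latch vs. edge-triggered JK flip-flop
# with J = K = 1 (toggle mode).

def jk_latch_during_high(q, gate_delays):
    # While the clock stays high, the latch re-evaluates its own output
    # every gate delay, toggling each time.
    for _ in range(gate_delays):
        q = 1 - q
    return q

def jk_flipflop_edge(q):
    # Edge-triggered: one toggle per clock edge, regardless of pulse width.
    return 1 - q

print(jk_latch_during_high(0, 5))  # 1: odd number of gate delays fit in the pulse
print(jk_latch_during_high(0, 6))  # 0: even number -- the result depends on timing
print(jk_flipflop_edge(0))         # 1: always, deterministic
```

The point is that the latch's result is a function of how many gate delays fit into the clock-high interval, which is exactly the non-determinism the master-slave arrangement removes.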

On that note, I'll add a second question: what's the use of the JK latch/flip-flop? It seems to be introduced in many texts as an improvement on the RS latch because of the problematic S=R=1 situation and the resulting race condition. However, if we set up the RS latch in a flip-flop, we give ourselves a better alternative to the JK flip-flop: it acts like an RS latch, but when R=S=1, on the clock's rising edge both Q and Q' are 0, and on the falling edge there's no change in the slave latch, so the problematic R=S=1 situation becomes stable. The only advantage JK has over this setup is that J=K=1 actually provides a new functionality, toggle; I'm not sure whether that's desirable or not…

Hoping for some insight…

Best Answer

The master-slave arrangement doesn't strictly solve the metastability issue, AFAICT. It is commonly used to cross over between different clock domains of synchronous logic, but I don't quite see what improvement it makes for a purely asynchronous input (the slave gets a clean state, but that state may be derived from a metastable transition anyway). It could simply be an incomplete description, as you could add a hysteresis function by combining the outputs of the two registers.

As for the differences between SR, JK, D or even T flip-flops, it tends to boil down to which inputs are asynchronous. The simplest SR latches do not toggle with S=R=1, but simply keep whichever state was set last (or in the worst case, oscillate with a gate delay) - that's the race. The JK, on the other hand, will transition on the clock edge - synchronous behaviour. It is thus in their nature that a T register can only be synchronous, and that an asynchronous D latch is transparent while latching. The SR register you describe doesn't have the T function, which can be useful depending on the application. For instance, a ripple counter can be described purely with T registers. Simply put, the JK gives you a complete set of operations (set, clear, toggle, and no-op) without costing an extra control line.

In synchronous logic, we frequently use wide sets of registers to implement a larger function. It doesn't strictly matter there whether we use D, T, JK or whatever registers, as we can just redesign the logic function that drives them to include feedback (unless we have to build that logic by hand - e.g. in 74-family logic). That's why FPGAs and such tend to have only D registers in their schematic representations. What does matter is that the register itself introduces the synchronous operation - steady state until the next clock. This allows combining plenty of side-by-side registers, or registers with feedback functions.
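The "redesign the logic that drives them" idea can be shown with a short sketch (my own, under the usual textbook feedback equation): a plain D register behaves as a JK register if its D input is driven by D = J·Q' + K'·Q, which is exactly what FPGA synthesis does when you describe toggle behaviour on D-only hardware.

```python
# Emulating a JK register with a D register plus a feedback function.
# The register itself only ever stores D; the JK behaviour lives in
# the combinational logic driving D.

def d_input_for_jk(q, j, k):
    # Standard conversion equation: D = J*~Q + ~K*Q
    return int((j and not q) or (not k and q))

q = 0
trace = []
for _ in range(4):          # four clock edges with J = K = 1 (toggle)
    q = d_input_for_jk(q, 1, 1)
    trace.append(q)
print(trace)  # [1, 0, 1, 0]
```

The same trick turns a D register into a T register (D = Q xor T), which is why an FPGA having "only D registers" costs nothing in expressive power.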

As for the choice between delayed-pulse and clock-synchronous logic, it's not an automatic one. Some early computers (e.g. the PDP-1) and even some highly energy-efficient ones (e.g. GreenArrays) use the delayed-pulse design, and it is in fact comparable to a pipelined design in synchronous logic. The carry-save adder demonstrates the crucial difference: it's a pipelined design where you don't actually have a known value, not even an intermediate one, until the pulse from the last value to enter has come out the other end. If you know at the logic-design stage that the circuit only performs repeated accumulation and only the final sum is used, it may be the best choice. Meanwhile, FPGAs are typically designed with only a few clock nets and therefore do not adapt well to delayed-pulse logic (though it can be approximated with clock gating).
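The carry-save property - no single intermediate word is a valid total until the end - can be sketched numerically. This is my own illustration of the idea, using Python integers as bit-vectors: each step applies the full-adder equations bitwise, keeping separate "sum" and "carry" words, and only one final carry-propagating add at the end resolves the result.

```python
# Carry-save accumulation: per step, apply full-adder equations bitwise,
# deferring carry propagation. Neither word alone is the running total.

def csa_step(s, c, x):
    new_s = s ^ c ^ x                              # bitwise sum
    new_c = ((s & c) | (s & x) | (c & x)) << 1     # bitwise carry, shifted up
    return new_s, new_c

values = [5, 9, 12, 7]
s, c = 0, 0
for v in values:
    s, c = csa_step(s, c, v)
    # here, s and c individually are meaningless; only s + c is the total

print(s + c)  # 33 -- one final ordinary add resolves all deferred carries
```

This mirrors the answer's point: in such a design the "true" value only exists once the last input has propagated all the way through.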

I hope this is more helpful than further confusing... interesting questions!