"What does it mean when a flip flop performs a function like SET, CLEAR, HOLD, or TOGGLE?"
First, I'd like to explain latches. There is not much difference between latches and flip-flops: latches don't have a clock input, but flip-flops do. Here is a simple Set-Reset latch.
Explanation for SET: As the name suggests, applying logic HIGH to 'S' (Set) sets the output Q to logic HIGH. not(Q) will be its complement (logic LOW).
Explanation for RESET: Applying logic HIGH to 'R' (Reset) forces the output Q to logic LOW, and not(Q) goes HIGH. (Note that 'S' should be at logic LOW at that time.)
Explanation for HOLD: If you apply logic LOW to both inputs (S and R), the output does not change; that is, it HOLDS the previous output.
Explanation for TOGGLE: Applying logic HIGH to both inputs leads to an indeterminate state for the SR latch; this is the SR latch's limitation. In a J-K flip-flop, however, applying logic HIGH to both J and K changes the output (an output of HIGH goes to LOW and vice versa). This is called TOGGLING.
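The four behaviors above can be sketched as a small behavioral model. This is a minimal illustration, not real hardware: the function names and the convention of returning None for the indeterminate state are my own assumptions.

```python
# Behavioral sketch of an SR latch and a JK flip-flop, illustrating
# SET, RESET (CLEAR), HOLD, and TOGGLE. Names are illustrative.

def sr_latch(s, r, q):
    """Next Q of an SR latch given inputs S, R and current Q."""
    if s and r:
        return None          # indeterminate: the SR latch's limitation
    if s:
        return 1             # SET: Q goes HIGH
    if r:
        return 0             # RESET: Q goes LOW
    return q                 # HOLD: Q keeps its previous value

def jk_flipflop(j, k, q):
    """Next Q of a JK flip-flop at a clock edge; J=K=1 toggles."""
    if j and k:
        return 1 - q         # TOGGLE: HIGH -> LOW and vice versa
    if j:
        return 1             # SET
    if k:
        return 0             # RESET
    return q                 # HOLD

q = 0
q = sr_latch(1, 0, q)        # SET   -> 1
q = sr_latch(0, 0, q)        # HOLD  -> 1
q = sr_latch(0, 1, q)        # RESET -> 0
print(sr_latch(1, 1, q))     # None: indeterminate for the SR latch
q = jk_flipflop(1, 1, 0)     # TOGGLE -> 1
print(jk_flipflop(1, 1, q))  # TOGGLE back -> 0
```

Note how the JK flip-flop turns the SR latch's forbidden input combination into the useful TOGGLE behavior.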
"I start with a clock signal that goes into each flip flop. From there, I really have no clue how to trace what's going on."
Please note that the clock signal activates the latch circuit. A latch with a clock pulse is called a flip-flop, and applying a clock signal to activate a latch is called triggering. There are two types of triggering: edge triggering and level triggering.
In level triggering, the flip-flop accepts its input throughout the active level of the clock. In edge triggering, the flip-flop accepts input only when an edge of the clock is detected. Edge triggering is generally preferred.
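The difference between level and edge triggering can be seen by driving both kinds of element with the same input sequence. This is a hedged sketch; the function names and the (d, clk) test sequence are my own.

```python
# Level triggering (transparent D latch) vs. rising-edge triggering
# (D flip-flop), driven by the same D and clock sequence.

def d_latch(d, enable, q):
    """Level-sensitive: while enable is HIGH, Q follows D."""
    return d if enable else q

def d_flipflop(d, clk, prev_clk, q):
    """Edge-sensitive: Q captures D only on a rising edge of clk."""
    return d if (clk == 1 and prev_clk == 0) else q

hist_latch, hist_ff = [], []
q_latch = q_ff = 0
prev_clk = 0
for d, clk in [(0, 0), (1, 1), (0, 1), (1, 1), (1, 0)]:
    q_latch = d_latch(d, clk, q_latch)
    q_ff = d_flipflop(d, clk, prev_clk, q_ff)
    hist_latch.append(q_latch)
    hist_ff.append(q_ff)
    prev_clk = clk

print(hist_latch)  # -> [0, 1, 0, 1, 1]: follows D while clk is HIGH
print(hist_ff)     # -> [0, 1, 1, 1, 1]: changed only at the rising edge
```

While the clock is HIGH, the latch output changes every time D changes; the flip-flop updates exactly once, at the rising edge.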
I have been thinking about this definition a lot today.
As others pointed out, the exact meanings vary. On top of that, you will probably see more people get this wrong than right, even on this site. I don't care what Wikipedia says!
But in general:
- A flip flop will change its output state at most once per clock cycle.
- A latch will change its state as many times as the data transitions during its transparency window.
Additionally,
- A flip flop is very safe, almost fool-proof. For this reason synthesis tools usually use flip flops. But they are slower than a latch (and use more power).
- Latches are harder to use properly. But, they are faster than flip flops (and smaller). So, custom circuit designers will often "spread the flip flop" across their digital block (a latch on either end with opposite phase) to squeeze some extra picoseconds out of a bad timing arc. This is shown at the bottom of the post.
A flip flop is most typically characterized by a master-slave topology. This is two coupled (there can be logic between), opposite phase latches back to back (sometimes in industry called L1/L2).
This means a flip flop inherently consists of two memory elements: one to hold during the low cycle and one to hold during the high cycle.
A latch is just a single memory element (SR latch, D latch, JK latch). Just because you introduce a clock to gate flow of data into the memory element does not make it a flip flop, in my opinion (although it can make it act like one: i.e. more rising edge triggered). It just makes it transparent for a specific amount of time.
Shown below is a true flip flop created from two SR latches (notice the opposite phase clocks).
And another true flip-flop (this is the most common style in VLSI) from two D-latches (transmission gate style). Again notice the opposite phase clocks:
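The master-slave idea can also be sketched behaviorally: two transparent latches on opposite clock phases, which together yield edge-triggered behavior. This is a simplified model of the topology described above, with names of my own choosing.

```python
# Master-slave D flip-flop from two opposite-phase transparent latches
# (L1 master, L2 slave). The slave output updates only at the rising edge.

def d_latch(d, enable, q):
    """Transparent while enable is HIGH; holds otherwise."""
    return d if enable else q

def master_slave_ff(d, clk, q_master, q_slave):
    """Master (L1) is open while clk is LOW; slave (L2) while clk is HIGH."""
    q_master = d_latch(d, not clk, q_master)   # L1: opposite phase
    q_slave = d_latch(q_master, clk, q_slave)  # L2
    return q_master, q_slave

qm = qs = 0
out = []
for d, clk in [(1, 0), (1, 1), (0, 1), (0, 0), (0, 1)]:
    qm, qs = master_slave_ff(d, clk, qm, qs)
    out.append(qs)
print(out)  # -> [0, 1, 1, 1, 0]
```

Because one latch is always closed, data can never race straight through: the master samples D while the clock is low, and the slave releases that value only when the clock goes high. That "lock-and-dam" behavior is exactly what makes it a flip flop.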
If you pulse the clock to a latch quickly enough, it starts to resemble flip flop behavior (a pulse latch). This is common in high speed datapath design because of the lower D->Out and Clk->Out delay, in addition to the better setup time granted by transparency through the duration of the pulse (hold time also must increase, a small price to pay). Does this make it a flip flop? Not really, but it sure acts like one!
However, this is much harder to guarantee to work. You must check across all process corners (fast NMOS, slow PMOS, high wire capacitance, low wire resistance, as one example) and all voltages (low voltage causes problems) that the pulse from your edge detector remains wide enough to actually open the latch and allow data in.
For your specific question, as to why it is considered a pulse latch instead of a flip flop, it is because you truly only have a single level sensitive bit storage element. Even though the pulse is narrow, it does not form a lock-and-dam system which creates a flip flop.
Here is an article describing a very similar pulse latch to your inquiry. A pertinent quote: "If the pulse clock waveform triggers a latch, the latch is synchronized with the clock similarly to edge-triggered flip-flop because the rising and falling edges of the pulse clock are almost identical in terms of timing."
EDIT
For some clarity I included a graphic of latch based design. There is a L1 latch and L2 latch with logic in between. This is a technique which can reduce delays, since a latch has lesser delay than a flip flop. The flip flop is "spread apart" and logic put in the middle. Now, you save a couple gate delays (compared to a flip flop on either end)!
Best Answer
One reason we clock flip flops is so that there isn't any chaos when the outputs of flip flops are fed through some logic functions and back to their own inputs.
If a flip-flop's output is used to calculate its input, it behooves us to have orderly behavior: to prevent the flip-flop's state from changing until the output (and hence the input) is stable.
This clocking allows us to build computers, which are state machines: they have a current state, and calculate their next state based on the current state and some inputs.
For example, suppose we want to build a machine which "computes" an incrementing 4 bit count from 0000 to 1111, and then wraps around to 0000 and keeps going. We can do this by using a 4 bit register (which is a bank of four D flip-flops). The output of the register is put through a combinatorial logic function which adds 1 (a four bit adder) to produce the incremented value. This value is then simply fed back to the register. Now, whenever the clock edge arrives, the register will accept the new value which is one plus its previous value. We have an orderly, predictable behavior which steps through the binary numbers without any glitch.
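The 4-bit counter described above can be sketched as a simulation, with the register standing in for the bank of D flip-flops and the +1 feedback standing in for the adder. The function name is illustrative.

```python
# Sketch of the wrap-around counter: a 4-bit register (bank of four
# D flip-flops) whose output goes through a +1 adder and feeds back
# to the register's D inputs.

register = 0b0000                  # state held by the four D flip-flops

def clock_edge(state):
    """At each clock edge the register captures state + 1 (mod 16)."""
    return (state + 1) & 0b1111    # 4-bit adder wraps 1111 -> 0000

counts = []
for _ in range(18):                # run a little past one full cycle
    register = clock_edge(register)
    counts.append(register)

print(counts[:3], counts[14:18])   # -> [1, 2, 3] [15, 0, 1, 2]
```

Between clock edges the adder output may glitch as its carries ripple, but the register ignores all of that: only the settled value present at the edge is captured, which is why the count steps cleanly through 0000 to 1111 and wraps.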
Clocking behaviors are useful in other situations too. Sometimes a circuit has many inputs, which do not stabilize at the same time. If the output is instantaneously produced from the inputs, then it will be chaotic until the inputs stabilize. If we do not want the other circuits which depend on the output to see the chaos, we make the circuit clocked. We allow a generous amount of time for the inputs to settle and then we indicate to the circuit to accept the values.
Clocking is also inherently part of the semantics of some kinds of flip flops. A D flip flop cannot be defined without a clock input. Without a clock input, it will either ignore its D input (useless!) or simply copy the input at all times (not a flip-flop!). An RS flip-flop doesn't have a clock, but it uses two inputs to control the state, which allows the inputs to be "self clocking": i.e. to be the inputs as well as the triggers for the state change. All flip flops need some combination of inputs which programs their state, and some combination of inputs which lets them maintain their state. If all combinations of inputs trigger programming, or if all combinations of inputs are ignored (state is maintained), that is not useful.
Now what is a clock? A clock is a special, dedicated input which distinguishes whether the other inputs are ignored or whether they program the device. It is useful to have this as a separate input, rather than for it to be encoded among multiple inputs.