You can work it out by going through the steps.
We know that if T is at 1 then Q will toggle on the clock edge; if it's at 0, Q will stay the same.
Say we start off with Q0, Q1 and Q2 at 0, and the multiplexer input is set to 0 (so basically ignore the Qbar paths)
At the first clock pulse, since T0 is at 1, Q0 will toggle from 0 to 1.
So we now have 1 0 0
At the second clock, since T1 is at 1, Q1 will toggle from 0 to 1. Q0 will toggle back to 0.
Now we have 0 1 0
At the third clock, Q1 will remain at 1 since T1 is at 0. Q0 will toggle back to 1.
Now we have 1 1 0
At the fourth clock, since the AND gate now has both inputs (Q0 and Q1) at 1, T2 will be at 1, so Q2 will toggle from 0 to 1. T1 is at 1, so Q1 will toggle back to 0, as will Q0.
Now we have 0 0 1
At the fifth clock, T2 = 0 so Q2 will remain at 1, T1 = 0 so Q1 stays at 0, and Q0 always toggles so it will change to 1.
Now we have 1 0 1
At the sixth clock, T2 = 0 so Q2 stays at 1, T1 = 1 so Q1 toggles to 1 and Q0 toggles to 0.
Now we have 0 1 1
At the seventh clock, T2 = 0 so Q2 stays at 1, T1 = 0 so Q1 stays at 1, and Q0 toggles to 1.
Now we have 1 1 1
At the eighth clock, both inputs to the AND gate are high so T2 = 1, so Q2 toggles to 0. T1 is at 1 so Q1 toggles to 0, and Q0 toggles to 0.
Now we have 0 0 0 which is where we started.
If you change the multiplexer input to 1 to select the Qbar path, you can see how it would count down (just complement all the results above, e.g. 0 0 0 becomes 1 1 1, 1 0 0 becomes 0 1 1, and so on).
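The walkthrough above can be sketched in a few lines of Python (my own illustration, not part of the original answer). T0 is tied to 1, T1 = Q0, and T2 = Q0 AND Q1 (the AND gate); each clock edge toggles a bit exactly when its T input is 1. Complementing every bit of each state gives the down-count sequence from the Qbar path.

```python
def step(q0, q1, q2):
    """Advance the 3-bit synchronous counter by one clock edge (count-up wiring)."""
    t0, t1, t2 = 1, q0, q0 & q1       # T inputs sampled before the edge
    return q0 ^ t0, q1 ^ t1, q2 ^ t2  # T=1 toggles the bit, T=0 holds it

state = (0, 0, 0)
sequence = []
for _ in range(8):
    state = step(*state)
    sequence.append(state)
print(sequence)
# Printed as (Q0, Q1, Q2), matching the walkthrough:
# (1,0,0) (0,1,0) (1,1,0) (0,0,1) (1,0,1) (0,1,1) (1,1,1) (0,0,0)
```

After eight edges the counter is back at 0 0 0, exactly as in the step-by-step trace.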
One reason we clock flip-flops is so that there isn't any chaos when the outputs of flip-flops are fed through some logic functions and back to their own inputs.
If a flip-flop's output is used to calculate its input, it behooves us to have orderly behavior: to prevent the flip-flop's state from changing until the output (and hence the input) is stable.
This clocking allows us to build computers, which are state machines: they have a current state, and calculate their next state based on the current state and some inputs.
For example, suppose we want to build a machine which "computes" an incrementing 4 bit count from 0000 to 1111, and then wraps around to 0000 and keeps going. We can do this by using a 4 bit register (which is a bank of four D flip-flops). The output of the register is put through a combinatorial logic function which adds 1 (a four bit adder) to produce the incremented value. This value is then simply fed back to the register. Now, whenever the clock edge arrives, the register will accept the new value which is one plus its previous value. We have an orderly, predictable behavior which steps through the binary numbers without any glitch.
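The register-plus-adder counter can be modeled in a short sketch (names are mine, not from the answer): the combinatorial +1 logic computes the next state from the register's output, and the register only accepts that value on the clock edge, so the count steps through the binary numbers and wraps.

```python
def add_one(value):
    """Combinatorial logic: a 4-bit adder hard-wired to add 1."""
    return (value + 1) & 0b1111  # 4-bit result, wraps past 1111

register = 0  # the bank of four D flip-flops, initially 0000
for _ in range(17):
    print(f"{register:04b}")       # current state on the register's output
    register = add_one(register)   # clock edge: register loads next state
```

The feedback loop is glitch-free because the register ignores the adder's output, however it wobbles while settling, until the next clock edge.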
Clocking behaviors are useful in other situations too. Sometimes a circuit has many inputs, which do not stabilize at the same time. If the output is instantaneously produced from the inputs, then it will be chaotic until the inputs stabilize. If we do not want the other circuits which depend on the output to see the chaos, we make the circuit clocked. We allow a generous amount of time for the inputs to settle and then we indicate to the circuit to accept the values.
Clocking is also inherently part of the semantics of some kinds of flip flops.
A D flip-flop cannot be defined without a clock input. Without a clock input, it will either ignore its D input (useless!), or simply copy the input at all times (not a flip-flop!) An RS flip-flop doesn't have a clock, but it uses two inputs to control the state, which allows the inputs to be "self clocking": i.e. to be the inputs, as well as the triggers for the state change. All flip-flops need some combination of inputs which programs their state, and some combination which lets them maintain their state. If all combinations of inputs trigger programming, or if all combinations of inputs are ignored (state is maintained), that is not useful. Now what is a clock? A clock is a special, dedicated input which distinguishes whether the other inputs are ignored, or whether they program the device. It is useful to have this as a separate input, rather than for it to be encoded among multiple inputs.
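That role of the clock can be illustrated with a small behavioral model (my own sketch): the D input is captured only on a rising clock edge, and is ignored the rest of the time.

```python
class DFlipFlop:
    """Behavioral model of a positive-edge-triggered D flip-flop."""

    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, d, clk):
        # Capture D only on a 0 -> 1 clock transition; otherwise hold state.
        if clk == 1 and self._prev_clk == 0:
            self.q = d
        self._prev_clk = clk
        return self.q

ff = DFlipFlop()
ff.tick(d=1, clk=0)   # D changes but there is no edge: Q stays 0
ff.tick(d=1, clk=1)   # rising edge: Q becomes 1
ff.tick(d=0, clk=1)   # clock held high, D is ignored: Q stays 1
```

Remove the clock test from `tick` and the model degenerates into exactly the two useless devices described above: either it never looks at D, or it copies D continuously.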
Metastability cannot be 'cured', but if you wait long enough, the likelihood of it occurring can be made arbitrarily small. Once you've got it down to once in the age of the universe, it's unlikely to cause you trouble.
It's like balancing a pencil on its point. It's likely to fall over, and the longer you wait, the less likely it is to remain standing.
There are two problems with waiting a long time, and one of them is fundamental.
The fundamental problem is that if you have a single memory element (latch or flip-flop, they both suffer from metastability) in a clocked system receiving the output from an asynchronous external system, then you physically cannot define a lower limit to the waiting time: sometimes the external signal will make a transition near the latching control edge. You have to pipeline the signal to another flip-flop to let it wait there. This gives you a guaranteed one clock cycle minimum wait time.
The second problem is that often you're trying to run a system as fast as possible, and the system clock rate cannot be slowed down to give enough time in the second flip-flop. The only way to increase the signal latency to what's necessary, without decreasing the throughput, is to pipeline the waiting to more stages.
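A common first-order model makes the trade-off concrete (the numbers below are purely illustrative, not from the answer): the mean time between metastable failures grows exponentially with the settling time, so each extra pipeline stage, worth roughly one clock period of settling, multiplies the MTBF by a factor of e^(T/tau).

```python
import math

# Hypothetical process parameters, for illustration only
tau = 0.5e-9      # metastability resolution time constant (s)
t_w = 0.1e-9      # metastability capture window (s)
f_clk = 100e6     # system clock frequency (Hz)
f_data = 1e6      # asynchronous input event rate (Hz)

def mtbf(t_resolve):
    """First-order MTBF model: exp(t/tau) / (t_w * f_clk * f_data)."""
    return math.exp(t_resolve / tau) / (t_w * f_clk * f_data)

for stages in (1, 2, 3):
    t = stages / f_clk   # roughly one clock period of settling per stage
    print(f"{stages} stage(s): MTBF ~ {mtbf(t):.3g} s")
```

The exponential is why a second (or third) synchronizer stage buys so much: the failure rate drops by orders of magnitude per stage, without slowing the clock, at the cost of one extra cycle of latency each.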
Some people have trouble visualising what's happening between the flip-flops. There are two ways to induce metastability, and both involve violating the flip-flop rules. One way is to violate the input setup and hold times, by making a transition when the flip-flop expects the input to be stable. The other is to violate the input logic levels, by letting the flip-flop data input sit at an intermediate voltage level. A flip-flop that is metastable can produce either type of violation on its output, which can cascade on to the next flip-flop.