Actually, the structure of your Verilog looks just fine. It's the details that are wrong, in a lot of places. Here are some:
reg [3:1] y2, y1, Y2, Y1
Yes, the missing semicolon throws the compiler off. More to the point, [3:1] tells it that each of these regs is three bits wide, but they're only one bit wide in your planning. Traditionally we use 0 for the least significant bit (so that the binary interpretation has weight 2^n at bit n); a two-bit range would thus be written [1:0]. As it stands, you have extended each reg to three bits wide.
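With the planned one-bit regs, you would simply drop the range:

    reg y2, y1, Y2, Y1;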
always @(w,y2,y1)
always @(negedge Resetn, posedge Clock)
In Verilog-1995 the sensitivity list is separated by or, not commas; the comma form is only legal from Verilog-2001 onward, so check which standard your tool is set to.
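With or, the first list would read:

    always @(w or y2 or y1)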
case (y2)
A: if (w) Y2 = 0;
Y1 = 1;
else Y2 = 0;
Y1 = 0;
y2 in this case statement only matches one bit in your planning (three in the code, but that made less sense). You can concatenate bits using {y2,y1}; in fact, extending the case to case ({y2,y1,w}) will let you use case matches like {A,1'b0}: and remove the if statements entirely.
Secondly, you are trying to manage groups of statements (both assignments to Y2 and Y1) with if; doing so requires enclosing them in begin and end. Alternatively, you could make a wider assignment such as {Y2,Y1} <= B;, which ends up more readable as it can use your named states.
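For instance, inside a case ({y2,y1}) statement, and assuming the encodings A = 2'b00 and B = 2'b01 (my assumption, not from your post), the two options look like:

    A: if (w) begin
           Y2 = 1'b0;
           Y1 = 1'b1;
       end else begin
           Y2 = 1'b0;
           Y1 = 1'b0;
       end

versus

    A: if (w) {Y2,Y1} = B;
       else   {Y2,Y1} = A;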
Thirdly, assignment using = can cause some confusion (it acts more like sequential programming languages, while <= doesn't modify the meaning of a reg within your always block). In this case it is fine, as the block is fully combinatorial and does not depend on its own outputs; the common convention is = in combinational blocks and <= in clocked ones.
Finally (for the case section), you can simply add more matches. You don't even need a default match, but it's probably convenient here to use default to go to state A.
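Putting those pieces together, the combinational block could look something like this sketch (the parameter encodings and the B/C transitions are my illustrative assumptions, not your exact planning):

    parameter A = 2'b00, B = 2'b01, C = 2'b10;

    always @(w or y2 or y1)
        case ({y2, y1, w})
            {A, 1'b0}: {Y2, Y1} = A;   // in A with w low: stay in A
            {A, 1'b1}: {Y2, Y1} = B;   // in A with w high: go to B
            {B, 1'b0}: {Y2, Y1} = A;
            {B, 1'b1}: {Y2, Y1} = C;
            default:   {Y2, Y1} = A;   // everything else falls back to A
        endcase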
always @(negedge Resetn, posedge Clock)
if (Resetn == 0) //something :/
else //something else :/
Something and something else would be register updates, such as {y2,y1} <= {Y2,Y1};. It is the clock edge sensitivity that turns the regs into flipflops.
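Filled in, that could read (taking state A as the reset state, which is my assumption):

    always @(negedge Resetn, posedge Clock)
        if (Resetn == 0)
            {y2, y1} <= A;           // asynchronous, active-low reset
        else
            {y2, y1} <= {Y2, Y1};    // load the next state on each clock edge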
Finally, since you should now understand what defines a reg's width, why don't you make two-bit-wide regs named state and next_state to replace {y2,y1} and {Y2,Y1} respectively?
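As a sketch of the whole machine in that shape (the ports, encodings, and transitions here are illustrative assumptions, not your exact design):

    module fsm (input Clock, input Resetn, input w, output z);
        parameter A = 2'b00, B = 2'b01, C = 2'b10;
        reg [1:0] state, next_state;

        // combinational next-state logic
        always @(*)
            case ({state, w})
                {A, 1'b0}: next_state = A;
                {A, 1'b1}: next_state = B;
                {B, 1'b0}: next_state = A;
                {B, 1'b1}: next_state = C;
                {C, 1'b0}: next_state = A;
                {C, 1'b1}: next_state = C;
                default:   next_state = A;
            endcase

        // state register with asynchronous active-low reset
        always @(negedge Resetn, posedge Clock)
            if (Resetn == 0) state <= A;
            else             state <= next_state;

        assign z = (state == C);   // example Moore-style output
    endmodule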
always @(*) begin
This means whenever any variable that appears on a right-hand side in the block changes, run the block.
equals = equals + 1;
This changes the variable equals.
So whenever equals changes, you increment equals. Which means equals changes, which means you increment it again, and so on.
So basically, equals just keeps incrementing as fast as the hardware can make it happen.
If you output this to a 7-segment display, you will just see the superposition of '0', '1', '2', and '3', flashing as quickly as the hardware can go. This will be much faster than your eye can follow. I don't know what it happens to look like, but if you say it looks like a '2', I believe you.
The usual way to do what you seem to want is to make equals increment only when something special happens, like the edge of a slow-ish clock (maybe 5 Hz at most, for the display to be meaningful to the human eye), or only when some special event happens that you're trying to count.
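A minimal sketch of the slow-clock version (slow_clk is assumed to be divided down elsewhere from the system clock):

    module slow_counter (
        input            slow_clk,   // assumed: a few Hz at most
        output reg [1:0] equals
    );
        // advance once per slow-clock edge instead of free-running
        always @(posedge slow_clk)
            equals <= equals + 1'b1;  // wraps 0, 1, 2, 3, 0, ...
    endmodule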
Edit
I should also add that, because of race conditions, if you implement this design in real hardware, it's likely that the output doesn't actually transition through all the states 0, 1, 2, 3, as expected, or that it doesn't do it in the order you expect.
For example if you're in the 2'b01 state, you expect to transition to 2'b10. But the signal to change bit 0 might propagate through more quickly than the signal to change bit 1, resulting in a glitch to the 2'b00 state. If that glitch lasts long enough, the circuit might go from there to the 2'b01 state again instead of to 2'b10. But that's just an example. What really happens depends on the transistor-level and wire-geometry level details of how the circuit is built.
Best Answer
The approach in the example is a perfectly reasonable way of designing state machines. It's also the approach I tend to stick with for all of my designs, including some pretty darn large state machines in big systems.
The thing to remember about this approach, though, is that everything happens with a one-cycle latency from the state. To explain what I mean, let's look at the example you gave:
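A minimal stand-in for that style (only the L, D, and state names come from the example; the S0/S1 encoding and the transitions are invented for illustration):

    module single_process_fsm (input Clock, output reg L, output reg D);
        localparam S0 = 1'b0, S1 = 1'b1;
        reg state = S0;

        // one clocked process updates the outputs and the state together
        always @(posedge Clock)
            case (state)
                S0: begin
                    L     <= 1'b0;   // these registered values only appear
                    D     <= 1'b0;   // on the next edge, i.e. once we are in S1
                    state <= S1;
                end
                S1: begin
                    L     <= 1'b1;
                    D     <= 1'b1;
                    state <= S0;
                end
            endcase
    endmodule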
When we are in state S0, the L, D, and state registers all get updated. However, because it is a clocked process, the values don't change immediately when we enter S0; instead, they change on the clock cycle afterward. This means that, taking this example, L and D will go to 0 when we enter state S1. Any logic that uses the state register should be aware of this when doing any calculations. It's also something to bear in mind when analysing simulation output.

Beyond that, there are practical upshots to this design.
All of the outputs are registered, which means anything using them down the line doesn't have to contend with a cloud of combinational logic. This is as opposed to state machines in which the outputs depend asynchronously on the state registers, which results in a large combinational cloud that can cause timing problems in high-speed designs. That is where the one-cycle latency in this approach comes from.
I find it much clearer to follow because you have all of the logic in one place, following a high-level, state-by-state layout. This is unlike designs that split the machine into two always blocks: a combinational one for the next-state logic and a clocked one for the state register.
TL;DR: you can use this design approach very successfully, as long as you remember, and can cope with, the one-cycle latency.