Let's say that we want to do a good job of testing this, but without going through the entire 2^32 space of possible operand pairs. (It is not possible for such an adder to have a bug that affects only a single combination of operands, which is the kind of fault that would require an exhaustive search of the 2^32 space, so it is inefficient to test it that way.)
If the individual adders work correctly, and the ripple propagation between them works correctly, then the whole 16-bit adder is correct.
I would give priority to test cases which stress the carry rippling, since the adders have already been individually tested.
My first test case would be adding 1 to 1111..1111 which causes a carry out of every bit. The result should be zero, with a carry out of the highest bit.
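As a minimal sketch (the `add16` function below is a hypothetical reference model standing in for the adder under test, not something from the discussion above):

```python
# Hypothetical reference model of a 16-bit adder: returns (sum, carry_out).
# In a real test bench this would be replaced by the device under test.
def add16(a, b):
    total = a + b
    return total & 0xFFFF, (total >> 16) & 1

# Adding 1 to 1111...1111 ripples a carry out of every bit position:
# the result is zero with a carry out of the highest bit.
assert add16(0xFFFF, 1) == (0x0000, 1)
assert add16(1, 0xFFFF) == (0x0000, 1)   # both commutations
```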
(Every test case should be tried over both commutations: A + B and B + A, by the way.)
The next set of test cases would be adding 1 to various "lone zero" patterns like 0111...111, 1011...111, 1101...111, ..., 1111...110. The zero should "eat" the carry propagation at that bit position, so that all bits in the result below that position are zero, while the bit at that position and all higher bits are 1 (and, of course, there is no final carry out of the register).
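A sketch of this set, again using a hypothetical `add16` reference model in place of the hardware under test:

```python
# Hypothetical reference model standing in for the device under test.
def add16(a, b):
    total = a + b
    return total & 0xFFFF, (total >> 16) & 1

# "Lone zero" patterns: all ones except a single 0 at bit position z.
for z in range(16):
    pattern = 0xFFFF ^ (1 << z)            # e.g. z = 0 -> 1111...1110
    expected = (0xFFFF << z) & 0xFFFF      # bits below z clear, bit z and up set
    assert add16(pattern, 1) == (expected, 0)
    assert add16(1, pattern) == (expected, 0)   # commuted, per the note above
```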
Another set of test cases would add these "lone 1" power-of-two bit patterns to various other patterns: 000...001, 000...010, 000...100, ..., 100...000. For instance, if such a pattern with its 1 at bit position k is added to the operand 1111...1111, then all bits from position k upward should clear, all the bits below it should be unaffected, and there should be a final carry out of the register.
Next, a useful test case might be to add all of the 16 powers of two (the "lone 1" vectors), as well as zero, to each of the 65536 possible values of the opposite operand (and of course, commute and repeat).
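This sweep is small enough to script directly. A sketch, with the usual caveat that `add16` is a stand-in reference model, not the actual device:

```python
# Hypothetical reference model standing in for the device under test.
def add16(a, b):
    total = a + b
    return total & 0xFFFF, (total >> 16) & 1

# Zero and each of the 16 "lone 1" vectors, against all 65536 values of
# the opposite operand, in both commutations (about 2.2 million cases).
for a in [0] + [1 << i for i in range(16)]:
    for b in range(1 << 16):
        expected = ((a + b) & 0xFFFF, (a + b) >> 16)
        assert add16(a, b) == expected
        assert add16(b, a) == expected
```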
Finally, I would repeat the above two "lone 1" tests with "lone 11" patterns: all bit patterns which have 11 embedded in 0's, in every possible position. This way we hit the situations in which an adder combines two 1 bits and a carry in, requiring it to produce a sum of 1 and a carry out of 1.
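The "lone 11" variant against the all-ones operand can be sketched the same way (`add16` again being a hypothetical reference model):

```python
# Hypothetical reference model standing in for the device under test.
def add16(a, b):
    total = a + b
    return total & 0xFFFF, (total >> 16) & 1

# "Lone 11" patterns: two adjacent 1 bits embedded in zeros, at each of
# the 15 possible positions. Added to 1111...1111, the upper adder of the
# pair must sum 1 + 1 + carry-in, producing sum 1 and carry out 1.
for p in (0b11 << i for i in range(15)):
    total = p + 0xFFFF
    expected = (total & 0xFFFF, total >> 16)
    assert add16(p, 0xFFFF) == expected
    assert add16(0xFFFF, p) == expected   # commuted
```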
Let's suppose you have a coefficient and a signal input value. If the coefficient has
\$F_C\$ fraction bits and the input has \$F_I\$ fraction bits, then their product will have \$F_C + F_I\$ fraction bits. When you used 000000001 to represent the integer 1, you had implicitly set \$F_I = 0\$, so the products had the same format as the coefficients. If you use fixed-point values that are \$\ge 1.0\$, then you will need bits to the left of the binary point to represent the integer part of the value. As with the fraction bits, the number of integer bits in the product will equal the sum of the numbers of integer bits in the multiplier and multiplicand.
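A quick numeric sketch of the product rule; the Q formats and values here are made-up illustrations, not taken from the answer:

```python
# Coefficient with 8 fraction bits, input with 4 fraction bits (assumed
# example formats). The product then carries 8 + 4 = 12 fraction bits.
FC, FI = 8, 4

c = round(0.75 * (1 << FC))        # 0.75 as a Q.8 integer  -> 192
x = round(2.5  * (1 << FI))        # 2.5  as a Q.4 integer  -> 40
p = c * x                          # raw integer product

# Interpreting p with FC + FI fraction bits recovers the real product.
assert p / (1 << (FC + FI)) == 0.75 * 2.5    # 7680 / 4096 == 1.875
```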
When you add fixed-point values they must have the same number of fraction bits (i.e. the binary point is aligned) and the sum will have the same number of fraction bits as the addends. If you don't have information about the actual range of values for the sum then you need to assume that a carry can occur, so you need an additional bit to the left of the binary point to represent the integer part of the number. That is, you need one more integer bit in the sum than the maximum number of integer bits in either of the addends.
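A sketch of the alignment step, with made-up formats and values: the operand with fewer fraction bits is shifted left so both share the larger fraction-bit count before adding.

```python
# Assumed example formats: a has 4 fraction bits, b has 8.
FA, FB = 4, 8
a = round(1.25  * (1 << FA))          # 1.25  as Q.4 -> 20
b = round(0.625 * (1 << FB))          # 0.625 as Q.8 -> 160

# Align binary points: shift a left by the difference, then add.
s = (a << (FB - FA)) + b              # sum carries FB = 8 fraction bits
assert s / (1 << FB) == 1.25 + 0.625  # 480 / 256 == 1.875
```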
If you want to find the larger of two unsigned numbers A and B, all you need to look at is the most significant bit position where A and B differ: the larger number has a '1' at that position and the smaller number has a '0' there. E.g. if
$$ \text{A} = 1100\\ \text{B} = 1011 $$
then A is bigger than B because, at the most significant bit position where the bits differ (bit index 2 in this case), A is 1 and B is 0.
So to solve this problem, we can start by defining a variable X\$_i\$, where X\$_i\$ is high if (i) the bits of A and B at the current bit index are not the same, (ii) the bits of A and B are the same at all indexes greater than the current index, and (iii) the bit of B at the current index, B\$_i\$, is high. Using this logic we can extract the values of X\$_i\$ = {X\$_3\$ X\$_2\$ X\$_1\$ X\$_0\$} as
$$ X_3 = B_3(A_3 \mathbin{\oplus} B_3) \\ X_2 = B_2(A_2 \mathbin{\oplus} B_2) \cdot \overline{(A_3 \mathbin{\oplus} B_3)} \\ X_1 = B_1(A_1 \mathbin{\oplus} B_1) \cdot \overline{(A_2 \mathbin{\oplus} B_2)} \cdot \overline{(A_3 \mathbin{\oplus} B_3)} \\ X_0 = B_0(A_0 \mathbin{\oplus} B_0) \cdot \overline{(A_1 \mathbin{\oplus} B_1)} \cdot \overline{(A_2 \mathbin{\oplus} B_2)} \cdot \overline{(A_3 \mathbin{\oplus} B_3)} $$
We can then get our final result X, which is high exactly when B is greater than A, as
$$ X = X_3 + X_2 + X_1 + X_0 $$
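This logic can be checked exhaustively for 4-bit operands. A sketch in Python (the helper names below are my own, not from the answer):

```python
# Return bit i of value v.
def bit(v, i):
    return (v >> i) & 1

# Compute X for 4-bit A, B: X_i = B_i AND (A_i XOR B_i) AND (all bits
# above index i equal), then X = OR of all X_i. X is 1 exactly when B > A.
def b_greater(A, B):
    X = 0
    for i in range(3, -1, -1):                        # MSB down to LSB
        diff = bit(A, i) ^ bit(B, i)
        higher_equal = int(all(bit(A, j) == bit(B, j) for j in range(i + 1, 4)))
        X |= bit(B, i) & diff & higher_equal
    return X

# Exhaustive check against Python's own comparison.
for A in range(16):
    for B in range(16):
        assert b_greater(A, B) == (1 if B > A else 0)
```

For the worked example above, A = 1100 and B = 1011 give X = 0, i.e. B is not greater than A.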