It really isn't worth worrying too much about this detail. Adding 1 to a number is such a common idiom in HDL that synthesis tools have highly evolved methods for dealing with it. Also, most modern FPGAs have dedicated, hard-wired fast carry logic that does not consume logic cells, and that synthesis tools know how to take advantage of.
Signed integer divide is almost always done by taking absolute values, dividing, and then correcting the signs of quotient and remainder, or at least was in earlier CPUs. They may have fancier tricks nowadays. But the fact that dividing by a positive number always truncates toward zero, rather than toward minus infinity, suggests that this is how it's done. In addition to checking for divide by zero, though, it's important to test for dividing the maximum negative number by -1, because that would produce one more than the maximum positive number.
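A rough Python sketch of that scheme (the function name and the 32-bit width are illustrative assumptions, not any particular CPU's implementation):

```python
def signed_div(a, b, bits=32):
    """Signed divide via absolute values, truncating toward zero (sketch)."""
    if b == 0:
        raise ZeroDivisionError("divide by zero")
    # The one overflow case: most negative number divided by -1.
    if a == -(1 << (bits - 1)) and b == -1:
        raise OverflowError("quotient would be one more than the maximum positive number")
    q = abs(a) // abs(b)      # unsigned divide of the magnitudes
    r = abs(a) - q * abs(b)
    if (a < 0) != (b < 0):
        q = -q                # quotient is negative if the operand signs differ
    if a < 0:
        r = -r                # remainder takes the sign of the dividend
    return q, r
```

Note that `signed_div(-7, 2)` gives `(-3, -1)`, truncating toward zero, whereas Python's own `//` operator would give -4 because it truncates toward minus infinity.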
Signed integer multiplies, however, are never done by taking absolute values, multiplying, and then negating if necessary. The difference between a signed integer and an unsigned integer is simply that the msb has a negative weight if it is signed. An unsigned byte has bit weights of 128, 64, 32, 16, 8, 4, 2, and 1. A signed byte has bit weights of -128, 64, 32, 16, 8, 4, 2, and 1. So it's easy to design hardware that takes that into account, using a subtraction instead of an addition when multiplying by the leftmost bit.
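The idea can be sketched in Python, treating the multiplicand as a raw 8-bit pattern and the other operand as an ordinary signed integer (the function name and 8-bit width are made up for illustration):

```python
def mul_negative_msb_weight(a_bits, b, bits=8):
    """Multiply by summing shifted partial products; the msb's partial
    product is subtracted because its weight is negative (sketch)."""
    total = 0
    for i in range(bits):
        if (a_bits >> i) & 1:
            if i == bits - 1:
                total -= b << i   # msb weight is -(2**(bits-1)) when signed
            else:
                total += b << i   # all other bits have positive weights
    return total
```

For example, the bit pattern 11111111 read as a signed byte is -1, so `mul_negative_msb_weight(0b11111111, 3)` gives -3.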
Another way of looking at it is that if a byte has a 1 in the msb, then signed value equals the unsigned value minus 256. This means that if you have an unsigned multiplier, you can do a signed multiply pretty easily. If one number has its sign bit set, you subtract the other number from the high half of the result; if the other number has its sign bit set, you subtract the first number from the high half of the result. And if you don't need the high half at all (if you know the numbers are small enough), then there is no difference between signed and unsigned multiply. (I used to do this a lot when I was programming the 6801 and 6809 decades ago.)
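A small Python sketch of that correction, with both operands given as raw 8-bit patterns (the name and the 8-bit width are just assumptions for illustration):

```python
def signed_mul_from_unsigned(a, b, bits=8):
    """8x8 -> 16 signed multiply built on an unsigned multiplier (sketch)."""
    mask = (1 << bits) - 1
    prod = (a & mask) * (b & mask)    # the underlying unsigned multiply
    hi, lo = prod >> bits, prod & mask
    if a >> (bits - 1):               # a's sign bit set: a is really a - 256,
        hi = (hi - b) & mask          # so subtract b from the high half
    if b >> (bits - 1):               # likewise for b
        hi = (hi - a) & mask
    return (hi << bits) | lo          # 16-bit two's-complement pattern
```

With both inputs 11111111 (i.e. -1), the unsigned product is 1111111000000001, and the two corrections turn the high half into zero, giving 0000000000000001 (+1) as required.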
BTW, standard floating point representations are always sign-magnitude, rather than two's complement, so they do arithmetic more the way humans do.
Best Answer
In modulo-2^n arithmetic, -1 and 2^n - 1 are equivalent. It follows that if the output is the same size as the inputs, then we can use a modulo-2^n multiplier for both signed and unsigned operations.
However, if the output is larger than the inputs, this property no longer holds. Consider, for example, multiplying the 8-bit number 11111111 (255 if interpreted as straight binary, -1 if interpreted as two's complement) by itself to produce a 16-bit result. For signed numbers the correct result is 0000000000000001 (1 in decimal). For unsigned numbers, however, the correct result is 1111111000000001 (65025 in decimal).
If you want to think of this in modular arithmetic terms you can note that -1 and 255 are the same modulo 256 but different modulo 65536.
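The whole point can be checked in a few lines of Python by masking the products to the desired width (8-bit inputs here, purely for illustration):

```python
MASK8 = (1 << 8) - 1
MASK16 = (1 << 16) - 1

# Low 8 bits of the product: signed and unsigned interpretations agree,
# because -1 and 255 are the same modulo 256.
assert (255 * 255) & MASK8 == ((-1) * (-1)) & MASK8 == 1

# Full 16-bit product: they disagree, because -1 and 255 differ modulo 65536.
assert (255 * 255) & MASK16 == 0b1111111000000001    # 65025
assert ((-1) * (-1)) & MASK16 == 0b0000000000000001  # 1
```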
This is why when you look at (for example) the ARM instruction set you see only one 32*32->32 multiply instruction, but two different 32*32->64 multiply instructions, one signed and one unsigned.
Division (in the sense we think of it on computers) is not a modular arithmetic operation, so there is no reason to expect an equivalence between signed and unsigned division, and indeed there isn't one.
Again, to give an example, consider 11111110 / 00000010. In unsigned arithmetic this results in 01111111 (127); in signed arithmetic it results in 11111111 (-1).
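The same example in Python, with the two interpretations of the shared bit patterns spelled out (the helper name is just for illustration):

```python
def to_signed(x, bits=8):
    """Reinterpret an unsigned bit pattern as a two's-complement value."""
    return x - (1 << bits) if x >> (bits - 1) else x

a, b = 0b11111110, 0b00000010   # 254 and 2 unsigned; -2 and 2 signed

assert a // b == 0b01111111                # unsigned: 254 / 2 = 127
assert to_signed(a) // to_signed(b) == -1  # signed:    -2 / 2 = -1
```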