C Programming – Have Variable Width Types Been Replaced by Fixed Types?

c, programming-practices

I came across an interesting point today in a review over on Code Review. @Veedrac recommended in this answer that variable-width types (e.g. int and long) be replaced with fixed-width types like uint64_t and uint32_t. Quoting from the comments of that answer:

The sizes of int and long (and thus the values they can hold) are
platform-dependent. On the other hand, int32_t is always 32 bits long.
Using int just means that your code works differently on different
platforms, which is generally not what you want.
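
To see the difference concretely, here is a minimal sketch (my illustration, not part of the quoted answer) that prints the storage each type gets; the first two lines vary between platforms, the third does not:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
  /* sizeof reports bytes; int and long vary by platform */
  printf("int:     %zu bytes\n", sizeof(int));
  printf("long:    %zu bytes\n", sizeof(long));
  /* int32_t is exactly 32 bits, i.e. 4 bytes wherever CHAR_BIT is 8 */
  printf("int32_t: %zu bytes\n", sizeof(int32_t));
  return 0;
}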

The reasoning behind the standard not fixing the sizes of the common types is partially explained here by @supercat. C was written to be portable across architectures, in contrast to assembly, which was usually used for systems programming at the time.

I think the design intention was originally that each type other than
int be the smallest thing that could handle numbers of various sizes,
and that int be the most practical "general-purpose" size that could
handle +/-32767.

As for me, I've always used int and not really worried about the alternatives. I've always thought of it as the type with the best performance, end of story. The only place I thought fixed-width types would be useful is when encoding data for storage or for transfer over a network. I've rarely seen fixed-width types in code written by others either.
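
For the encoding case, what I had in mind looks roughly like this sketch (the function names are my own): fixed-width types plus an explicit byte order make the on-wire layout identical on every host:

#include <stdint.h>

/* Write v in big-endian (network) byte order; the shifts make the
   layout independent of host endianness. */
void put_u32_be(uint8_t out[4], uint32_t v)
{
  out[0] = (uint8_t)(v >> 24);
  out[1] = (uint8_t)(v >> 16);
  out[2] = (uint8_t)(v >> 8);
  out[3] = (uint8_t)v;
}

uint32_t get_u32_be(const uint8_t in[4])
{
  /* Cast before shifting so the arithmetic happens in uint32_t rather
     than in (possibly 32-bit signed) int after integer promotion. */
  return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16)
       | ((uint32_t)in[2] << 8)  | (uint32_t)in[3];
}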

Am I stuck in the 70s or is there actually a rationale for using int in the era of C99 and beyond?

Best Answer

There is a common and dangerous myth that types like uint32_t save programmers from having to worry about the size of int. While it would be helpful if the Standards Committee were to define a means of declaring integers with machine-independent semantics, unsigned types like uint32_t have semantics which are too loose to allow code to be written in a fashion that is both clean and portable; further, signed types like int32_t have semantics which are, for many applications, defined needlessly tightly and thus preclude what would otherwise be useful optimizations.

Consider, for example:

#include <stdint.h>

uint32_t upow(uint32_t n, uint32_t exponent)
{
  /* Repeatedly squares n, so the result is n raised to the power
     2^exponent; each step multiplies two uint32_t values. */
  while(exponent--)
    n*=n;
  return n;
}

int32_t spow(int32_t n, uint32_t exponent)
{
  /* The same repeated squaring, but in a signed 32-bit type. */
  while(exponent--)
    n*=n;
  return n;
}

On machines where int either cannot hold 4294967295, or can hold 18446744065119617025, the first function will be defined for all values of n and exponent, and its behavior will not be affected by the size of int; further, the Standard will not require that it yield different behavior on machines with any size of int. Some values of n and exponent, however, will cause it to invoke Undefined Behavior on machines where 4294967295 is representable as an int but 18446744065119617025 is not, because the uint32_t operands are then promoted to signed int before the multiplication.
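
If what one actually wants is multiplication that wraps modulo 2^32 on every conforming implementation, a common defensive idiom (my sketch, not part of the original code) is to force the arithmetic into a type no narrower than unsigned int before multiplying:

#include <stdint.h>

uint32_t upow_wrapping(uint32_t n, uint32_t exponent)
{
  while(exponent--)
    /* 1u * n makes the multiplication unsigned even on implementations
       where int is wider than 32 bits and a bare n*n would be performed
       in signed int; the cast then reduces the product modulo 2^32. */
    n = (uint32_t)(1u * n * n);
  return n;
}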

The second function will yield Undefined Behavior for some values of n and exponent on machines where int cannot hold 4611686014132420609, but will yield defined behavior for all values of n and exponent on all machines where it can (the specifications for int32_t imply two's-complement wrapping behavior on machines where it is smaller than int).
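
On machines where int is wider than 32 bits but too small for that product, one hedged workaround (my sketch; the answer itself offers none) is to do the squaring in uint32_t, where overflow wraps by definition, and convert back at the end:

#include <stdint.h>

int32_t spow_wrapping(int32_t n, uint32_t exponent)
{
  uint32_t u = (uint32_t)n;   /* conversion to unsigned is fully defined */
  while(exponent--)
    u = (uint32_t)(1u * u * u);
  /* Converting a value above INT32_MAX back to int32_t is
     implementation-defined in C99/C11, but since int32_t must be two's
     complement, implementations in practice yield the wrapped value. */
  return (int32_t)u;
}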

Historically, even though the Standard said nothing about what compilers should do on int overflow in upow, compilers consistently behaved as though int were large enough not to overflow. Unfortunately, some newer compilers may seek to "optimize" programs by eliminating behaviors not mandated by the Standard.
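
As a concrete illustration of that kind of "optimization" (a standard textbook example, not drawn from the answer): because signed overflow is Undefined Behavior, a compiler is entitled to assume it never happens and may fold an overflow check away entirely:

#include <limits.h>
#include <stdio.h>

int increment_exceeds(int x)
{
  /* Assuming no signed overflow, x + 1 > x is always true, so an
     optimizer may compile this function to simply `return 1;`. */
  return x + 1 > x;
}

int main(void)
{
  /* May print 1 under optimization, even though INT_MAX + 1 "wraps"
     to INT_MIN on typical two's-complement hardware. */
  printf("%d\n", increment_exceeds(INT_MAX));
  return 0;
}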
