Defining Type Aliases in C++ – Benefits and Use Cases

c++ portability

I started learning C++ some time ago. It is a big subject and I am not very used to it yet, so this question is asked more out of curiosity than anything else.
The book I am currently reading has a discussion about portability. I agree that the discussion comes quite early for someone at the beginning of their learning, but while reading it I started to think about something.

What I wonder is: is there ever a reason to define a custom type to use instead of the fundamental built-in types? For example, would it be useful to define a type int_c (as in "int custom")? The reason I think about this is that some code may be compiled, for example, by both a 32-bit compiler and a 64-bit compiler. It seems it would be easier to change platforms if all the integers were declared as int_c instead of int, since the latter would require changing the type in every place an int is used.

However, I also know that <cstdint> contains definitions of different integer types such as int64_t. I have not heard of a similar header for floating-point types, though.

So to conclude: is there a reason to define custom integer or floating-point types like int_c or double_c, or otherwise, what is the best approach?

Best Answer

There is sometimes a reason to do so; however, <stdint.h> (in C99) or <cstdint> (in C++11 or better) makes it less often necessary.

First, there is a readability issue. If you define

typedef unsigned myhash_t;

and later always use myhash_t for numbers which are in fact hashes, you ease the understanding of your code. This does not help much against your own mistakes (e.g. forgetting to declare a parameter as myhash_t even though it is a hash), but it does enhance readability.
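As a minimal sketch of that readability benefit (the hash function and names here are just for illustration), the alias makes the contract visible in every signature that uses it:

```cpp
#include <cstddef>
#include <string>

typedef unsigned myhash_t;  // values of this type are hashes, not counts

// The alias documents what the return value means:
myhash_t hash_string(const std::string& s) {
    myhash_t h = 5381;  // djb2-style hash, purely for illustration
    for (char c : s)
        h = h * 33 + static_cast<unsigned char>(c);
    return h;
}

// ...and what this parameter is expected to be:
std::size_t bucket_index(myhash_t h, std::size_t nbuckets) {
    return h % nbuckets;
}
```

Nothing stops you from passing an arbitrary unsigned where a myhash_t is expected, which is exactly the limitation mentioned above.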

Then, with <stdint.h> (etc.), you have many new integral types (most of them being system-specific synonyms of e.g. int and long), like int64_t (a signed integer of exactly 64 bits), uintptr_t (an unsigned integer the same size as a pointer), int_fast8_t (an integral type that is fast to compute with, of at least 8 bits, but it could be 16 bits if those run faster), etc.
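The guarantees these types carry can be checked at compile time with static_assert (C++11); a small sketch:

```cpp
#include <cstdint>
#include <climits>

// Exact-width type: exactly 64 bits, signed.
static_assert(sizeof(std::int64_t) * CHAR_BIT == 64,
              "int64_t is exactly 64 bits wide");

// Pointer-sized unsigned integer: big enough to hold a pointer value.
static_assert(sizeof(std::uintptr_t) >= sizeof(void*),
              "uintptr_t can hold a pointer");

// "Fast" type: at least 8 bits, possibly wider if wider is faster.
static_assert(sizeof(std::int_fast8_t) * CHAR_BIT >= 8,
              "int_fast8_t has at least 8 bits");
```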

At last, you may combine preprocessor tricks, e.g. #if, with such typedef-s. For instance, you could define a color encoding as having 3 (RGB) or 4 (RGBA) components of

 typedef uint8_t color_t;

but some preprocessor trick might turn that into a uint16_t.
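Such a trick could look like this (DEEP_COLOR is a hypothetical build flag, e.g. passed as -DDEEP_COLOR):

```cpp
#include <cstdint>

// Hypothetical build flag: define DEEP_COLOR for 16 bits per channel.
#ifdef DEEP_COLOR
typedef std::uint16_t color_t;  // 16 bits per RGB(A) component
#else
typedef std::uint8_t  color_t;  // the usual 8 bits per component
#endif

struct rgba {
    color_t r, g, b, a;  // all code using color_t adapts automatically
};
```

Every piece of code written against color_t then switches width with a single compiler flag, which is exactly the portability benefit the question asks about.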

FWIW, OCaml and Ada do better than C++ in this respect. They are able to define private integral types (see e.g. §7.9.2 in the OCaml reference), on which arithmetic is forbidden (unless explicitly allowed). In C++ you could write class MyHiddenInt { int x; /*etc*/ };, but it may be represented and handled differently (i.e. produce less efficient machine code) than a plain int. The details are of course ABI-specific; on Linux/x86-64 with -O2 optimization, the code generated by GCC 4.9 (e.g. for an addition) would be the same.

These private integers sometimes make sense. For example, on Unix and POSIX, file descriptors are ints, but doing any arithmetic on them is nonsense. (Likewise for myhash_t, probably, but you cannot express that constraint with a plain typedef in C++.)
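In C++ you can approximate such a private integer with a small wrapper class that simply does not provide arithmetic; a minimal sketch (the class name is hypothetical):

```cpp
// A "strong int" for file descriptors: fd + 1 fails to compile
// because no operator+ is provided, while comparison still works.
class FileDescriptor {
    int fd_;
public:
    explicit FileDescriptor(int fd) : fd_(fd) {}
    int value() const { return fd_; }  // escape hatch for system calls
    bool operator==(const FileDescriptor& other) const {
        return fd_ == other.fd_;
    }
};
```

The explicit constructor also prevents an arbitrary int from silently becoming a descriptor, which a typedef cannot do.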

For floating-point numbers it is even more important. On some processors (e.g. some GPGPUs), double-precision arithmetic is so slow that you want to avoid it. On our laptop and desktop PCs, it is the opposite: you almost always want to use double (which today means IEEE 754 64-bit floating point), and use float (or smaller types) only to squeeze memory consumption. These choices are really processor-specific (not the same on a tablet or a supercomputer).
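This is a case where a custom alias like the question's double_c genuinely pays off; a sketch, with USE_SINGLE_PRECISION as a hypothetical build flag:

```cpp
#include <cmath>

// Hypothetical compile-time switch: on a target where double is slow
// (e.g. some GPGPUs), build with -DUSE_SINGLE_PRECISION.
#ifdef USE_SINGLE_PRECISION
typedef float  real_t;
#else
typedef double real_t;  // the usual choice on desktop CPUs
#endif

// All numeric code is written once, against real_t:
real_t norm(real_t x, real_t y) {
    return std::sqrt(x * x + y * y);
}
```

Switching the whole code base between single and double precision then requires no source changes at all.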

You could go even further and have a type analysis which takes physical dimensions into account. Then you forbid (at compile time) adding kilograms to amperes or watts, and ensure that the product of a speed (in metres per second) and a time (in seconds) is a length (in metres), which should not be acceptable as a file descriptor or a hash.
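A tiny sketch of how such dimensional analysis can be encoded in C++ templates (names are illustrative; libraries like Boost.Units do this properly): the exponents of metre, kilogram and second become template parameters, so adding a speed to a mass does not compile, while speed * time yields a length.

```cpp
// Quantity<M, Kg, S>: a double tagged with dimension exponents.
template <int M, int Kg, int S>
struct Quantity {
    double value;
};

// Addition is only defined for identical dimensions.
template <int M, int Kg, int S>
Quantity<M, Kg, S> operator+(Quantity<M, Kg, S> a, Quantity<M, Kg, S> b) {
    return {a.value + b.value};
}

// Multiplication adds the exponents: (m/s) * s = m.
template <int M1, int Kg1, int S1, int M2, int Kg2, int S2>
Quantity<M1 + M2, Kg1 + Kg2, S1 + S2>
operator*(Quantity<M1, Kg1, S1> a, Quantity<M2, Kg2, S2> b) {
    return {a.value * b.value};
}

using Length = Quantity<1, 0, 0>;   // metres
using Time   = Quantity<0, 0, 1>;   // seconds
using Speed  = Quantity<1, 0, -1>;  // metres per second
```

With these definitions, Speed{3.0} * Time{2.0} is a Length, while Speed{3.0} + Length{2.0} is rejected by the compiler.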

Read about abstract data types.
