The easiest way to think about cross-platform programming is that you need a strategy for dealing with changes in the behaviour of the program's environment, across space (i.e. different hardware architectures and operating systems) and time (i.e. different versions of an operating system with new features and different bugs). There already is a pattern for implementing strategies.
You can imagine defining a polymorphic type with member functions that define the "logic" of what you're trying to do. These member functions call abstract member functions that are responsible for entertaining the underlying platform's needs. Clients call a factory method to get an instance of your class: the factory method chooses what concrete implementation to return based on the details of the platform.
This approach results in a public interface with no templates and (with the information I know about the problem) no explicit pointer use. Clients just call a factory method and then use the public interface on your abstract base class. It also completely hides all of the complexity you want to introduce regarding headers and their locations: your clients just include the headers for your abstract types.
Almost every word you might think of adding as a keyword to a language has almost certainly been used as a variable name or some other part of working code. This code would be broken if you made that word a keyword.
The incredibly lucky thing about auto is that it already was a keyword, so people didn't have variables with that name. But nobody used it, because it was the default. Why type:
auto int i=0;
when
int i=0;
meant exactly the same thing?
I suppose somewhere on the planet there was some small amount of code that used 'auto' the old way. But it could be fixed by removing the 'auto' and it would be working again. So it was a pretty obvious choice to repurpose the keyword.
I also happen to think it's a clearer meaning. If you've worked with variants and such, when you see var you may think that the declaration is somehow less strongly typed than if you had typed out the full type yourself. To me, auto makes it clearer that you're asking the compiler to automatically deduce the type, which is just as strong as if you had specified it yourself. So it really was a very lucky break that made a good name available to the committee.
To clarify the (small) breaking change:
If you had
auto int i=0;
and tried to compile it with a C++11 compiler, you would get an error such as
error C3530: 'auto' cannot be combined with any other type-specifier
This is trivial, you just remove either the auto or the int and recompile.
There is a bigger problem though. If you had
auto i = 4.3;
C and really old C++ would make i an int (as it would if you left off the auto; the default declaration type was int). If you have gone a really long time without compiling this code, or have been using old compilers, you could have some of this code, at least in theory. C++11 will make i a double, since that's the type of the literal 4.3 (floating-point literals are double by default, not float). This might introduce subtle bugs throughout your app, without any warnings or errors from the compiler. People in this boat should search globally for auto to make sure they weren't using it the old way before they move to a C++11 compiler. Luckily, such code is extremely rare.
Best Answer
There is sometimes a reason to do so; however, <stdint.h> (in C99) or <cstdint> (in C++03 or better) makes such typedefs less obvious. First, there is a readability issue. If you define a type like myhash_t and later always use it for numbers which are in fact hashes, you ease the understanding of your code. This does not help much against your own mistakes (e.g. forgetting to declare a parameter as myhash_t even if it is a hash), but it does enhance readability.
Then, with <stdint.h> (etc.) you have many new integral types (most of them being system-specific synonyms of e.g. int and long), like int64_t (a signed integer of exactly 64 bits), uintptr_t (an unsigned integer with the same size as pointers), or int_fast8_t (an integral type of at least 8 bits which is fast to compute with; it could be 16 bits if that runs faster, etc.). At last, you may place preprocessor tricks, e.g. #if combined with such typedef-s. For instance, you could define a color encoding as having 3 RGB (or 4, RGBA) components of, say, uint8_t, but some preprocessor trick might make that a uint16_t. FWIW, OCaml and Ada are doing better than C++ in that respect: they are able to define private integral types (see e.g. §7.9.2 in the OCaml reference), on which arithmetic is forbidden (unless explicitly allowed). In C++ you could have
class MyHiddenInt { int x; /*etc*/ };
but it may be differently represented and handled (i.e. produce less efficient machine code) than a plain int. Details are of course ABI-specific; on Linux/x86-64 with -O2 optimization, the code GCC 4.9 generates (e.g. for an addition) would be the same. These private integers sometimes make sense. For example, on Unix and POSIX, file descriptors are int-s, but doing any arithmetic on them is nonsense (likewise for myhash_t, probably, but you cannot express that constraint in C++). For floating-point numbers it is even more important. On some processors (e.g. some GPGPUs), double precision arithmetic is so slow that you want to avoid it. On our laptop and desktop PCs it is the opposite: you almost always want to use double (which today means IEEE 754 64-bit floating point), and use float (or short) only to squeeze memory consumption. These choices are really processor-specific (not the same on a tablet or a supercomputer). You could go even further and have a type analysis which takes physical dimensions into account, so that you forbid (at compile time) adding kilograms to amperes or watts, and so that the product of a speed (in meters per second) with a time (in seconds) is a length (in meters), which should not be acceptable as a file descriptor or a hash.
Read about abstract data types.