The C++ standard does not specify the size of integral types in bytes, but it specifies minimum ranges they must be able to hold. You can infer the minimum size in bits from the required range, and the minimum size in bytes from that and the value of the CHAR_BIT macro that defines the number of bits in a byte. On all but the most obscure platforms it's 8, and it can't be less than 8.
One additional constraint for char is that its size is always 1 byte, or CHAR_BIT bits (hence the name). This is stated explicitly in the standard.
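Both guarantees are easy to verify at compile time; here is a minimal C++11 sketch using <climits>:
#include <climits>

static_assert(CHAR_BIT >= 8, "a byte has at least 8 bits");
static_assert(sizeof(char) == 1, "sizeof(char) is 1 by definition");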
The C standard is a normative reference for the C++ standard, so even though it doesn't state these requirements explicitly, C++ requires the minimum ranges required by the C standard (page 22), which are the same as those from Data Type Ranges on MSDN:
- signed char: -127 to 127 (note, not -128 to 127; this accommodates 1's-complement and sign-and-magnitude platforms)
- unsigned char: 0 to 255
- "plain" char: same range as signed char or unsigned char, implementation-defined
- signed short: -32767 to 32767
- unsigned short: 0 to 65535
- signed int: -32767 to 32767
- unsigned int: 0 to 65535
- signed long: -2147483647 to 2147483647
- unsigned long: 0 to 4294967295
- signed long long: -9223372036854775807 to 9223372036854775807
- unsigned long long: 0 to 18446744073709551615
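You can spot-check these minimums against your implementation's <climits> macros; a minimal C++11 sketch (these assertions hold on every conforming implementation):
#include <climits>

static_assert(SCHAR_MIN <= -127 && SCHAR_MAX >= 127, "signed char holds -127..127");
static_assert(UCHAR_MAX >= 255, "unsigned char holds 0..255");
static_assert(INT_MIN <= -32767 && INT_MAX >= 32767, "int holds -32767..32767");
static_assert(LONG_MIN <= -2147483647L && LONG_MAX >= 2147483647L, "long holds -2147483647..2147483647");
static_assert(ULLONG_MAX >= 18446744073709551615ULL, "unsigned long long holds 0..2^64-1");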
A C++ (or C) implementation can define the size of a type in bytes, sizeof(type), to any value, as long as
- the expression sizeof(type) * CHAR_BIT evaluates to a number of bits high enough to contain the required range, and
- the ordering of the types is still valid (e.g. sizeof(int) <= sizeof(long)).
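The ordering constraint can be written as a compile-time check as well; a sketch of the full guaranteed chain:
static_assert(sizeof(char) <= sizeof(short)
           && sizeof(short) <= sizeof(int)
           && sizeof(int) <= sizeof(long)
           && sizeof(long) <= sizeof(long long),
              "integral types are ordered by size");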
Putting this all together, we are guaranteed that:
- char, signed char, and unsigned char are at least 8 bits
- signed short, unsigned short, signed int, and unsigned int are at least 16 bits
- signed long and unsigned long are at least 32 bits
- signed long long and unsigned long long are at least 64 bits
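Or, restated in code using the sizeof(type) * CHAR_BIT expression from above (a sketch; every conforming compiler accepts these):
#include <climits>

static_assert(sizeof(char) * CHAR_BIT >= 8, "char: at least 8 bits");
static_assert(sizeof(short) * CHAR_BIT >= 16, "short: at least 16 bits");
static_assert(sizeof(int) * CHAR_BIT >= 16, "int: at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32, "long: at least 32 bits");
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long: at least 64 bits");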
No guarantee is made about the size of float or double except that double provides at least as much precision as float.
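That one floating-point guarantee can also be expressed via std::numeric_limits, whose digits member gives the number of mantissa digits; a minimal sketch:
#include <limits>

// double must be at least as precise as float:
static_assert(std::numeric_limits<double>::digits >= std::numeric_limits<float>::digits,
              "double provides at least as much precision as float");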
The actual implementation-specific ranges can be found in the <limits.h> header in C, or <climits> in C++ (or even better, in the templated std::numeric_limits in the <limits> header).
For example, this is how you find the minimum and maximum values for int:
C:
#include <limits.h>
const int min_int = INT_MIN;
const int max_int = INT_MAX;
C++:
#include <limits>
const int min_int = std::numeric_limits<int>::min();
const int max_int = std::numeric_limits<int>::max();
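And a quick usage sketch that prints the actual range on whatever platform you compile it for (assuming a hosted implementation with iostream available):
#include <iostream>
#include <limits>

int main() {
    // Implementation-specific on purpose: these values vary by platform.
    std::cout << "int: " << std::numeric_limits<int>::min()
              << " to " << std::numeric_limits<int>::max() << '\n';
}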
In C++11 there are some nice new conversion functions from std::string to a number type. So instead of atoi( str.c_str() ) you can use std::stoi( str ), where str is your number as a std::string.
There are versions for all flavours of numbers: long stol(string), float stof(string), double stod(string), ... See http://en.cppreference.com/w/cpp/string/basic_string/stol
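A minimal sketch of these in action, including the exceptions they throw on bad input (std::invalid_argument when no conversion is possible, std::out_of_range when the value doesn't fit):
#include <iostream>
#include <stdexcept>
#include <string>

int main() {
    std::string str = "42";
    int n = std::stoi(str);        // 42
    double d = std::stod("3.14");  // 3.14
    std::cout << n << ' ' << d << '\n';

    try {
        std::stoi("not a number");  // no digits at all
    } catch (const std::invalid_argument& e) {
        std::cout << "invalid_argument: " << e.what() << '\n';
    }
}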
Best Answer
In addition to what visitor said:

The function void emplace_back(Type&& _Val) provided by MSVC10 is non-conforming and redundant, because as you noted it is strictly equivalent to push_back(Type&& _Val).

But the real C++0x form of emplace_back is really useful: void emplace_back(Args&&...);

Instead of taking a value_type, it takes a variadic list of arguments, which means you can now perfectly forward the arguments and construct an object directly into the container, without any temporary at all.

That's useful because, no matter how much cleverness RVO and move semantics bring to the table, there are still complicated cases where a push_back is likely to make unnecessary copies (or moves). For example, with the traditional insert() function of a std::map, you have to create a temporary, which will then be copied into a std::pair<Key, Value>, which will then be copied into the map:
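A minimal sketch of that copy chain, contrasted with what the conforming C++11 emplace form allows (the key and value types here are illustrative only):
#include <map>
#include <string>

int main() {
    std::map<int, std::string> m;

    // insert(): a temporary pair is constructed first, then
    // copied (or moved) into the node owned by the map.
    m.insert(std::pair<const int, std::string>(4, "four"));

    // emplace(): the arguments are perfectly forwarded and the
    // pair is constructed directly in place, with no temporary.
    m.emplace(5, "five");
}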
So why didn't they implement the right version of emplace_back in MSVC? Actually, it bugged me a while ago too, so I asked the same question on the Visual C++ blog and got an answer from Stephan T. Lavavej, the official maintainer of the Visual C++ standard library implementation at Microsoft. The gist of his answer: VC10 doesn't support variadic templates, so the true variadic form couldn't be implemented there.
It's an understandable decision. Everyone who has tried even once to emulate variadic templates with horrible preprocessor tricks knows how disgusting that stuff gets.