In your example myApple has the special value null (typically all zero bits), and so is referencing nothing. The object that it originally referred to is now lost on the heap; there is no way to retrieve its location. This is known as a memory leak on systems without garbage collection.
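In C terms, a minimal sketch of that kind of leak might look like this (the name myApple and the single-int allocation are just illustrative):

```c
#include <stdlib.h>

int main(void)
{
    int *myApple = malloc(sizeof *myApple);  /* the object lives on the heap */

    myApple = NULL;  /* the only reference is gone; without garbage collection
                        the allocation can never be free()d, so it is leaked */
    return 0;
}
```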
If you originally set 1000 references to null, then you have space for just 1000 references, typically 1000 * 4 bytes (on a 32-bit system, twice that on 64). If those 1000 references originally pointed to real objects, then you allocated 1000 times the size of each object, plus space for the 1000 references.
In some languages (like C and C++), pointers always point to something, even when "uninitialized". The issue is whether the address they hold is legal for your program to access. The special address zero (aka null) is deliberately not mapped into your address space, so a segmentation fault is generated by the memory management unit (MMU) when it is accessed, and your program crashes. But since address zero is deliberately not mapped in, it becomes an ideal value to use to indicate that a pointer is not pointing to anything, hence its role as null.

To complete the story, as you allocate memory with new or malloc(), the operating system configures the MMU to map pages of RAM into your address space, and they become usable. There are still typically vast ranges of address space that are not mapped in, and so lead to segmentation faults, too.
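As a rough illustration (a sketch, not tied to any particular operating system), here is how that plays out in C:

```c
#include <stdlib.h>

int main(void)
{
    int *p = NULL;          /* holds address zero: points to nothing */

    /* *p = 42;                dereferencing would touch an unmapped page,
                               so the MMU raises a fault (SIGSEGV) */

    p = malloc(sizeof *p);  /* the OS maps heap pages into the address
                               space, so this address is legal to use */
    if (p != NULL) {
        *p = 42;
        free(p);
    }
    return 0;
}
```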
No, it does not. In C, variables have a fixed set of memory addresses to work with. If you are working on a system with 4-byte ints, and you set an int variable to 2,147,483,647 and then add 1, the variable will usually contain -2,147,483,648. (On most systems; the behavior is actually undefined.) No other memory locations will be modified.
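A small sketch of that overflow, keeping in mind that signed overflow is formally undefined behavior, so the printed result is only what you will usually observe:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int x = INT_MAX;    /* 2,147,483,647 with 4-byte ints */

    x = x + 1;          /* undefined behavior; on most machines the value
                           simply wraps, and no other memory is touched */

    printf("%d\n", x);  /* typically prints -2147483648 */
    return 0;
}
```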
In essence, the compiler will not let you assign a value that is too big for the type; this will generate a compiler error. If you force it to with a cast, the value will be truncated.
Looked at in a bitwise way, if the type can only store 8 bits, and you try to force the value 1010101010101 into it with a cast, you will end up with the bottom 8 bits, or 01010101.
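For instance, forcing that 13-bit value into an 8-bit type with a cast keeps only the low 8 bits (unsigned char is used here so the truncation is well defined):

```c
#include <stdio.h>

int main(void)
{
    int wide = 0x1555;                           /* binary 1 0101 0101 0101 */
    unsigned char narrow = (unsigned char)wide;  /* keeps the bottom 8 bits */

    printf("%u\n", narrow);                      /* 0101 0101 = 85 */
    return 0;
}
```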
In your example, regardless of what you do to myArray[2], myArray[3] will contain 4. There is no "spill over". If you try to put in something that needs more than 4 bytes, it will just lop off everything on the high end, leaving the bottom 4 bytes. On most systems, this will result in -2,147,483,648.
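A quick sketch of the "no spill over" point (myArray here is just a hypothetical 4-element int array):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int myArray[4] = {1, 2, 3, 4};

    myArray[2] = INT_MAX;
    myArray[2] += 1;             /* overflows myArray[2] only (undefined
                                    behavior, usually wraps to INT_MIN) */

    printf("%d\n", myArray[3]);  /* still prints 4: the neighbour is untouched */
    return 0;
}
```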
From a practical standpoint, you want to just make sure this never, ever happens. These sorts of overflows often result in hard-to-solve defects. In other words, if you think there is any chance at all your values will be in the billions, don't use int.
The C standard doesn't mandate any particular way of representing negative signed numbers.
In most implementations that you are likely to encounter, negative signed integers are stored in what is called two's complement. The other major way of storing negative signed numbers is called one's complement.
The two's complement of an N-bit number x is defined as 2^N - x. For example, the two's complement of the 8-bit value 1 is 2^8 - 1, or 1111 1111. The two's complement of the 8-bit value 8 is 2^8 - 8, which in binary is 1111 1000. This can also be calculated by flipping the bits of x and adding one, as in the sketch below. The one's complement of an N-bit number x is, basically, x with all its bits flipped.
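A minimal sketch of the flip-and-add-one rule for the 8-bit value 8:

```c
#include <stdio.h>

int main(void)
{
    unsigned char x    = 8;                  /* 0000 1000 */
    unsigned char ones = (unsigned char)~x;  /* flip all bits: 1111 0111 */
    unsigned char twos = ones + 1;           /* add one:       1111 1000 */

    printf("%u %u\n", ones, twos);           /* 247 248, i.e. 2^8 - 8 */
    return 0;
}
```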
Two's complement has several advantages over one's complement. For example, it doesn't have the concept of 'negative zero', which for good reason is confusing to many people. Also, addition, subtraction and multiplication work the same on signed integers implemented with two's complement as they do on unsigned integers.
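For example (a sketch; the bit patterns are the point, not the specific numbers), the same 8-bit addition produces the same bits whether the operands are read as signed or unsigned:

```c
#include <stdio.h>

int main(void)
{
    /* 1111 1000 is 248 read as unsigned, -8 read as two's-complement signed */
    unsigned char u = (unsigned char)(248 + 10);  /* 258 wraps to 0000 0010 = 2 */
    signed char   s = (signed char)(-8 + 10);     /* -8 + 10 = 2, same bit pattern */

    printf("%u %d\n", u, s);                      /* prints 2 2 */
    return 0;
}
```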