Architecture – If a number is too big does it spill over to the next memory location

Tags: architecture, c, hexadecimal, memory

I've been reviewing C programming and there are just a couple things bothering me.

Let's take this code for example:

#include <stdio.h>

int main(void)
{
    int myArray[5] = {1, 2, 2147483648, 4, 5};  /* 2147483648 is one past INT_MAX */
    int *ptr = myArray;
    for (int i = 0; i < 5; i++, ptr++)
        printf("\n Element %d holds %d at address %p", i, myArray[i], (void *)ptr);
}

I know that an int can hold a maximum value of positive 2,147,483,647. So by going one over that, does it "spill over" to the next memory address which causes element 2 to appear as "-2147483648" at that address? But then that doesn't really make sense because in the output it still says that the next address holds the value 4, then 5. If the number had spilled over to the next address then wouldn't that change the value stored at that address?

I vaguely remember from programming in MIPS assembly, where I could step through a program and watch the memory addresses, that the values assigned to those addresses would change.

Unless I am remembering incorrectly, here is another question: if the number assigned to a specific address is bigger than its type can hold (like in myArray[2]), does it simply not affect the value stored at the subsequent address?

Example: We have int myNum = 4 billion at address 0x10010000. Of course myNum can't store 4 billion so it appears as some negative number at that address. Despite not being able to store this large number, it has no effect on the value stored at the subsequent address of 0x10010004. Correct?

Each memory address just has enough space to hold a certain size of number/character, and if a value goes over that limit it is simply represented differently (like trying to store 4 billion in an int and it appearing as a negative number), so it has no effect on the numbers/characters stored at the next address.

Sorry if I went overboard. I've been having a major brain fart all day from this.

Best Answer

No, it does not. In C, each variable occupies a fixed set of memory addresses (a fixed number of bytes). If you are working on a system with 4-byte ints, and you set an int variable to 2,147,483,647 and then add 1, the variable will usually contain -2,147,483,648 (on most systems, that is; signed overflow is technically undefined behavior). No other memory locations will be modified.
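As a minimal sketch (the wrap-around noted in the comments is only what two's-complement hardware typically does; the C standard calls signed overflow undefined behavior, so a compiler is free to do something else):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int a = INT_MAX;  /* 2,147,483,647 with 4-byte ints */
    int b = 4;        /* a separate variable with its own 4 bytes */

    a = a + 1;        /* undefined behavior; in practice usually wraps to INT_MIN */

    printf("a = %d, b = %d\n", a, b);  /* b still prints 4: nothing spilled over */
    return 0;
}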

In essence, the compiler will complain if you write a constant that is too big for the type; in C this is usually a warning rather than a hard error (though many projects promote such warnings to errors). Whether you ignore the warning or silence it with a cast, the value gets converted, and on typical systems that means it is truncated to the low-order bits.

Looked at in a bitwise way, if the type can only store 8 bits and you try to force the binary value 1010101010101 into it with a cast, you will end up with the bottom 8 bits, or 01010101.
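In C terms (a small sketch; binary 1010101010101 is 0x1555, or 5461 decimal, and its bottom 8 bits are 0x55, or 85):

#include <stdio.h>

int main(void)
{
    int big = 0x1555;                          /* binary 1010101010101, i.e. 5461 */
    unsigned char small = (unsigned char)big;  /* the cast keeps only the bottom 8 bits */

    printf("big   = 0x%X (%d)\n", (unsigned)big, big);
    printf("small = 0x%X (%u)\n", (unsigned)small, (unsigned)small);  /* 0x55, binary 01010101 */
    return 0;
}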

In your example, regardless of what you do to myArray[2], myArray[3] will contain 4. There is no "spill over". If you try to store something that needs more than 4 bytes, the conversion just lops off everything on the high end, leaving the bottom 4 bytes. On most systems that is how 2147483648 ends up as -2147483648.
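If you want to see that with your own array, here is a quick sketch that dumps the raw bytes of elements 2 and 3 after storing the too-big value (the byte order in the final comment assumes a little-endian machine with 4-byte ints):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int myArray[5] = {1, 2, 0, 4, 5};

    /* Out-of-range conversion: implementation-defined, typically -2147483648. */
    myArray[2] = (int)2147483648u;

    /* Copy the raw bytes of elements 2 and 3; only element 2's 4-byte slot changed. */
    unsigned char bytes[2 * sizeof(int)];
    memcpy(bytes, &myArray[2], sizeof bytes);

    for (size_t i = 0; i < sizeof bytes; i++)
        printf("%02X ", (unsigned)bytes[i]);
    printf("\n");  /* typically: 00 00 00 80 04 00 00 00 */

    return 0;
}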

From a practical standpoint, you just want to make sure this never, ever happens. These sorts of overflows often result in hard-to-solve defects. In other words, if you think there is any chance at all that your values will be in the billions, don't use int; reach for a wider type such as long long or a fixed-width type like int64_t.
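For example (just one option; plain long long or a fixed-width type from <stdint.h> both give you at least 64 bits):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int64_t big = 4000000000;                /* would not fit in a 32-bit int */
    printf("big = %lld\n", (long long)big);  /* prints 4000000000 */
    return 0;
}

Note that printf needs %lld (or the PRId64 macro from <inttypes.h>) for 64-bit values, which is why the cast is there.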