There are two different memory limits: the virtual memory limit and the physical memory limit.
Virtual Memory
Virtual memory is limited by the size and layout of the available address space. Usually the executable code and static data sit at the very beginning, with the heap growing past them; at the end is an area reserved by the kernel, and before it the shared libraries and the stack (which on most platforms grows downward). That gives the heap and the stack free space to grow, while the other areas are fixed and known at process startup.
The free virtual memory is not initially marked as usable; it is marked so during allocation. While the heap can grow to use all available memory, most systems don't auto-grow stacks. IIRC the default stack limit is 8 MiB on Linux and 1 MiB on Windows, and it can be changed on both systems. The virtual memory also contains any memory-mapped files and hardware.
One reason why the stack can't be auto-grown arbitrarily is that multi-threaded programs need a separate stack for each thread, so the stacks would eventually get in each other's way.
On 32-bit platforms the total amount of virtual memory is 4 GiB, with both Linux and Windows normally reserving the last 1 GiB for the kernel, giving you at most 3 GiB of address space. There is a special Linux configuration that does not reserve anything, giving you the full 4 GiB. It is useful in the rare case of large databases where the extra 1 GiB saves the day, but for regular use it is slightly slower due to the additional page-table reloads.
On 64-bit platforms the virtual address space is 16 EiB (2^64 bytes; current CPUs actually implement fewer address bits, but still far more than any installed RAM), so you don't have to think about it.
Physical Memory
Physical memory is usually only allocated by the operating system when the process needs to access it. How much physical memory a process is using is a very fuzzy number, because some memory is shared between processes (the code, shared libraries, and any other mapped files), data from files are loaded into memory on demand and discarded when there is a memory shortage, and "anonymous" memory (memory not backed by files) may be swapped out.
On Linux, what happens when you run out of physical memory depends on the vm.overcommit_memory system setting. The default is to overcommit: when you ask the system to allocate memory, it gives some to you, but only allocates virtual memory. When you actually access that memory, the kernel tries to find physical memory to back it, discarding data that can be re-read or swapping things out as necessary. If it finds it can't free up anything, it simply removes the process from existence (there is no way for the process to react, because that reaction could itself require more memory, which would lead to an endless loop).
This is how processes die on Android (which is also Linux), with added logic for choosing which process to remove based on what the process is doing and how old it is. Android processes simply stop doing anything but sit in the background, and the "out of memory killer" kills them when it needs memory for new ones.
In general, a mutable memory location will not be shared between applications unless explicitly requested. Thus, when libcnt writes to cnt_loads, the page on which cnt_loads resides will be duplicated. This behaviour is known as copy-on-write (COW). The same thing happens if you duplicate a process with fork(): the child process will not share writeable memory with the parent; if any write occurs to a "forked" page, the page is duplicated.
If you want to use shared memory for interprocess communication, you should use System V shared memory, POSIX shared memory, or mmap instead. Note that these methods are somewhat persistent, i.e. you will need to remove the shared memory object after use.
You can approximate the behaviour you originally wanted by combining shared memory with __attribute__((constructor)) and __attribute__((destructor)) functions in your shared library. The constructor function runs every time the library is opened, so you can use it to initialize/open the shared memory and increment the load count. If, in addition to the load count, you also maintain a reference count (how many times the shared library is currently open in the system) and decrement it in the destructor, you can properly remove the shared memory once the reference count falls to 0.
Note that using shared memory for inter-process communication absolutely requires some form of mutual exclusion, for example a semaphore or mutex. Failing to synchronize properly will lead to race conditions (for example, what happens when two processes open your library at precisely the same time?). You may avoid mutual exclusion if you can increment/decrement the counters in an atomic way, but I recommend you use OS-provided inter-process semaphores instead of atomics, because atomic operations are tricky to use properly and your problem is not performance-critical at all (thus no need for lock-free operations).
Best Answer
Who said the compiler will reserve any space? (It could be register-only.)
This is completely undefined.
All that you can say is that it (x) can only be accessed from inside the inner block. How the compiler allocates memory (on a stack, if one even exists) is completely up to the compiler, as the memory region may be re-used for multiple objects (if the compiler can prove that their lifespans do not overlap).
Undetermined.
Undetermined.
But if x were a class object, then the constructor will only be run if the block is entered. The compiler may not even allocate memory.
Yes