How OS Limits Stack and Heap Sizes

Tags: heap, linux, memory, stack

Note: if you need a specific OS to answer, please assume Linux.

Whenever I run a program, it is given a virtual address space to run in, with an area for its stack and one for its heap.

Question 1: do the stack and the heap have a static size limit (e.g., 2 gigabytes each), or is this limit dynamic, changing according to the memory allocations during the execution of the program (i.e., 4 gigabytes total to be used by both, so if a program only uses the stack, it will be able to have a stack with 4 gigabytes)?

Question 2: How is the limit defined? Is it the total available RAM?

Question 3: What about the text (code) and data sections, how are they limited?

Best Answer

There are two different memory limits: the virtual memory limit and the physical memory limit.

Virtual Memory

Virtual memory is limited by the size and layout of the available address space. Usually the executable code and static data sit at the very beginning, with the heap growing upward past them; the area at the very end is reserved by the kernel, and below it sit the shared libraries and the stack (which on most platforms grows down). That leaves the heap and the stack free space to grow toward each other, while the other areas are fixed and known at process startup.
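You can see this layout for yourself by printing the addresses of objects in each region. A minimal sketch in C (with address-space layout randomization the exact numbers change between runs, but the ordering code < data < heap < stack usually holds on Linux):

```c
#include <stdio.h>
#include <stdlib.h>

int global_data = 42;               /* static data segment */

int main(void) {
    int stack_var = 0;              /* lives on the stack */
    void *heap_ptr = malloc(16);    /* lives on the heap */

    printf("code  : %p\n", (void *)main);        /* executable code */
    printf("data  : %p\n", (void *)&global_data);
    printf("heap  : %p\n", heap_ptr);
    printf("stack : %p\n", (void *)&stack_var);

    free(heap_ptr);
    return 0;
}
```

Comparing the output with /proc/<pid>/maps shows the same regions described above.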

Free virtual memory is not initially marked as usable; it only becomes usable when it is allocated. While the heap can grow to use all available memory, most systems do not auto-grow stacks without limit. IIRC the default stack limit is 8 MiB on Linux and 1 MiB on Windows, and it can be changed on both systems. The virtual address space also contains any memory-mapped files and hardware.
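On Linux this stack limit is the RLIMIT_STACK resource limit; you can inspect it with `ulimit -s` in a shell, or query and change it from C with getrlimit(2)/setrlimit(2). A small sketch:

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;

    /* RLIMIT_STACK is the per-process stack size limit
       (the soft limit is typically 8 MiB on Linux). */
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* RLIM_INFINITY prints as a very large number. */
    printf("soft stack limit: %llu bytes\n", (unsigned long long)rl.rlim_cur);
    printf("hard stack limit: %llu bytes\n", (unsigned long long)rl.rlim_max);

    /* Raise the soft limit to 16 MiB; fails if it exceeds the hard limit. */
    rl.rlim_cur = 16UL * 1024 * 1024;
    if (setrlimit(RLIMIT_STACK, &rl) != 0)
        perror("setrlimit");

    return 0;
}
```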

One reason the stack cannot be auto-grown arbitrarily is that multi-threaded programs need a separate stack for each thread, so growing stacks would eventually get in each other's way.
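Because each thread's stack must be carved out of the shared address space up front, its size is fixed when the thread is created. With POSIX threads you can set it explicitly; a sketch (compile with -pthread; the 1 MiB figure is just an example value):

```c
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    puts("running on a thread with an explicit stack size");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;

    /* Each thread gets its own fixed-size stack; here we request 1 MiB
       instead of relying on the default (glibc derives the default from
       RLIMIT_STACK). */
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 1024);

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```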

On 32-bit platforms the total amount of virtual memory is 4 GiB. Linux normally reserves the last 1 GiB for the kernel, giving you at most 3 GiB of address space (Windows defaults to a 2 GiB/2 GiB split, optionally 3 GiB/1 GiB). There is a special Linux kernel configuration (the 4G/4G split) that does not reserve anything, giving you the full 4 GiB. It is useful for the rare case of large databases, where the last 1 GiB saves the day, but for regular use it is slightly slower due to the additional page-table reloads.

On 64-bit platforms the address space is 16 EiB in theory (current x86-64 CPUs implement 48 address bits, of which Linux gives 128 TiB to user space), which is large enough that you don't have to think about it.

Physical Memory

Physical memory is usually allocated by the operating system only when the process actually accesses it. How much physical memory a process is using is a very fuzzy number, because some memory is shared between processes (the code, shared libraries, and any other mapped files), data from files is loaded into memory on demand and discarded when memory runs short, and "anonymous" memory (memory not backed by files) may be swapped out.
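You can watch this lazy allocation happen on Linux by comparing VmSize (virtual) with VmRSS (resident, i.e. physical) in /proc/self/status before and after touching newly allocated memory. A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print VmSize (virtual) and VmRSS (resident) from /proc/self/status. */
static void show_mem(const char *label) {
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    printf("-- %s --\n", label);
    while (fgets(line, sizeof line, f))
        if (!strncmp(line, "VmSize", 6) || !strncmp(line, "VmRSS", 5))
            fputs(line, stdout);
    fclose(f);
}

int main(void) {
    size_t size = 256UL * 1024 * 1024;   /* 256 MiB */

    show_mem("before allocation");
    char *p = malloc(size);              /* grows VmSize only */
    if (!p)
        return 1;
    show_mem("after malloc");
    memset(p, 1, size);                  /* touching the pages grows VmRSS */
    show_mem("after touching the pages");

    free(p);
    return 0;
}
```

After the malloc only VmSize jumps; VmRSS follows only once the pages are actually written.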

On Linux, what happens when you run out of physical memory depends on the vm.overcommit_memory sysctl. The default is to overcommit: when you ask the system to allocate memory, it gives it to you, but only allocates the virtual memory. When you actually access that memory, the kernel tries to find physical memory to back it, discarding data that can be re-read or swapping things out as necessary. If it finds it cannot free anything up, it simply removes a process from existence (there is no way for the process to react, because the reaction could itself require more memory, which would lead to an endless loop).
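A sketch of what overcommit looks like from a process's point of view: under the default heuristic policy each individual allocation below succeeds, and on a machine with less than 64 GiB of RAM the total reserved virtual memory ends up exceeding physical memory (under strict accounting, vm.overcommit_memory=2, the loop would stop early instead):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Each 1 GiB allocation only reserves virtual address space, so the
       total (64 GiB here) can exceed physical RAM plus swap.  Touching
       all of it is what would summon the OOM killer. */
    size_t chunk = 1024UL * 1024 * 1024;   /* 1 GiB */
    size_t total = 0;

    for (int i = 0; i < 64; i++) {
        void *p = malloc(chunk);
        if (!p) {
            printf("allocation %d refused (e.g. vm.overcommit_memory=2)\n", i);
            break;
        }
        total += chunk;
    }
    printf("reserved %zu GiB of virtual memory\n", total >> 30);
    return 0;
}
```

Touching all of that memory (e.g. with memset) is what would eventually trigger the OOM kill described above.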

This is how processes die on Android (which is also Linux). The logic was improved to choose which process to remove from existence based on what the process is doing and how old it is. Android processes simply stop doing anything but sit in the background, and the "out of memory killer" kills them when it needs memory for new ones.