Why limit the total amount of shared memory available

linux-kernel memory

I have some large-memory processes that share memory on a CentOS 7 system, and I am tuning the memory subsystem for them. For kernel.shmmax and kernel.shmall, the Red Hat documentation states:

kernel.shmmax defines the maximum size in bytes of a single shared memory segment that a Linux process can allocate in its virtual address space.

kernel.shmall sets the total amount of shared memory pages that can be used system wide.

Why are there such limits on shared memory? I can limit the total memory a user or process uses with limits or cgroups. Why would I want to limit the total shared memory available on the system? Does the system take a performance hit when it has a lot of shared memory to manage?

Best Answer

It's part of system hygiene and a bit of legacy. The cgroup controls are relatively new to Linux (2.6.24 and later), whereas shmmax/shmall were present in the kernel as early as the 1.2.x series:

ipc/shm.c

    {
            struct shminfo shminfo;
            if (!buf)
                    return -EFAULT;
            shminfo.shmmni = SHMMNI;
            shminfo.shmmax = SHMMAX;
            shminfo.shmmin = SHMMIN;
            shminfo.shmall = SHMALL;
            shminfo.shmseg = SHMSEG;
            err = verify_area (VERIFY_WRITE, buf, sizeof (struct shminfo));
            if (err)
                    return err;
            memcpy_tofs (buf, &shminfo, sizeof(struct shminfo));
            return max_shmid;
    }

The limiting philosophy of Linux has evolved over the last 21 years. Back then, the design pattern of global limits was dominant: you didn't want all of your RAM consumed by IPC, so you set a high-water mark for it to ensure there was enough room for everything else. This still works alongside modern cgroups: individual limits for your processes, plus a global limit to keep the system out of swap and running well.
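One wrinkle worth remembering when setting that global limit: shmmax is expressed in bytes but shmall in pages. So to cap system-wide shared memory at, say, 16 GiB with the usual 4 KiB page size, shmall = 16 × 1024³ / 4096 = 4194304 pages. A sysctl fragment might look like this (the 16 GiB figure is purely illustrative, not a recommendation):

```
# /etc/sysctl.d/shm.conf -- illustrative values only
kernel.shmmax = 17179869184   # largest single segment: 16 GiB, in bytes
kernel.shmall = 4194304       # system-wide cap: 16 GiB / 4096-byte pages
```

Forgetting the bytes-vs-pages distinction is a classic way to end up with a shmall that is either uselessly huge or far tighter than intended.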