CentOS – Multiple tmpfs mounts exist with total combined size larger than total memory

centos centos-7 systemd tmpfs

I have a VPS with 7.5G RAM running CentOS 7, and to be honest I haven't done much optimization since it's serving the site pretty well. Today I was going to create a ramdisk for MySQL temp tables, so I thought I should first check whether any ramdisk or tmpfs already exists, and I found this:

Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       XXXG  XXXG  XXXG   XX% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  324K  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           783M     0  783M   0% /run/user/0

I am an average *nix user, so it looks to me that there are already multiple tmpfs mounts, and their combined maximum size is actually more than the total memory I have. So should I be worried that these multiple tmpfs mounts can take all the available memory and bring the machine to its knees? Or are they created by the OS itself and won't create any issue?

Update 1
@Michael: It looks like I'm heading towards trouble. After adding a 1G file in /run, the available free memory shown in top dropped by 1G and buff/cache increased by 1G. So if any user/package/script etc. manages to write enough stuff into those tmpfs mounts, the system can go down. Any ideas how I can trace these tmpfs mounts and figure out why/how they are created?
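For anyone wanting to reproduce that experiment, here is a sketch using /dev/shm and a deliberately small file so it is safe to run (the original test used a 1G file in /run, which is root-only):

```shell
# Memory before: note "buff/cache" in free, and the Shmem counter,
# which is where the kernel accounts tmpfs pages
free -m
grep Shmem /proc/meminfo

# Write a file into a tmpfs; its pages are backed purely by RAM
dd if=/dev/zero of=/dev/shm/tmpfs-demo bs=1M count=10 status=none

# "buff/cache" and Shmem grow by the file size; free memory shrinks
free -m
grep Shmem /proc/meminfo

# Deleting the file gives the memory straight back
rm /dev/shm/tmpfs-demo
```

Deleting the file is enough to release the memory, which is also why a runaway writer is the real risk here rather than the mounts themselves.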

Update 2
running swapon --show returns

NAME      TYPE      SIZE USED PRIO
/dev/vda2 partition   8G   0B   -1  

which, if I'm reading it right, means my swap is actually living in the memory itself, so if the system attempts to use swap to move stuff out of memory, the stuff will just end up in memory again, in another place. Any suggestions to cure this situation?
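For what it's worth, one way to double-check where that swap device actually lives (the device name /dev/vda2 is taken from the output above):

```shell
# TYPE "partition" in swapon's output means the swap area is a block-device
# partition, i.e. it lives on the (virtual) disk rather than in RAM
swapon --show

# lsblk shows the block-device tree; a swap partition appears under its disk
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT
```

A swap area would only live in RAM with something like zram; a plain partition on /dev/vda is backed by the hypervisor's storage, so swapping really does move pages out of guest memory.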

Best Answer

Regarding where these tmpfs mounts come from:

  • /dev/shm is a standard feature of most Linux distributions. Here shm stands for "shared memory". It's a ramdisk that is created by default. It shows 0 bytes used, indicating you aren't using it.
  • /run/user/0 matches the per-user runtime directory pattern used by systemd-based systems, which you appear to have: systemd creates /run/user/<UID> for each logged-in user, and 0 is root's UID. systemd also allows you to specify memory limits for the "service units" it runs. Try sudo grep -R MemoryLimit /etc/systemd to see if you find any unit files that specify a memory limit.
  • I believe /sys/fs/cgroup is also created by systemd.
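If you want to trace the mounts yourself, util-linux's findmnt (present on CentOS 7) lists every tmpfs together with where it came from:

```shell
# Every tmpfs currently mounted, with source, target and mount options
findmnt --types tmpfs

# The standard mounts are not configured in fstab; anything that shows up
# here was added manually
grep tmpfs /etc/fstab 2>/dev/null || echo "no tmpfs entries in fstab"
```

On a systemd machine, `systemctl list-units --type=mount --all` additionally shows the mount units systemd tracks for these mount points.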

In other words, it looks like you've got a number of standard RAM disks, which happen to overbook your memory if you tried to fill them all up at once. I wouldn't worry about the mere existence of these mounts, since they are all barely used in the output you posted.
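A quick way to confirm how much memory the tmpfs mounts are actually consuming at any moment (the Size column is only a ceiling; nothing is reserved up front):

```shell
# Per-mount usage: only the "Used" column costs you RAM
df -h -t tmpfs

# Kernel-wide total: tmpfs pages are accounted under Shmem
grep Shmem /proc/meminfo
```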

If you want to limit the memory usage of particular applications, systemd makes it easy to do this on a per-service basis. Read up on how to use MemoryLimit in your "service unit" configuration files under /etc/systemd.
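As a sketch, assuming a MySQL unit named mysqld.service (adjust to your actual unit name), a drop-in file such as the following would cap the service at 1G; on CentOS 7 this maps to a cgroup memory limit:

```ini
# /etc/systemd/system/mysqld.service.d/memory.conf  (hypothetical path)
[Service]
MemoryLimit=1G
```

After creating the drop-in, run systemctl daemon-reload and restart the service; systemctl show mysqld.service --property=MemoryLimit then reports the effective limit (a very large number means unlimited).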