CentOS – root file system (rootfs) is full. Need assistance to increase this or remove unneeded software

centos ext4 rootfs

Almost at a loss here. I've been running a fast server with Plesk for a couple of years, hosting our own website and services.
Over the last few days I noticed a lot of problems, but didn't think to check available disk space. When I checked last night, I got:

[root@server ~]# df -H
Filesystem      Size  Used Avail Use% Mounted on
rootfs           22G   21G     0 100% /
/dev/root        22G   21G     0 100% /
devtmpfs         34G  238k   34G   1% /dev
/dev/sda2       136G  7.4G  121G   6% /var
tmpfs            34G     0   34G   0% /dev/shm
/dev/root        22G   21G     0 100% /var/named/chroot/etc/named
/dev/sda2       136G  7.4G  121G   6% /var/named/chroot/var/named
/dev/root        22G   21G     0 100% /var/named/chroot/etc/named.rfc1912.zones
/dev/root        22G   21G     0 100% /var/named/chroot/etc/rndc.key
/dev/root        22G   21G     0 100% /var/named/chroot/usr/lib64/bind
/dev/root        22G   21G     0 100% /var/named/chroot/etc/named.iscdlv.key
/dev/root        22G   21G     0 100% /var/named/chroot/etc/named.root.key

I am fairly knowledgeable about server systems, but cannot work out what is filling this space up, how to free it, or whether it is even possible to increase the rootfs size.
Possibly related: my backups, kept on a mounted NAS, are GONE! I've been using rsync and a custom msqlbackup script for them, but operations on the server have been rather haywire over the last few days.
Any pointers to what I can do? I have searched other posts here and on other websites, but none of them helped me identify what to do. I would really appreciate your assistance.
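As a starting point for tracking down what is consuming the space, something like the following is a common approach (a sketch, not taken from this server; GNU du/sort options, present on CentOS 6 and later). The -x flag keeps du on a single filesystem, so the separate /var partition and any NFS mounts are not counted against /:

```shell
# Largest directories directly under /, staying on the root filesystem.
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -n 15

# Files deleted while a process still holds them open also consume
# space without showing up in du; lsof (if installed) can list them.
# "|| true": finding no such files is not an error here.
lsof +L1 2>/dev/null | grep -i deleted || true
```

Drilling into whichever directory dominates (re-running du one level deeper) usually pins down the culprit within a few iterations.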
Thanks

Some further information:
Output of fdisk -l:

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2612    20971520   83  Linux
/dev/sda2            2612       19324   134244352   83  Linux
/dev/sda3           19324       19390      525312   82  Linux swap / Solaris

Contents of /etc/fstab:

/dev/sda1       /       ext4    errors=remount-ro       0       1
/dev/sda2       /var    ext4    defaults        0       2
/dev/sda3       none    swap    defaults        0       0
proc            /proc   proc    defaults                0       0
sysfs           /sys    sysfs   defaults                0       0
tmpfs           /dev/shm        tmpfs   defaults        0       0
devpts          /dev/pts        devpts  defaults        0       0
10.16.101.3:/nas-000009/mininas-001783 /mnt nfs rw 0    0

After booting into recovery and removing some log files to clear a wee bit of space, the output of df -h is oddly different:

Filesystem      Size  Used Avail Use% Mounted on
rootfs           20G   20G     0 100% /
/dev/root        20G   20G     0 100% /
devtmpfs         32G  232K   32G   1% /dev
/dev/sda2       126G  6.6G  113G   6% /var
tmpfs            32G     0   32G   0% /dev/shm
/dev/root        20G   20G     0 100% /var/named/chroot/etc/named
/dev/sda2       126G  6.6G  113G   6% /var/named/chroot/var/named
/dev/root        20G   20G     0 100% /var/named/chroot/etc/named.rfc1912.zones
/dev/root        20G   20G     0 100% /var/named/chroot/etc/rndc.key
/dev/root        20G   20G     0 100% /var/named/chroot/usr/lib64/bind
/dev/root        20G   20G     0 100% /var/named/chroot/etc/named.iscdlv.key
/dev/root        20G   20G     0 100% /var/named/chroot/etc/named.root.key

rootfs now shows as only 20 GB in size, at 100% usage. Is the size changing dynamically with the contents?
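The apparent shrink is a units artifact, not a resized partition: the first listing came from df -H, which reports in powers of 1000 (GB), while the second came from df -h, which reports in powers of 1024 (GiB). The sda1 size from fdisk -l above makes this concrete:

```shell
# /dev/sda1 is 20971520 blocks of 1 KiB each (from fdisk -l above).
blocks=20971520
bytes=$((blocks * 1024))

# df -h reports in GiB (powers of 1024):
echo "$((bytes / 1024 / 1024 / 1024)) GiB"   # prints: 20 GiB

# df -H reports in GB (powers of 1000); df rounds 21.47 up to 22G:
echo "$((bytes / 1000 / 1000 / 1000)) GB"    # prints: 21 GB
```

So both outputs describe the same 20 GiB partition; nothing changed between the two runs except the reporting unit.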

Best Answer

Some help from Iain identified that the mounted network-attached storage had gone away. With the NFS mount missing, the backups were being written into /mnt on the root filesystem, filling it very quickly. I'll just need to figure out why that happened and ensure it doesn't happen again.
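One way to ensure it doesn't happen again is to have the backup script refuse to run unless the NAS is actually mounted. A minimal sketch, assuming the /mnt mount point from the fstab above and using the mountpoint utility from util-linux (the paths and function name are placeholders, not from the original script):

```shell
#!/bin/sh
# Guard: only rsync to the destination if it is a real mount point,
# otherwise we would silently fill the root filesystem instead.
backup_to_nas() {
    dest=$1
    src=$2
    if ! mountpoint -q "$dest"; then
        echo "ERROR: $dest is not a mounted filesystem; skipping backup" >&2
        return 1
    fi
    rsync -a "$src" "$dest/"
}

# In the real script, something like:
# backup_to_nas /mnt /var/www || exit 1
```

On systems without the mountpoint utility, comparing the device IDs of the directory and its parent (stat -c %d) achieves the same check.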
