When you have a low-disk situation with a log file taking up several GB and no disk space left, what is the best course of action without losing any logs?
One thing I tried was to mv the log file and compress it, but the old file then shows up as deleted in lsof, which might be a problem later. Another option is to gzip the log file and kill -1 (SIGHUP) the process. But is that something you would do on a production server for services like httpd and mysql?
Thanks.
Best Answer
If storage is 100% full, compression won't work, because gzip needs space to write the compressed copy before it can remove the original.
Copy logs to other storage.
scp -r /var/log/ otherhost:
Review and delete old log files.
find /var/log -type f -mtime +7
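Before deleting anything under /var/log, it helps to see what that find expression actually matches. Here is a quick sketch in a throwaway directory (the directory and file names are made up for the demo): one file backdated ten days, one fresh file, and -mtime +7 selects only the old one.

```shell
# Demo: -mtime +7 matches files last modified more than 7 days ago.
dir=$(mktemp -d)
touch -d '10 days ago' "$dir/old.log"   # backdate one file (GNU touch)
touch "$dir/new.log"                    # fresh file, should not match
find "$dir" -type f -mtime +7           # prints only old.log
rm -r "$dir"
```

Once you have reviewed the list on the real system, you can append `-delete` to the same expression to remove the matches.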
Expand file system if necessary.
Compress some large files. Reload services to open a new log file.
gzip /var/log/httpd/access_log ; systemctl reload httpd.service
Implement logrotate or an equivalent script to manage logs automatically. The usual pattern is to move the current file to a new name, then have the service reopen a new log file.
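A minimal logrotate rule following that pattern might look like this. The path, retention, and reload command are illustrative assumptions, not from the original answer; adjust them to your service.

```
/var/log/httpd/*log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    sharedscripts
    postrotate
        /bin/systemctl reload httpd.service > /dev/null 2>&1 || true
    endscript
}
```

delaycompress leaves the most recent rotated file uncompressed, which avoids compressing a file the service may still briefly have open.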
Consider implementing a remote log server and shipping logs off the host instead.
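With rsyslog, for example, shipping everything to a central host can be a single forwarding rule. The hostname below is a placeholder; `@@` forwards over TCP, a single `@` would use UDP.

```
# /etc/rsyslog.d/forward.conf  (hypothetical file name)
*.* @@loghost.example.com:514
```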
Whether sending a signal to a service or otherwise reloading it is acceptable is up to you. Of course, you can try it on a test system if this makes you nervous.
If you don't tell the service to open a new file, there is another option: truncate in place.
cp /dev/null file.log
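The reason truncating in place works is that the file's inode never changes, so a process holding the log open keeps writing to the same (now emptied) file. A small sketch, using a temp file rather than a real log (`stat -c %i` is GNU coreutils):

```shell
# Truncate a file in place and confirm the inode is unchanged,
# i.e. an open file handle would still point at the same file.
f=$(mktemp)
echo "old log entries" > "$f"
before=$(stat -c %i "$f")
cp /dev/null "$f"                 # same effect as ': > "$f"'
after=$(stat -c %i "$f")
[ "$before" = "$after" ] && echo "inode unchanged"
[ ! -s "$f" ] && echo "file emptied"
rm -f "$f"
```

Contrast this with mv: moving the file gives you a new inode for the new log, which is exactly why the old one shows up as deleted in lsof until the service reopens it.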
or the logrotate option "copytruncate". However, beware the warning in the logrotate man page about this not being atomic: