On remote Linux-based devices, I thought I'd use logrotate to manage any core files our appliance may create. But it seems logrotate considers every core file a unique file, since the filename includes the PID. This breaks the way logrotate normally rotates files. E.g.:
core_123
core_222
core_555
Instead of seeing these as 3 variations of the same file, it sees them as 3 unique files. So if I had rotate 50 in /etc/logrotate.d/core, it would be willing to rotate through 50 different core_123 files, 50 different core_222 files, etc., resulting in potentially hundreds or thousands of files. Instead, I want to ensure that logrotate manages a maximum of 50 core_* files in total.
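(For what it's worth, the cores get their names from a kernel.core_pattern along these lines, so every crash produces a brand-new filename:)

# e.g. in /etc/sysctl.conf (assumed setting; %p expands to the crashing process's PID)
kernel.core_pattern = /mycores/core_%p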
This is the exact logrotate file I was trying to make work:
/mycores/core_* {
    compress
    daily
    maxage 28
    missingok
    nocreate
    nodelaycompress
    olddir /mycores/old
    rotate 50
}
I suspect this isn't possible with logrotate, but I figured I'd post on Server Fault just in case I missed something in the documentation.
Best Answer
I combined logrotate with some shell magic to get the desired effect. Basically, logrotate is still in charge of moving and compressing the files, and then kicking off the script needed to delete the old files and ensure the directory doesn't hold more than 50 or so. Here is what I did:
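Something along these lines (treat it as a sketch rather than the exact file; the ls/tail/xargs pipeline is what enforces the global 50-file cap, and it assumes the compressed copies land in /mycores/old, which olddir and compress take care of):

/mycores/core_* {
    compress
    daily
    maxage 28
    missingok
    nocreate
    nodelaycompress
    olddir /mycores/old
    # rotate 1: each unique core_<pid> only ever needs one rotated copy;
    # the overall 50-file limit is enforced globally in postrotate below.
    rotate 1
    # sharedscripts: run postrotate once per run, not once per matched file.
    sharedscripts
    postrotate
        # Keep only the 50 newest compressed cores across ALL names:
        # ls -t sorts newest first, tail -n +51 prints line 51 onward,
        # and xargs -r skips rm entirely when there is nothing to delete.
        # (Core filenames have no whitespace, so this pipeline is safe here.)
        ls -1t /mycores/old/core_*.gz 2>/dev/null | tail -n +51 | xargs -r rm -f
    endscript
}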
I'm not the best bash scripter out there, so that postrotate script section might be heavier than necessary... :)
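To sanity-check it without waiting for cron, logrotate -d /etc/logrotate.d/core does a dry run: it prints what it would rotate and delete without touching the files or actually executing the postrotate script.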