Appending output to a file while truncating it to N lines

logging unix

I'm trying to find a simple way to append to a log while keeping the log trimmed to a reasonable size. I would prefer not to just append to the file forever and then need a separate cleanup script, but I can't wrap my head around how to accomplish this gracefully without using some second file as a temporary holder.

Things I've looked at for reference:

I've gone through the advanced scripting guide
http://tldp.org/LDP/abs/html/io-redirection.html

Combined output from two commands:
Bash – How to output two commands to file?

Ideally I would have something like (I know this doesn't work)

(tail -n 1000 foo.log; ./foo.sh) > foo.log

which would keep the last 1000 lines of my ongoing log and then append the new output from the current run of foo.sh. (It fails because the shell truncates foo.log when it sets up the redirection, before tail ever gets to read it.)

I can't think of a way to use the append redirect >> and limit the original file without wrapping the call to foo.sh in some other bar.sh:

tail -n 1000 foo.log > tmp.log
mv tmp.log foo.log
./foo.sh >> foo.log

This just seems kludgey.

Perhaps my answer is to have foo.sh not rely on stdout as a place to send log messages, but rather open the file directly.
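Something like this at the top of foo.sh, say (untested, and assuming a made-up log path; it still uses a temporary file, but at least it all lives in one script):

#!/bin/bash
logfile="/path/to/foo.log"   # hypothetical location

# make sure the log exists, then trim it to its last 1000 lines
touch "$logfile"
tail -n 1000 "$logfile" > "$logfile.tmp" && mv "$logfile.tmp" "$logfile"

# from here on, everything the script prints goes to the log
exec >> "$logfile" 2>&1

echo "$(date '+%F %T') starting run"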

FOLLOW-UP EDIT:

The prevailing opinion is that this is not recommended, an opinion that I appreciate. However, the server this is going on is outside of my control and not really going to be under a… vigilant administrator. It's a box where lots of different groups own parts of the machine, but no one is responsible for its total health. I could just let the log build forever, and it probably wouldn't matter in the fullness of time, but I'd like to do what I can to rein it in, since I know the final admin won't do anything. So using crontab to run logrotate is out for me. I'm just looking for something I can do within the limited scope of a single command.

Best Answer

If your version of sed has the --in-place switch you could use that. It still does a cp and mv, but it's behind the scenes, so the script would look a little less messy, depending on how you feel about sed syntax. Otherwise your tail-and-append method (you missed one in your edit) is the way I'd approach it.
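For instance, GNU sed (which is what provides -i/--in-place) can emulate tail in place with the classic last-N-lines one-liner; a sketch, using the file names from your question:

# trim foo.log in place to its last 1000 lines, then append the new run
sed -i -e :a -e '$q;N;1001,$D;ba' foo.log
./foo.sh >> foo.log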

Another approach using sed would be to keep your log upside down with the newest entry at the beginning.

sed -ni '1s/^/New Entry\n/;100q;p' logfile

That's all there is to it (other than getting "New Entry" to be what you want). That will keep a rolling 100-line log file in reverse chronological order. The advantage is that you don't have to know how many lines are in the file (tail takes care of that for you, but sed won't). This assumes, to some extent, that your log entries are single lines. If you're logging a significant amount of output from a script, I think you're back to tail and append.
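One caveat: sed runs no commands at all on empty input, so the log has to be seeded with at least one line before the first entry will stick. Something like this (again assuming GNU sed for -i):

# seed the file once, since sed does nothing on an empty file
[ -s logfile ] || echo "log start" > logfile
sed -ni '1s/^/New Entry\n/;100q;p' logfile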

Yet another option which might work well for detailed multi-line output would be to let the filesystem do the work for you. Each log entry would be a separate file.

#!/bin/bash
# *** UNTESTED ***
logdir="/path/to/your/log/dir"

# run the script and log its output to a file named with the date and time
script > "$logdir/$(date "+%Y%m%dT%H%M%S").log"

# maintain the logs
# keep the last 100 days of daily logfiles
# (you could keep the last 100 hours of hourly logfiles, for example,
# by using -mmin and doing some math)

# the count is intended to keep old entries from being deleted
# leaving you with nothing if there are no new entries
count=$(find "$logdir" -maxdepth 1 -type f | wc -l)
if (( count > 100 ))
then
    find "$logdir" -maxdepth 1 -type f -mtime +100 -delete
fi
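The hourly variant the comment alludes to just swaps -mtime for -mmin with the minutes worked out; equally untested, and 100 hours is 6000 minutes:

# keep the last 100 hours of hourly logfiles instead
count=$(find "$logdir" -maxdepth 1 -type f | wc -l)
if (( count > 100 ))
then
    find "$logdir" -maxdepth 1 -type f -mmin +6000 -delete
fi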