How to “batch” output of “tail -F” or output all data in a stream every X seconds

logging, pipe, tail, unix

I am trying to monitor a log file in real-time.
Say I issue a command such as:

tail -F mysystem.log|grep -i error|mail …

The idea is that this monitors my log file and, every time an error appears, emails me the line containing it.

However, it is possible that something blows up and I get hundreds of errors a second. I don't want to kill my mail server as well by sending hundreds of emails a second. So I would like some sort of delay operator:

tail -F mysystem.log|grep -i error|window X|mail …

This would hold all my error messages for X seconds, then release them together, so I'd get at most one email every X seconds.
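As a starting point, I can get roughly this behaviour with standard tools, though I suspect there's something cleaner ("window" is my made-up name, and this relies on timeout(1) from GNU coreutils):

```shell
#!/bin/sh
# Sketch of the hypothetical "window" filter: buffer stdin and release
# everything accumulated every $1 seconds as one batch.
window() {
  interval=$1
  while :; do
    # timeout kills cat after $interval seconds, releasing whatever
    # arrived on stdin during that window as a single batch.
    batch=$(timeout "$interval" cat); status=$?
    [ -n "$batch" ] && printf '%s\n' "$batch"
    # Exit status 124 means timeout fired; anything else means cat
    # saw EOF, i.e. the upstream pipe closed, so stop looping.
    [ "$status" -eq 124 ] || break
  done
}
```

To actually get one email per batch, the printf would have to be replaced with something that invokes mail once per flush (a single `| mail` at the end of the pipeline only sends when the whole stream closes).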

As a bonus, I'd love to be able to do the following:

tail -F mysystem.log|grep -i error|window X Y|mail …

Same as the previous command, but if the number of lines in the window exceeds Y, send an email containing those Y messages immediately (the window then clears and resets).
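The two-argument variant I have in mind could be sketched like this (bash-only, since it leans on `read -t` and arrays; note the timer restarts on every line, so a continuous trickle postpones the timed flush, which is close enough for a sketch):

```shell
#!/bin/bash
# Hypothetical "window X Y": flush buffered lines every $1 seconds,
# or early once $2 lines have piled up. Not a standard utility.
window() {
  local interval=$1 maxlines=${2:-0} line status
  local buf=()
  while :; do
    if IFS= read -r -t "$interval" line; then
      buf+=("$line")
      # Early flush: the window clears as soon as it holds $maxlines lines.
      if (( maxlines > 0 && ${#buf[@]} >= maxlines )); then
        printf '%s\n' "${buf[@]}"
        buf=()
      fi
    else
      status=$?
      # Flush whatever the window holds, on timeout or at EOF.
      (( ${#buf[@]} > 0 )) && printf '%s\n' "${buf[@]}"
      buf=()
      # read returns >128 on timeout; anything else here is EOF.
      (( status > 128 )) || break
    fi
  done
}
```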

How can I do this without writing a Perl program? I'd like to stick with what is already built into UNIX.

Best Answer

This is solved quite nicely with rsyslog and the ommail module (plus the imfile input module if your log is not syslog compatible). I have multiple devices from different vendors in a secure data center all sending their syslog events to my central rsyslog server. Using the ommail module, I set up email alerts on various conditions; the built-in interval threshold ensures that I don't get flooded with messages (I set it to send at most one alert per type every 60 minutes).
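A minimal configuration along these lines might look like the following (legacy rsyslog directive syntax; the SMTP server, addresses, and the `error` match are placeholders, and the 3600-second interval mirrors the one-alert-per-hour threshold above):

```
# Load the mail output module
$ModLoad ommail

# Mail delivery settings (placeholders)
$ActionMailSMTPServer localhost
$ActionMailFrom rsyslog@example.com
$ActionMailTo admin@example.com

# Subject and body templates
$template mailSubject,"rsyslog alert on %hostname%"
$template mailBody,"%timegenerated% %hostname% %syslogtag%%msg%"
$ActionMailSubject mailSubject

# Interval threshold: at most one mail from this action per hour
$ActionExecOnlyOnceEveryInterval 3600

# Fire on any message containing "error"
if $msg contains 'error' then :ommail:;mailBody
```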

Works well and is easy to set up; just make sure you get the latest version of rsyslog. Ubuntu 10.04.2 LTS still ships with 4.2, which was released almost two years ago and is missing some bug fixes.
