AWStats process dies while processing large logs

Tags: awstats, geolocation, perl

I have really large nginx log files, as much as 250MB each.

Once I have run about 10 days' worth of the month's log files through it, the next daily log causes awstats to die, like so:

/usr/lib/cgi-bin/awstats.pl -config=mydomain.com -update
....
Flush history file on disk (unique hosts reach flush limit of 20000)
Flush history file on disk (unique hosts reach flush limit of 20000)
Killed

I know it has something to do with the accumulated data, because when I delete the awstats-generated database file, any single day's log file runs through awstats.pl just fine.

Best Answer

It looks to me like you have hit a hard resource limit while processing the logs. There's a good Super User page on ulimit you can take a look at. TL;DR: check your current limits with 'ulimit -a', then watch the awstats process with something like 'top' on its next run. You will most likely see it hit the memory or stack size limit.
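If the soft limits turn out to be lower than the hard ones, one thing worth trying is raising them for the session before re-running the update. A minimal sketch (the limit choices are assumptions; the awstats path is copied from the question and left commented out):

```shell
#!/bin/bash
# Raise the soft virtual-memory and stack limits up to their hard maxima
# for this shell session only (no permanent change to /etc/security/limits.conf).
ulimit -S -v "$(ulimit -H -v)"   # address space
ulimit -S -s "$(ulimit -H -s)"   # stack size

# Confirm the new soft limits before the run.
ulimit -a

# Then re-run the update in the same session:
# /usr/lib/cgi-bin/awstats.pl -config=mydomain.com -update
```

If the hard limits themselves are the ceiling, they would have to be raised system-wide (e.g. via limits.conf or the service's unit file) rather than with ulimit alone.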