Ubuntu – Kernel attempts to kill MySQL with SIGKILL

Tags: kernel, MySQL, Ubuntu

I'm running an Ubuntu server for MySQL.

Server info

  • Ubuntu 12.10
  • MySQL installed via apt
  • RAM: 512 MB
  • innodb_buffer_pool_size: 300M (see the my.cnf excerpt below)
  • No other memory-intensive applications are running on this box.
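
For reference, this is set in the MySQL config, which the apt package puts at /etc/mysql/my.cnf by default:

    [mysqld]
    innodb_buffer_pool_size = 300M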

Problem

Every morning, at approximately 6:40 am, something happens that causes a noticeable change in memory usage:

https://dl.dropbox.com/u/12520837/mem.s.png

At the same time, a systematic "kill" of running processes seems to occur, causing MySQL to restart.

Apr 10 06:43:40 mysql-01 kernel: [1866472.511966] select 1 (init), adj 0, size 41, to kill
Apr 10 06:43:40 mysql-01 kernel: [1866472.511973] select 385 (dbus-daemon), adj 0, size 44, to kill
Apr 10 06:43:40 mysql-01 kernel: [1866472.511975] select 389 (rsyslogd), adj 0, size 124, to kill
Apr 10 06:43:40 mysql-01 kernel: [1866472.511982] select 4578 (snmpd), adj 0, size 160, to kill
Apr 10 06:43:40 mysql-01 kernel: [1866472.514157] select 1 (init), adj 0, size 41, to kill
Apr 10 06:43:40 mysql-01 kernel: [1866472.514164] select 385 (dbus-daemon), adj 0, size 44, to kill
Apr 10 06:43:40 mysql-01 kernel: [1866472.514166] select 389 (rsyslogd), adj 0, size 124, to kill
Apr 10 06:43:40 mysql-01 kernel: [1866472.514171] select 4578 (snmpd), adj 0, size 160, to kill

Apr 10 06:43:44 mysql-01 /etc/mysql/debian-start[21807]: Upgrading MySQL tables if necessary.
Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21810]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21810]: Looking for 'mysql' as: /usr/bin/mysql
Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21810]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21810]: This installation of MySQL is already upgraded to 5.5.29, use --force if you still need to run mysql_upgrade
Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21821]: Checking for insecure root accounts.
Apr 10 06:43:45 mysql-01 /etc/mysql/debian-start[21826]: Triggering myisam-recover for all MyISAM tables
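
These entries are from syslog; similar lines can be pulled out with something like:

    grep -E 'to kill|debian-start' /var/log/syslog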

Any help diagnosing this would be much appreciated!

Best Answer

The kernel is detecting that it is running out of memory, possibly because some process is running wild.

Usually the OOM killer will try to identify that process and kill it. The reason it is killing MySQL is that MySQL is probably the process currently using the most RAM, which makes it the most likely candidate for the runaway process.
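
To confirm it really is the OOM killer, and to see how likely each process is to be picked, you can check the kernel log and each process's badness score (the paths below are the Ubuntu defaults):

    # Recent OOM killer activity in the kernel log
    grep -i -E 'out of memory|oom' /var/log/kern.log
    # The kernel's current badness score for mysqld (higher = more likely to be killed)
    cat /proc/$(pidof mysqld)/oom_score
    # Optionally shield mysqld while you investigate (this just moves the problem elsewhere)
    echo -1000 > /proc/$(pidof mysqld)/oom_score_adj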

However, snmpd also looks like a possible culprit: it is taking 160 MB, which is a lot. snmpd is a daemon responsible for listening for SNMP traffic, and it seems odd for it to use that much memory.
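
A quick sanity check on what snmpd is actually holding (RSS and VSZ are reported in kilobytes):

    ps -o pid,rss,vsz,cmd -C snmpd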

Since this is happening at the same time each day, check your daily cron jobs, check your snmpd log file, and check for incoming connections (from sshd) around that time.

All of these log files should be somewhere under /var/log/xxx.
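
For example (note that on Ubuntu the daily cron jobs are kicked off from /etc/crontab at 06:25 by default, which lines up suspiciously well with your 06:40 incident):

    # What runs daily, and whether cron fired around the incident
    ls /etc/cron.daily/
    grep -i cron /var/log/syslog | grep 'Apr 10 06:'
    # Incoming ssh logins around that time
    grep sshd /var/log/auth.log | grep 'Apr 10 06:'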

If that turns up nothing unexpected, look in the log files of the other processes mentioned in the kernel log (mysql and rsyslogd).

Also, from your graph you only have 66 MB free on average, and you are running into memory pressure far more often than just at 6:40: almost 20% of the time you seem to have less than a few MB free, and never more than 100 MB free (if I am reading the graph correctly and the magenta bar is free memory?).
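
Given how little headroom that leaves, it may also be worth shrinking the InnoDB buffer pool so the box has more breathing room (the 200M below is only an illustration, not a tuned value):

    # /etc/mysql/my.cnf
    [mysqld]
    innodb_buffer_pool_size = 200M

Then restart MySQL with service mysql restart for the change to take effect.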