This is caused by a livelock when ntpd calls adjtimex(2) to tell the kernel to insert a leap second. See lkml posting http://lkml.indiana.edu/hypermail/linux/kernel/1203.1/04598.html
Red Hat should be updating their KB article as well. https://access.redhat.com/knowledge/articles/15145
UPDATE: Red Hat has a second KB article just for this issue here: https://access.redhat.com/knowledge/solutions/154713 - the previous article is for an earlier, unrelated problem
The work-around is to just turn off ntpd. If ntpd already issued the adjtimex(2) call, you may need to disable ntpd and reboot to be 100% safe.
This affects RHEL 6 and other distros running newer kernels (newer than approx 2.6.26), but not RHEL 5.
The reason this is occurring before the leap second is actually scheduled to occur is that ntpd lets the kernel handle the leap second at midnight, but needs to alert the kernel to insert the leap second before midnight. ntpd therefore calls adjtimex(2) sometime during the day of the leap second, at which point this bug is triggered.
If you have adjtimex(8) installed, you can use this script to determine if flag 16 is set. Flag 16 is "inserting leap second":
adjtimex -p | perl -p -e 'undef $_, next unless m/status: (\d+)/; (16 & $1) && print "leap second flag is set:\n"'
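If perl isn't handy, the same bit test can be done in plain shell. This is a sketch; it assumes adjtimex -p prints a line of the form "status: N":

```shell
# Check whether STA_INS (bit 16) is set in the kernel status word.
# Reads "adjtimex -p" style output on stdin; exits 0 if the flag is set.
check_leap_flag() {
  status=$(awk '/status:/ { print $2; exit }')
  [ $(( ${status:-0} & 16 )) -ne 0 ]
}

# usage: adjtimex -p | check_leap_flag && echo "leap second flag is set"
```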
UPDATE:
Red Hat has updated their KB article to note: "RHEL 6 customers may be affected by a known issue that causes NMI Watchdog to detect a hang when receiving the NTP leapsecond announcement. This issue is being addressed in a timely manner. If your systems received the leapsecond announcement and did not experience this issue, then they are no longer affected."
UPDATE: The above language was removed from the Red Hat article; and a second KB solution was added detailing the adjtimex(2) crash issue: https://access.redhat.com/knowledge/solutions/154713
However, the code change in the LKML post by IBM Engineer John Stultz notes there may also be a deadlock when the leap second is actually applied, so you may want to disable the leap second by rebooting or using adjtimex(8) after disabling ntpd.
FINAL UPDATE:
Well, I'm no kernel dev, but I reviewed John Stultz's patch again here: https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=6b43ae8a619d17c4935c3320d2ef9e92bdeed05d
If I'm reading it right this time, I was wrong about there being another deadlock when the leap second is applied. That seems to be Red Hat's opinion as well, based on their KB entry. However, if you have disabled ntpd, keep it disabled for another 10 minutes, so that you don't hit the deadlock when ntpd calls adjtimex(2).
We'll find out if there are any more bugs soon :)
POST-LEAP SECOND UPDATE:
I spent the last few hours reading through the ntpd and pre-patch (buggy) kernel code, and while I may be very wrong here, I'll attempt to explain what I think was going on:
First, ntpd calls adjtimex(2) all the time. It does this as part of its "clock loop filter", defined in local_clock in ntp_loopfilter.c. You can see that code here: http://www.opensource.apple.com/source/ntp/ntp-70/ntpd/ntp_loopfilter.c (from ntp version 4.2.6).
The clock loop filter runs quite often -- it runs every time ntpd polls its upstream servers, which by default is every 17 minutes or more. The relevant bit of the clock loop filter is:
if (sys_leap == LEAP_ADDSECOND)
ntv.status |= STA_INS;
And then:
ntp_adjtime(&ntv)
In other words, on days when there's a leap second, ntpd sets the "STA_INS" flag and calls adjtimex(2) (via its portability-wrapper).
That system call makes its way to the kernel. Here's the relevant kernel code: https://github.com/mirrors/linux/blob/a078c6d0e6288fad6d83fb6d5edd91ddb7b6ab33/kernel/time/ntp.c
The kernel codepath is roughly this:
- line 663 - start of do_adjtimex routine.
- line 691 - cancel any existing leap-second timer.
- line 709 - grab the ntp_lock spinlock (this lock is involved in the possible livelock crash)
- line 724 - call process_adjtimex_modes.
- line 616 - call process_adj_status.
- line 590 - set time_status global variable, based on flags set in adjtimex(2) call
- line 592 - check time_state global variable. in most cases, call ntp_start_leap_timer.
- line 554 - check time_status global variable. STA_INS will be set, so set time_state to TIME_INS and call hrtimer_start (another kernel function) to start the leap second timer. in the process of creating a timer, this code grabs the xtime_lock. if this happens while another CPU has already grabbed the xtime_lock and the ntp_lock, then the kernel livelocks. this is why John Stultz wrote the patch to avoid using hrtimers. This is what was causing everyone trouble today.
- line 598 - if ntp_start_leap_timer did not actually start a leap timer, set time_state to TIME_OK
- line 751 - assuming the kernel does not livelock, the stack is unwound and the ntp_lock spinlock is released.
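Schematically, the livelock from line 554 is a lock-ordering cycle between two CPUs (this is my reading of the pre-patch code, so take it as a sketch):

```text
CPU 0 (timer tick path)              CPU 1 (ntpd calls adjtimex(2))
-----------------------              ------------------------------
holds xtime_lock                     do_adjtimex: takes ntp_lock
needs ntp_lock    <-- spins          ntp_start_leap_timer -> hrtimer_start
                                     needs xtime_lock       <-- spins

Neither CPU can make progress; the NMI watchdog then detects the hang.
```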
There are a couple interesting things here.
First, line 691 cancels the existing timer every time adjtimex(2) is called. Then, line 554 re-creates that timer. This means the buggy code was invoked each time ntpd ran its clock loop filter.
Therefore I believe Red Hat was wrong when they said that once ntpd had set the leap-second flag, the system would not crash. I believe each system running ntpd had the potential to livelock every 17 minutes (or more) for the 24-hour period before the leap-second. I believe this may also explain why so many systems crashed; a one-time chance of crashing would be much less likely to hit as compared to 3 chances an hour.
UPDATE: In Red Hat's KB solution at https://access.redhat.com/knowledge/solutions/154713 , Red Hat engineers did come to the same conclusion (that running ntpd would continuously hit the buggy code). And indeed they did so several hours before I did. This solution wasn't linked to the main article at https://access.redhat.com/knowledge/articles/15145 , so I didn't notice it until now.
Second, this explains why loaded systems were more likely to crash. Loaded systems will be handling more interrupts, causing the "do_tick" kernel function to be called more often, giving more of a chance for this code to run and grab the ntp_lock while the timer was being created.
Third, is there a chance of the system crashing when the leap-second actually occurs? I don't know for sure, but possibly yes, because the timer that fires and actually executes the leap-second adjustment (ntp_leap_second, on line 388) also grabs the ntp_lock spinlock, and has a call to hrtimer_add_expires_ns. I don't know if that call might also be able to cause a livelock, but it doesn't seem impossible.
Finally, what causes the leap-second flag to be cleared after the leap-second has run? The answer is that at some point after midnight, ntpd stops setting the leap-second flag when it calls adjtimex(2). Since the flag is no longer set, the check on line 554 will not be true, no timer will be created, and line 598 will reset the time_state global variable to TIME_OK. Until that next adjtimex(2) call happens, the old status remains, which explains why checking with adjtimex(8) just after the leap second would still show the leap-second flag set.
In short, the best advice for today seems to be the first I gave after all: disable ntpd, and disable the leap-second flag.
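Spelled out as commands, a sketch of that work-around (run as root; "service" is the RHEL 6 convention, adjust for your distro; re-setting the clock to its current value was the widely circulated way to make the kernel reset its leap-second state via ntp_clear(), but verify against your vendor's notes):

```shell
# Sketch of the work-around: stop ntpd so it cannot re-arm the flag,
# then set the clock (even to its current value) to clear the pending
# leap-second state in the kernel.
disable_ntpd_and_clear_leap() {
  service ntpd stop
  date -s "$(date)"      # re-set the clock to reset STA_INS/TIME_INS
}
```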
And some final thoughts:
- none of the Linux vendors noticed John Stultz's patch and applied it to their kernels :(
- why didn't John Stultz alert some of the vendors this was needed? Perhaps the chance of the livelock seemed low enough that making noise didn't seem warranted.
- I've heard reports of Java processes locking up or spinning when the leap-second was applied. Perhaps we should follow Google's lead and rethink how we apply leap-seconds to our systems: http://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html
07/02 Update from John Stultz:
https://lkml.org/lkml/2012/7/1/203
The post contained a step-by-step walk-through of why the leap second caused the futex timers to expire prematurely and continuously, spiking the CPU load.
1. Excluding /var/run
As you already noticed, excluding /var/run during a complete restore of a CentOS 6 system causes problems, because it also excludes directories created by installed packages. Excluding /var/lock can cause similar problems, because some packages create subdirectories there too.
(There may be no such issues on more recent Linux distributions which use systemd: on such distributions /var/lock and /var/run (really /run) may be placed on tmpfs, and any required subdirectories are created during every boot. However, CentOS 6 is much older and has no support for automatic creation of subdirectories in /var/lock or /var/run.)
However, excluding /var/run and /var/lock is not actually needed for a proper restore, because the /etc/rc.d/rc.sysinit script on CentOS 6 includes the following command:
find /var/lock /var/run ! -type d -exec rm -f {} \;
This command will remove all stale lock or pid files (or any other non-directory files, such as sockets and symlinks) during the system boot. Therefore you should remove /var/lock and /var/run from the restore exclusion list.
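To preview what that boot-time cleanup would remove, the same find expression can be run without the -exec (a sketch):

```shell
# List, without deleting, the non-directory entries that rc.sysinit
# would remove from the given directories on the next boot.
stale_runtime_files() {
  find "$@" ! -type d -print
}

# usage: stale_runtime_files /var/lock /var/run
```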
2. Location of network configuration files
You already exclude /etc/sysconfig/network* when restoring the backup; this should match both the /etc/sysconfig/network file (global networking configuration) and the /etc/sysconfig/network-scripts directory (per-interface ifcfg-* configuration files). However, these files are used only by the old-style network configuration scripts included in the initscripts package, and CentOS 6 has another network configuration system, NetworkManager, whose configuration is stored in /etc/NetworkManager. Try also excluding that directory when you restore the backup.
3. The issue with symbolic links replaced with files
If you see that symbolic links were replaced with plain files after the restore, then either your backup/restore program was not configured correctly, or (if it has no option for saving and restoring actual symlinks) the program is not suitable for Linux system backup/restore at all. You can get away with a program that does not support symlinks only when it is used to back up and restore specific data which definitely will not contain symlinks. Note that you may find symlinks in places where you did not expect them; e.g., symlinks are sometimes used in MySQL database directories (to store some parts of the data on a different device), so relying on a “no symlinks” assumption may be dangerous.
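One way to verify that a restore preserved symlinks is to compare symlink inventories before and after. A sketch (the /restore mount point is an example; -printf is GNU find):

```shell
# Print "path -> target" for every symlink under a directory.
list_symlinks() {
  find "$1" -type l -printf '%p -> %l\n' | sort
}

# usage (compare the live tree against a restore mounted at /restore):
#   list_symlinks /etc > /tmp/links.orig
#   list_symlinks /restore/etc | sed 's|^/restore||' > /tmp/links.restored
#   diff -u /tmp/links.orig /tmp/links.restored
```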
4. MySQL backup
If your backup program simply copies files from a running server, your backup is not really “crash consistent”, because different files (and even different blocks of the same file) are copied at different times, so you will not actually get a consistent snapshot of the database in your backup. (This applies to any kind of database, not just MySQL.)
There are several ways to back up MySQL databases using just a file-level backup:
- Use mysqldump to create a SQL dump before starting the file-level backup, and back up the dump file instead of the database directory. This is the most portable backup format, but both dumping and restoring may be slow.
- Stop the MySQL server before starting the backup, make a file-level backup, then start the MySQL server again. To restore, just restore all files on the new server, then start the server normally. This kind of backup is fast, but requires significant downtime during the backup.
- To reduce the MySQL server downtime required by the previous method, you can create a filesystem snapshot after stopping the server, start the MySQL server again, then mount the snapshot, perform a file-level backup, and delete the snapshot. You need to have the filesystem on an LVM volume with some free space in the volume group for the snapshot.
- To reduce the downtime even further, you can use FLUSH TABLES WITH READ LOCK before taking the snapshot instead of stopping the server, as described here; in this case the snapshot will contain MyISAM tables in a consistent state, and InnoDB tables in a crash-consistent state (InnoDB recovery will be needed after a file-level restore).
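A sketch of that last method. All names here are examples (volume group "vg0", logical volume "mysql", mount and backup paths); the mysql client's built-in system command runs lvcreate from inside the same session, so the read lock is still held when the snapshot is taken:

```shell
# Hold FLUSH TABLES WITH READ LOCK while an LVM snapshot is created,
# then back up from the snapshot and remove it. All device names,
# sizes, and paths are placeholders -- adjust for your system.
snapshot_mysql_backup() {
  mysql -u root -e 'FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;'
  mkdir -p /mnt/mysql-snap
  mount -o ro /dev/vg0/mysql-snap /mnt/mysql-snap
  rsync -a /mnt/mysql-snap/ /backups/mysql/
  umount /mnt/mysql-snap
  lvremove -f /dev/vg0/mysql-snap
}
```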
Read this documentation for more information about MySQL backup.
Best Answer
Here's what I've done (this assumes a single disk, at /dev/sda)
use dd to backup the MBR and partition table: "dd bs=512 count=1 if=/dev/sda of=/backups/sda.layout"
use rsync to copy the entire thing with something like: "rsync -axvPH --numeric-ids ..."
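Spelled out, the backup side might look like this (a sketch; the destination paths are examples I chose, and since -x stays on one filesystem you need one rsync per mounted filesystem):

```shell
# Save the MBR + partition table, then copy each mounted filesystem.
# /dev/sda and /backups/... are example names.
backup_disk() {
  dd bs=512 count=1 if=/dev/sda of=/backups/sda.layout
  rsync -axvPH --numeric-ids /      /backups/root/
  rsync -axvPH --numeric-ids /boot/ /backups/root/boot/
}
```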
On restore I do this:
boot the target machine with sysrescuecd; I will typically have the 'sda.layout' file on a USB stick.
restore the MBR/partition table with dd: "dd bs=512 count=1 if=/path/to/sda.layout of=/dev/sda"
Use partprobe (thanks commenter Mark) to get the kernel to re-read the partition table.
Mount all the various partitions under /restore/. I make the mount points identical under /restore, so if I have /boot and /var on my source, I end up with /restore/boot, /restore/var, etc.
use rsync to restore the entire thing.
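Put together, the restore might look like this sketch. Device names, the two-partition layout, and paths are examples; note that after restoring the partition table you must recreate the filesystems before mounting, since the dd image holds only the MBR and partition table, not the filesystems themselves:

```shell
# Restore sketch for an example /boot (sda1) + / (sda2) layout.
restore_disk() {
  dd bs=512 count=1 if=/mnt/usb/sda.layout of=/dev/sda
  partprobe /dev/sda          # make the kernel re-read the table
  mkfs.ext4 /dev/sda1         # filesystems must be recreated --
  mkfs.ext4 /dev/sda2         # dd restored only the partition table
  mount /dev/sda2 /restore
  mkdir -p /restore/boot
  mount /dev/sda1 /restore/boot
  rsync -axvPH --numeric-ids /backups/root/ /restore/
}
```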