Simply restoring the HDD will not be enough; you will probably want your boot record too, which I doubt exists in your backup (am I wrong? It's better for you if I am!)...
Let's assume you get the server to the point where it can boot (I personally prefer creating an additional partition mounted at /boot containing a kernel and an initrd with BusyBox or something similar, to allow basic maintenance tasks). You can also use a live CD of your Linux distribution.
Mount your future root partition somewhere and restore your backup.
tar was created for tapes, so it supports appending to an archive file of the same name. If you used that method, just run tar -xvpf backup.tar -C /mnt; if not, you'll need to restore last Sunday's full backup and then apply the differential parts up to the needed day.
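As a sketch of that "full plus differentials" restore, assuming GNU tar archives made with --listed-incremental (the paths and archive names here are made up for illustration):

```shell
# restore_incremental DEST FULL [INC...] -- unpack a GNU tar full backup,
# then apply each daily incremental in order. Archives are assumed to have
# been created with --listed-incremental snapshots.
restore_incremental() {
    dest=$1; shift
    for archive in "$@"; do
        # -g /dev/null tells GNU tar to treat the archive as incremental
        # on extraction without consulting a snapshot file
        tar -xpf "$archive" -g /dev/null -C "$dest"
    done
}
# e.g. restore_incremental /mnt /backups/sunday-full.tar /backups/mon.tar
```

Wrapping it in a function is just to keep the sketch safe to paste; the real work is the tar -xpf ... -g /dev/null calls run in order, oldest first.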
You should keep in mind that there is a lot of stuff you should not back up, things like: /proc, /dev, /sys, /media, /mnt (and probably some more, depending on your needs).
You'll need to take care of this when creating the backup, or it may become a severe pain during the restore process!
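One way to handle that at backup time (a sketch, GNU tar assumed; the exclude list below is only an example, extend it for your own system):

```shell
# backup_root ARCHIVE ROOT -- tar up ROOT while skipping pseudo-filesystems
# and mountpoints that should not be in the backup. The exclude list is an
# example only; add anything else your setup needs excluded.
backup_root() {
    tar -cpf "$1" \
        --exclude=./proc --exclude=./sys --exclude=./dev \
        --exclude=./media --exclude=./mnt \
        -C "$2" .
}
# e.g. backup_root /backups/backup.tar /
```

Excluding these at creation time means the restore is a plain extract, with no risk of clobbering a live /proc or /dev.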
There are many points you can easily miss with this whole-server backup method:
- the restore commands may vary a lot depending on the exact commands you used to back up your data
- the boot record
- making sure the kernel image and modules are intact and match each other after the restore
- excluding unwanted stuff at backup time, not at restore time
- etc., etc...
Some good notes on this exact method can be found on the Ubuntu Wiki: BackupYourSystem/TAR. Look for the Restoring section.
BTW:
- Have you ever tried to restore one of your backups?
- Have you considered changing your backup strategy?
- Have you considered separating the data you need to back up from the system configuration? There are some good tools today to manage system configuration so it can be restored with zero pain, like Puppet or Chef; then the only thing you need to care about is the real data.
P.S.: I recommend reading a couple of Jeff Atwood's posts about backups: http://www.codinghorror.com/blog/2008/01/whats-your-backup-strategy.html and http://www.codinghorror.com/blog/2009/12/international-backup-awareness-day.html
This is caused by a livelock when ntpd calls adjtimex(2) to tell the kernel to insert a leap second. See the LKML posting: http://lkml.indiana.edu/hypermail/linux/kernel/1203.1/04598.html
Red Hat should be updating their KB article as well: https://access.redhat.com/knowledge/articles/15145
UPDATE: Red Hat has a second KB article just for this issue: https://access.redhat.com/knowledge/solutions/154713 (the previous article is for an earlier, unrelated problem).
The workaround is to just turn off ntpd. If ntpd has already issued the adjtimex(2) call, you may need to disable ntpd and reboot to be 100% safe.
This affects RHEL 6 and other distros running newer kernels (newer than approx 2.6.26), but not RHEL 5.
The reason this is occurring before the leap second is actually scheduled to occur is that ntpd lets the kernel handle the leap second at midnight, but needs to alert the kernel to insert the leap second before midnight. ntpd therefore calls adjtimex(2) sometime during the day of the leap second, at which point this bug is triggered.
If you have adjtimex(8) installed, you can use this script to determine if flag 16 is set. Flag 16 is "inserting leap second":
adjtimex -p | perl -p -e 'undef $_, next unless m/status: (\d+)/; (16 & $1) && print "leap second flag is set:\n"'
UPDATE:
Red Hat has updated their KB article to note: "RHEL 6 customers may be affected by a known issue that causes NMI Watchdog to detect a hang when receiving the NTP leapsecond announcement. This issue is being addressed in a timely manner. If your systems received the leapsecond announcement and did not experience this issue, then they are no longer affected."
UPDATE: The above language was removed from the Red Hat article; and a second KB solution was added detailing the adjtimex(2) crash issue: https://access.redhat.com/knowledge/solutions/154713
However, the code change in the LKML post by IBM Engineer John Stultz notes there may also be a deadlock when the leap second is actually applied, so you may want to disable the leap second by rebooting or using adjtimex(8) after disabling ntpd.
FINAL UPDATE:
Well, I'm no kernel dev, but I reviewed John Stultz's patch again here: https://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=6b43ae8a619d17c4935c3320d2ef9e92bdeed05d
If I'm reading it right this time, I was wrong about there being another deadlock when the leap second is applied. That seems to be Red Hat's opinion as well, based on their KB entry. However, if you have disabled ntpd, keep it disabled for another 10 minutes, so that you don't hit the deadlock when ntpd calls adjtimex(2).
We'll find out if there are any more bugs soon :)
POST-LEAP SECOND UPDATE:
I spent the last few hours reading through the ntpd and pre-patch (buggy) kernel code, and while I may be very wrong here, I'll attempt to explain what I think was going on:
First, ntpd calls adjtimex(2) all the time. It does this as part of its "clock loop filter", defined in local_clock in ntp_loopfilter.c. You can see that code here: http://www.opensource.apple.com/source/ntp/ntp-70/ntpd/ntp_loopfilter.c (from ntp version 4.2.6).
The clock loop filter runs quite often -- it runs every time ntpd polls its upstream servers, which by default is every 17 minutes or more. The relevant bit of the clock loop filter is:
if (sys_leap == LEAP_ADDSECOND)
ntv.status |= STA_INS;
And then:
ntp_adjtime(&ntv)
In other words, on days when there's a leap second, ntpd sets the "STA_INS" flag and calls adjtimex(2) (via its portability-wrapper).
That system call makes its way to the kernel. Here's the relevant kernel code: https://github.com/mirrors/linux/blob/a078c6d0e6288fad6d83fb6d5edd91ddb7b6ab33/kernel/time/ntp.c
The kernel codepath is roughly this:
- line 663 - start of do_adjtimex routine.
- line 691 - cancel any existing leap-second timer.
- line 709 - grab the ntp_lock spinlock (this lock is involved in the possible livelock crash)
- line 724 - call process_adjtimex_modes.
- line 616 - call process_adj_status.
- line 590 - set time_status global variable, based on flags set in adjtimex(2) call
- line 592 - check time_state global variable. in most cases, call ntp_start_leap_timer.
- line 554 - check time_status global variable. STA_INS will be set, so set time_state to TIME_INS and call hrtimer_start (another kernel function) to start the leap second timer. in the process of creating a timer, this code grabs the xtime_lock. if this happens while another CPU has already grabbed the xtime_lock and the ntp_lock, then the kernel livelocks. this is why John Stultz wrote the patch to avoid using hrtimers. This is what was causing everyone trouble today.
- line 598 - if ntp_start_leap_timer did not actually start a leap timer, set time_state to TIME_OK
- line 751 - assuming the kernel does not livelock, the stack is unwound and the ntp_lock spinlock is released.
There are a couple interesting things here.
First, line 691 cancels the existing timer every time adjtimex(2) is called. Then, line 554 re-creates that timer. This means that each time ntpd ran its clock loop filter, the buggy code was invoked.
Therefore I believe Red Hat was wrong when they said that once ntpd had set the leap-second flag, the system would not crash. I believe each system running ntpd had the potential to livelock every 17 minutes (or more) for the 24-hour period before the leap-second. I believe this may also explain why so many systems crashed; a one-time chance of crashing would be much less likely to hit as compared to 3 chances an hour.
UPDATE: In Red Hat's KB solution at https://access.redhat.com/knowledge/solutions/154713 , Red Hat engineers did come to the same conclusion (that running ntpd would continuously hit the buggy code). And indeed they did so several hours before I did. This solution wasn't linked to the main article at https://access.redhat.com/knowledge/articles/15145 , so I didn't notice it until now.
Second, this explains why loaded systems were more likely to crash. Loaded systems will be handling more interrupts, causing the "do_tick" kernel function to be called more often, giving more of a chance for this code to run and grab the ntp_lock while the timer was being created.
Third, is there a chance of the system crashing when the leap-second actually occurs? I don't know for sure, but possibly yes, because the timer that fires and actually executes the leap-second adjustment (ntp_leap_second, on line 388) also grabs the ntp_lock spinlock, and has a call to hrtimer_add_expires_ns. I don't know if that call might also be able to cause a livelock, but it doesn't seem impossible.
Finally, what causes the leap-second flag to be disabled after the leap-second has run? The answer there is ntpd stops setting the leap-second flag at some point after midnight when it calls adjtimex(2). Since the flag isn't set, the check on line 554 will not be true, and no timer will be created, and line 598 will reset the time_state global variable to TIME_OK. This explains why if you checked the flag with adjtimex(8) just after the leap second, you would still see the leap-second flag set.
In short, the best advice for today seems to be the first I gave after all: disable ntpd, and disable the leap-second flag.
And some final thoughts:
- none of the Linux vendors noticed John Stultz's patch and applied it to their kernels :(
- why didn't John Stultz alert some of the vendors this was needed? Perhaps the chance of the livelock seemed low enough that making noise wasn't warranted.
- I've heard reports of Java processes locking up or spinning when the leap-second was applied. Perhaps we should follow Google's lead and rethink how we apply leap-seconds to our systems: http://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html
06/02 Update from John Stultz:
https://lkml.org/lkml/2012/7/1/203
The post contained a step-by-step walk-through of why the leap second caused the futex timers to expire prematurely and continuously, spiking the CPU load.
In terms of hardware compatibility, if your server does not require proprietary drivers you should be fine. New CPU cores will be detected. One way to find out is to run a Debian live CD on your server and see what is detected and what is not. With regards to migration, you have a few options:
- Set up your new server from scratch.
This would probably be the most time-consuming, but it's a good way to revise your setup for the new environment, e.g. remove unnecessary packages (GUI or other desktop packages etc.) and harden security.
- RSYNC / Copy
Cumbersome, but requires the least downtime if you need to keep your existing server up and running and don't want to set up from scratch.
Replicate the partition layout to mirror your existing system:
sfdisk -d /dev/sda | sfdisk /dev/sdb
(sda is your existing drive, sdb is your new drive)
Create filesystems / swap on your new drive's partitions, then mount root, boot and any other partitions from the new drive on your existing system.
Copy the contents of your existing system to the new drive; /mnt/NEW is the mountpoint of the new drive's root (/). Repeat for /boot and any other partitions.
Set up GRUB on your new drive.
Run 'grub' and install it to the new disk.
You might have to modify the grub menu config file (menu.lst, or grub.cfg on GRUB 2) to update the root partition if a LABEL is used.
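The grub shell commands were omitted above; for legacy GRUB, assuming the new drive is the second disk (hd1) and /boot lives on its first partition, the sequence would be something like:

```
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
```

(hd1,0) is GRUB's numbering for the first partition of the second drive; if you're unsure which disk is which, `find /boot/grub/stage1` inside the grub shell will list the candidates.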
Commands will vary depending on your partition layout, or if you have RAID/LVM etc. This should leave you with a ready-to-boot system. If there were changes on your current system while you were doing the rsync and you want them to appear on the new system, shut down with both drives (current and new) plugged in, boot into a live CD (SystemRescueCD is great), mount the root partitions from both, and re-run the rsync commands. This will only copy the differences and take little time. Make sure you are copying in the right direction: old -> new drive.
- DD / Clone
The best option in terms of a perfect and easy migration. This will leave you with an identical copy of your existing system, but it requires downtime.
Boot the machine into a live CD (SystemRescueCD is great) with both drives plugged in, and run dd.
NOTE: Make sure /dev/sdb is your NEW, EMPTY drive. This will take time depending on the size of your disk, but when complete your new drive will be ready to boot and will be an identical copy of your current system. Of course the new drive needs to be of the same or larger size.
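The dd command itself was left out above; a minimal sketch follows. The device names are assumptions (verify with lsblk or fdisk -l first, since dd silently overwrites whatever of= points at):

```shell
# clone_disk SRC DST -- raw block-for-block clone. DST must be at least
# as large as SRC, and EVERYTHING currently on DST is destroyed.
# bs=4M speeds up the copy; conv=noerror,sync keeps going past read
# errors, padding bad blocks so offsets stay aligned.
clone_disk() {
    dd if="$1" of="$2" bs=4M conv=noerror,sync
}
# e.g. clone_disk /dev/sda /dev/sdb   # double-check names with lsblk!
```

dd works on whole devices, so it copies the MBR/boot record and partition table along with the data, which is why no separate grub step is needed with this option.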
Your NIC naming will change on the new system; just modify the /etc/udev/rules.d/70-persistent-net.rules file and rename as required.
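For reference, entries in 70-persistent-net.rules look like the line below (the MAC address here is a placeholder). After cloning, the new NIC's MAC gets appended with a fresh name such as eth1; delete the stale line for the old hardware, set NAME= on the new line back to eth0, and reboot:

```
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:aa:bb:cc", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```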
Good luck.