My instance on Google Compute Engine is not booting properly, and I am unable to SSH into it. I have a lot of data on the instance. How can I recover it?
The logs are as follows. When I check from Windows whether it is on the network, I get the NAT IP, but I cannot SSH into the instance (which was working fine before), nor can I SSH from the browser.
[ 0.519999] md: autorun ...
[ 0.520794] md: ... autorun DONE.
[ 0.521761] VFS: Cannot open root device "sda1" or unknown-block(0,0): error -6
[ 0.523744] Please append a correct "root=" boot option; here are the available partitions:
[ 0.525886] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
[ 0.527829] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.19.0-25-generic #26~14.04.1-Ubuntu
[ 0.529875] Hardware name: Google Google, BIOS Google 01/01/2011
[ 1.656059] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
Best Answer
During the migration from trial to paid user, I lost my running instance with similar symptoms. However, in my case, the "auto-delete the disk when deleting the instance" flag was checked, which prevented me from using the method described above. So here's how I was able to recover my drive:
First and foremost, do not delete your corrupted instance. You will need it.
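The general idea is to keep the broken instance's disk alive and attach it to a working instance so the data can be copied off. A sketch using the gcloud CLI, assuming placeholder names (`broken-instance`, `rescue-instance`, disk `gmap-server`, zone `us-central1-a` — substitute your own):

```shell
# Make sure the disk survives even if the instance is deleted later.
gcloud compute instances set-disk-auto-delete broken-instance \
    --disk=gmap-server --no-auto-delete --zone=us-central1-a

# Stop the broken instance so its boot disk can be detached.
gcloud compute instances stop broken-instance --zone=us-central1-a

# Detach the disk, then attach it to a healthy instance as a data disk.
gcloud compute instances detach-disk broken-instance \
    --disk=gmap-server --zone=us-central1-a
gcloud compute instances attach-disk rescue-instance \
    --disk=gmap-server --zone=us-central1-a
```

The same steps can be done from the web console (stop instance, edit, untick auto-delete, then attach the disk to another VM under "Additional disks").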
With both disks attached, blkid showed the original partition and the recovered one carrying an identical LABEL and UUID:
$ sudo blkid
/dev/sda1: LABEL="cloudimg-rootfs" UUID="87f65d22-c9a9-428c-b1ab-b4ad9f8e4c05" TYPE="ext4"
/dev/sdb1: LABEL="cloudimg-rootfs" UUID="87f65d22-c9a9-428c-b1ab-b4ad9f8e4c05" TYPE="ext4"
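Once the recovered partition is visible (here as /dev/sdb1, per the blkid output above), the data can be copied off by mounting it read-only. A sketch — the mount point and copy path are examples, not from the original answer:

```shell
# Mount the recovered partition read-only so nothing on it is modified.
sudo mkdir -p /mnt/recovery
sudo mount -o ro /dev/sdb1 /mnt/recovery

# Copy out whatever you need (example path), then unmount.
cp -a /mnt/recovery/home /safe/place/
sudo umount /mnt/recovery
```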
Here's how it looked on my dashboard.
In my case, to my great surprise, the kernel booted from the recovered drive (gmap-server) and I was back in business. I have no idea how the kernel picked this one over the disk created with the instance. If anyone knows, please chime in here.
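A plausible explanation (my assumption, not verified against GCE internals): since both ext4 filesystems carry the identical LABEL and UUID, a root=UUID=... or root=LABEL=... kernel parameter is ambiguous, and the initramfs simply takes whichever device it enumerates first. Giving the secondary copy a fresh identity removes the ambiguity; run this on the attached copy, never on the disk you are booting from:

```shell
# Assign a new random UUID to the attached copy so the two
# ext4 filesystems no longer collide at boot.
sudo tune2fs -U random /dev/sdb1

# Optionally give it a distinct label as well.
sudo e2label /dev/sdb1 recovered-rootfs
```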