So it's been a few days and I still cannot connect to my new HVM EC2 instance running Ubuntu 16. For reference, I am trying to upgrade our server from an m3 instance running Ubuntu 16 to a C5 instance running Ubuntu 16. For almost every method I've tried, I am able to get to the point where I stop my new C5 instance, detach all volumes, and attach the newly updated source volume as /dev/sda1, but when I go to connect to the instance, I always end up timing out. Amazon's status check also fails, saying the instance is unreachable. However, the system log shows no issues when starting up.
I've tried doing everything in this post, and this post as well. I've looked on other sites and given this and this a try. I've even tried both the ec2 command line tools method and converting an AMI from the EC2 console (online); however, I either cannot launch a C5 instance with the converted AMI, or the instance stops and fails (in the case of conversion via the command line).
The only thing I can really think of that might be causing it is the naming convention for the partitions on the C5 instance. Every single guide I've seen uses xvda/xvdf/xvdg. I could be wrong, but I do not have these disks or partitions; instead I have nvme0n1, nvme0n1p1 (the new HVM root), nvme1n1, and nvme1n1p1. When I tried the HVM / source / target disk method, I had nvme0n1/nvme0n1p1, nvme1n1 (target, where everything should end up), and nvme2n1/nvme2n1p1 (source, where everything was from, on the m3). I found this Amazon post about nvme, so I don't think this should be an issue, as I'm just using the correct disk/partition when mounting under /mnt, i.e. I call mkdir -p /mnt/target && mount /dev/nvme1n1 /mnt/target instead of mkdir -p /mnt/target && mount /dev/xvdf /mnt/target. But nothing so far has worked: my instance becomes unreachable the moment I attach the target as /dev/sda1.
So, is there something that I'm missing when doing this with disks named nvme*? Is there any other information or debugging output I can provide to help pin down the issue?
Best Answer
I realize that this question wasn't seen very much, but just in case, I'm hoping my results can help out someone in the future (maybe even me the next time I attempt this). I would like to thank Steve E. from Amazon support for helping me get my instance migrated <3
Anyways, there were 2 issues when migrating my Ubuntu 16.04 M3 (PV) instance to an Ubuntu 16.04 C5 (HVM) instance. The first issue was that the new C5 instances use the new NVMe naming convention, so older tutorials about migrating PV to HVM don't work quite the same way. The other issue was that my M3 (PV) instance had been through in-place Ubuntu upgrades: I had actually gone from Ubuntu 12 -> Ubuntu 14 -> Ubuntu 16 in the past year or so. This caused an issue where the cloud-init network files were never generated, and so my instance could not be reached.
To migrate an Ubuntu 16.04 PV instance to an HVM instance using the new nvme naming convention, do the following:
Pre-Requisites Summary:
Before starting, make sure to install the following on your PV instance:
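The exact package list from the original post wasn't preserved here. What the later grub step needs is grub-install available on the copied system, so something like the following is a reasonable guess (hypothetical package set; adjust to what your setup already has). Since there is no point running apt outside the instance, the run helper just prints each command unless you set DRY_RUN=0:

```shell
# Assumed prerequisite packages (hypothetical; the original list was lost).
# grub-pc provides grub-install, used later to make the target bootable.
# Commands are printed by default; set DRY_RUN=0 to actually run them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run apt-get update
run apt-get install -y grub-pc
```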
Launch a fresh C5 (HVM) Ubuntu 16.04 instance; its small root volume is only used to boot during the conversion. Restore a snapshot of your PV root volume and attach it to the HVM instance as /dev/sdf (on the Ubuntu system, the name will be nvme1n1). Create a blank volume of at least the same size and attach it as /dev/sdg (on the Ubuntu system, the name will be nvme2n1).

Migration:
Once logged into your instance, use sudo su to execute all commands as the root user.

Display your volumes. nvme0n1 is the HVM root you just created (used just to boot this time), nvme1n1 is the restored PV root (it will be converted to HVM), and nvme2n1 is the blank volume (it will receive the conversion from the PV root, nvme1n1).

Create a new partition on nvme2n1 (nvme2n1p1 will be created).

Check the 'source' volume and minimize the size of the original filesystem to speed up the process. We do not want to copy free disk space in the next step.
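The actual commands for these first steps were stripped from the post; under the assumptions above (the source filesystem is ext4 and sits directly on nvme1n1, and nvme2n1 is blank), a sketch could be the following. These commands are destructive, so the run helper only prints them unless you set DRY_RUN=0:

```shell
# Sketch: list volumes, partition the target, shrink the source filesystem.
# Assumes nvme1n1 = restored PV root (ext4 directly on the volume) and
# nvme2n1 = blank target. Printed by default; DRY_RUN=0 executes for real.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run lsblk                                    # display your volumes

# One partition spanning the whole target disk -> creates nvme2n1p1.
run parted --script /dev/nvme2n1 mklabel msdos mkpart primary 1MiB 100%

# Check the source fs, then shrink it to its minimum so the copy stays small.
run e2fsck -f /dev/nvme1n1
run resize2fs -M /dev/nvme1n1
```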
Duplicate 'source' to 'destination' volume
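The copy command itself wasn't preserved either; a plain dd from the source volume onto the new target partition is the usual approach for this kind of conversion (device names assumed as above, same dry-run guard):

```shell
# Copy the minimized source filesystem onto the target partition.
# With the fs minimized, you can also add count= to copy only the used part.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run dd if=/dev/nvme1n1 of=/dev/nvme2n1p1 bs=1M status=progress
```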
Resize the 'destination' volume to maximum:
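Presumably this was a resize2fs call: after the copy, grow the filesystem back out to fill the partition (a sketch, same guard as before):

```shell
# Grow the copied filesystem to fill nvme2n1p1.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run e2fsck -f /dev/nvme2n1p1    # resize2fs wants a clean filesystem
run resize2fs /dev/nvme2n1p1    # no size argument = grow to maximum
```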
Prepare the destination volume:
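"Preparing" the destination here most likely means mounting it along with the pseudo-filesystems grub will need inside the chroot. /mnt/target matches the mount point used earlier in the question; the bind mounts are an assumption:

```shell
# Mount the target and bind the pseudo-filesystems grub needs in a chroot.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mkdir -p /mnt/target
run mount /dev/nvme2n1p1 /mnt/target
for fs in dev proc sys; do
    run mount -o bind /$fs /mnt/target/$fs
done
```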
chroot to the new volume and reinstall grub on the chrooted volume:
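A sketch of the chroot and grub reinstall, assuming the mount point above. Note that grub-install targets the whole disk (nvme2n1), not the partition:

```shell
# Reinstall grub from within the copied system.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run chroot /mnt/target grub-install --recheck /dev/nvme2n1
run chroot /mnt/target update-grub
```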
Exit the chroot, then shut down the instance.
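The cleanup before shutdown might look like this (assumed, mirroring the mounts made earlier; same guard):

```shell
# Undo the bind mounts, unmount the target, and power off.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

for fs in dev proc sys; do
    run umount /mnt/target/$fs
done
run umount /mnt/target
run shutdown -h now
```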
After the conversion you now need to do this: detach the 3 volumes you previously had on the HVM instance, attach the last volume you created (the blank one) as /dev/sda1 on the console (it was previously attached as /dev/nvme2n1), and start the HVM instance.

The new HVM instance should now boot successfully and will be an exact copy of the old source PV instance (if you used the correct source volume). Once you have confirmed that everything is working, the source instance can be terminated.
Updating network configuration (optional)
Now, the steps above will work for the majority of people here. However, my instance was still not reachable. The reason was that I had upgraded Ubuntu in place on my instance instead of starting from a fresh image. This left the eth0 config activated, and no 50-cloud-init.cfg config file.

If you already have the file /etc/network/interfaces.d/50-cloud-init.cfg, then you can follow along and update that file instead of creating a new one. Also assume all commands are run via sudo su.
Shut down the instance, detach the volumes, and enter the same configuration as before: attach the 8GB volume as /dev/sda1 and your final destination volume as /dev/sdf. Start the instance up and log in.

Mount /dev/sdf, which should now be nvme1n1p1.
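The mount commands were lost from the post; a sketch, reusing the /mnt/target path from earlier (and chrooting in, since the later steps exit a chroot):

```shell
# Mount the destination volume (now nvme1n1p1) and chroot into it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run mkdir -p /mnt/target
run mount /dev/nvme1n1p1 /mnt/target
run chroot /mnt/target    # drops you into a shell on the destination volume
```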
Either create or update the file /etc/network/interfaces.d/50-cloud-init.cfg, filling it with the following:
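The file contents were lost from the original post. A reconstruction based on what cloud-init generates on a fresh Ubuntu 16.04 HVM image would look like this; the ens5 interface name is an assumption (it is what the ENA interface typically gets on a C5), so confirm yours with ip a first:

```
# /etc/network/interfaces.d/50-cloud-init.cfg
# (reconstructed; replace ens5 with your actual interface name)
auto lo
iface lo inet loopback

auto ens5
iface ens5 inet dhcp
```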
Exit the chroot (exit) and shut down the instance (shutdown -h now). Then repeat the final step from before: detach the volumes, attach the destination volume as /dev/sda1, and start the instance.
You should be done!