Linux – Unable to resize EC2 EBS root volume

amazon-ec2, linux, partition

I have followed many tutorials that pretty much all say the same thing, which is basically the following (see the CLI sketch after the list):

  1. Stop the instance
  2. Detach the volume
  3. Create a snapshot of the volume
  4. Create a bigger volume from the snapshot
  5. Attach the new volume to the instance
  6. Start the instance back up
  7. Run resize2fs /dev/xxx
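
For reference, a rough sketch of steps 1–6 with the AWS CLI; every ID below is a placeholder, and the size, zone, and device names are examples rather than values from the question:

# all IDs, sizes, and zones are placeholders
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0aaaaaaa
aws ec2 create-snapshot --volume-id vol-0aaaaaaa --description "pre-resize backup"
aws ec2 create-volume --snapshot-id snap-0bbbbbbb --size 20 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0ccccccc --instance-id i-0123456789abcdef0 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0123456789abcdef0
# then, on the instance:
sudo resize2fs /dev/xvde1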

However, step 7 is where the problems start. Running resize2fs always tells me that the filesystem is already xxxxx blocks long and does nothing, even with -f passed. So I moved on to the tutorials that go one step further, which all basically say the same thing (a partition-level sketch follows this list):

  1. Delete all partitions
  2. Recreate them as they were, but with the bigger sizes
  3. Reboot the instance and run resize2fs
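
As a minimal sketch, that delete-and-recreate step means recreating the partition with the same starting sector but a larger end; this only works cleanly if the partition being grown is the last one on the disk, and the device name here is a placeholder:

sudo fdisk -u /dev/xvdf
#   p  - print the table and note partition 1's starting sector
#   d  - delete partition 1
#   n  - recreate partition 1 with the SAME starting sector and a larger end
#   a  - toggle the boot flag back on if it was set before
#   w  - write the table
sudo partprobe /dev/xvdf    # or reboot so the kernel re-reads the partition table
sudo resize2fs /dev/xvdf1

On images that ship cloud-utils, sudo growpart /dev/xvdf 1 does the delete/recreate step in one go.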

(I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system log in the AWS console doesn't show any errors; it does, however, stop at the GRUB bootloader, which suggests to me that it doesn't like the partitions (yes, the boot flag was toggled on the partition, with no effect). The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to reports that the partition has an invalid magic number and the superblock is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem.
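A few read-only checks (device name is a placeholder) can tell whether the superblock is really damaged or whether the recreated partition simply no longer starts at the same offset as the filesystem:

sudo file -s /dev/xvdf1       # should identify an ext2/3/4 filesystem, not just "data"
sudo dumpe2fs -h /dev/xvdf1   # dumps the superblock; "Bad magic number" usually means
                              # the partition no longer begins where the filesystem does
sudo e2fsck -n /dev/xvdf1     # check only, never write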

Can anybody shed some light on what I could be doing wrong?


Edit

On my new 20 GB volume built from the 6 GB image, df -h says:

Filesystem            Size  Used Avail Use% Mounted on
/dev/xvde1            5.8G  877M  4.7G  16% /
tmpfs                 836M     0  836M   0% /dev/shm 

And fdisk -l /dev/xvde says:

Disk /dev/xvde: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7d833f39

    Device Boot      Start         End      Blocks   Id  System
/dev/xvde1               1         766     6144000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/xvde2             766         784      146432   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.

Also, sudo resize2fs /dev/xvde1 says:

resize2fs 1.41.12 (17-May-2010)
The filesystem is already 1536000 blocks long.  Nothing to do!
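
Assuming the default 4 KiB filesystem block size, those 1536000 blocks are exactly the 6144000 one-KiB blocks fdisk shows for /dev/xvde1, which is why resize2fs has nothing to do: the filesystem already fills the partition, and it is the partition, not the volume, that is still 6 GB.

echo $((1536000 * 4096))   # 6291456000 bytes reported by resize2fs (4 KiB blocks)
echo $((6144000 * 1024))   # 6291456000 bytes in the partition per fdisk (1 KiB blocks)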

Best Answer

I would take the safer route; what you are doing is already about as complicated. Attach the old volume to an instance launched from Amazon Linux, Ubuntu, or whatever you are comfortable using, and mount it read-only. Then create a new volume of the larger size you need and attach it to the same instance at another device letter. Format it with the same label (or the same UUID, if your fstab mounts by UUID; there is no real need for that in AWS, but that is no assurance it isn't being done). Mount it read-write and copy the file tree from the old volume to the new one; cp or rsync will do.
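
As a minimal sketch of that sequence, assuming ext4, /dev/xvdf1 for the old root filesystem, /dev/xvdg1 for the new one, and that you reuse whatever label e2label reports:

sudo mkdir -p /mnt/old /mnt/new
sudo mount -o ro /dev/xvdf1 /mnt/old     # old volume, read-only
sudo e2label /dev/xvdf1                  # note the existing label, if any
sudo mkfs.ext4 -L "/" /dev/xvdg1         # new, larger volume; reuse the same label
sudo mount /dev/xvdg1 /mnt/new
sudo rsync -aAXH /mnt/old/ /mnt/new/     # copy the tree, preserving attributes and hard links
sudo umount /mnt/old /mnt/new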

You may need to take other steps to make it bootable if you are not using PVGRUB "bootloader kernels" to load your real kernel from the volume.
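
If the instance does boot through a PV-GRUB AKI, it reads the GRUB config from the volume itself, so after the copy it is worth checking that the config and kernel are in place (device and paths below are the conventional ones, not taken from the question):

sudo mount /dev/xvdg1 /mnt/new
ls /mnt/new/boot/grub/              # menu.lst (or grub.conf) should be present
cat /mnt/new/boot/grub/menu.lst     # its default entry should point at a kernel under /boot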

Be sure to make a snapshot of the old volume before this, and a snapshot of the new volume after this.
