What you did should work, though I've not done it that way myself before. As you are using a virtualisation solution, you could do what I do to expand LVM-based VMs: add a new virtual disk and add it to your LVM configuration - this way you don't have to mess about with partitioning at all.
Once you add the drive and restart the VM you'll see a new hard drive listed under /dev (such as /dev/sdb). Mark it as a physical volume (pvcreate /dev/sdb) and add it to your volume group (vgextend VolGroupName /dev/sdb).
Now that you have a larger volume group, you can expand the logical volumes into the new space with lvresize and grow the filesystems into the extra room created in the volumes (with resize2fs for ext2/3/4; other formats have their own tools).
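For example, assuming the new disk shows up as /dev/sdb, the volume group is called VolGroupName (as above) and the logical volume you want to grow is lv_root on ext4 (both placeholder names - check yours with lsblk, vgs and lvs):
pvcreate /dev/sdb                           # mark the new virtual disk as a physical volume
vgextend VolGroupName /dev/sdb              # grow the volume group onto it
lvresize -L +10G /dev/VolGroupName/lv_root  # hand 10G of the new space to the logical volume
resize2fs /dev/VolGroupName/lv_root         # grow the ext4 filesystem to fill the volume
lvresize also accepts --resizefs, which combines the last two steps into one.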
Why not make your life easier and use LVM for the extension?
If I were in your place, I would back up the data from the server (20 GB is not much at all), then:
umount <respective mount point>   # unmount the filesystem held on the LVs
lvremove media                    # remove all logical volumes in the volume group
vgremove media                    # remove the volume group
pvremove /dev/sda4                # wipe the LVM labels from both physical volumes
pvremove /dev/sda5
Partition /dev/sda:
fdisk /dev/sda
p -> print
d 5 -> delete /dev/sda5
d 4 -> delete /dev/sda4
d 3 -> delete /dev/sda3
d 2 -> delete /dev/sda2
p -> print to confirm your changes
n -> create a new partition; accept the defaults to get the maximum disk space possible, and choose a primary partition (LVM will manage it afterwards)
t -> change the type of the partition to Linux LVM (8e)
w -> write the changes to disk and exit
As /dev/sda1 is in use, the changes will be visible after a reboot. Then fdisk -l /dev/sda will output:
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002948a
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 499711 248832 83 Linux
/dev/sda2 499712 20969471 10233857 8e Linux LVM
Add /dev/sda2 to LVM, then create the volume group and logical volumes:
pvcreate /dev/sda2
vgcreate media /dev/sda2
lvcreate --size 14G --name root media
lvcreate --size 1G --name swap_1 media
(In my experience, --extents is more precise than --size; verify with vgdisplay that there are no free extents left.)
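If you want swap_1 to take exactly whatever space is left, the --extents form avoids any rounding - a sketch, using the same volume group as above:
lvcreate --extents 100%FREE --name swap_1 media   # claim every remaining free extent
vgdisplay media                                   # "Free PE / Size" should now read 0 / 0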
Create filesystems and enable swap for the newly created logical volumes.
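For example, with ext4 for the root volume (the filesystem choice is yours; device paths follow the names created above):
mkfs.ext4 /dev/media/root    # create the filesystem on the root LV
mkswap /dev/media/swap_1     # initialise the swap area
swapon /dev/media/swap_1     # enable it now; add an /etc/fstab entry to keep it across reboots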
Advantage of this setup: flexibility. A logical volume may be smaller than its volume group, so its filesystem will be smaller too; to increase the size later, use lvextend and then grow the filesystem.
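For example (the extra 2G is illustrative):
lvextend -L +2G /dev/media/root   # grow the logical volume by 2G
resize2fs /dev/media/root         # grow the ext4 filesystem to match, online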
Disadvantage: you have to delete all partitions and back up and restore the data.
Best Answer
Actually, in your link to the Red Hat article, step one says:
And in the link to the VMware article, there is a note at the top:
So you can do either/or. The recommended way in almost all cases (as of RHEL7/GRUB2 -- sorry, I can't speak for VMware) is to NOT make a partition, as you're then just adding an extra, unnecessary layer (disk -> partition -> PV instead of disk -> PV).
There were at one time alignment issues when not partitioning, which caused a performance penalty; however, you can compensate for the missing partition in LVM on systems that supposedly require one.
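If you are on such a system, a sketch of the compensation (the 1 MiB boundary is the usual safe choice, not something mandated here):
pvcreate --dataalignment 1m /dev/sdb   # start the data area on a 1 MiB boundary
pvs -o +pe_start /dev/sdb              # verify where the first physical extent begins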
The recommendation to partition used to be based on the fact that other OSes can't read the LVM metadata, and since there is no partition, they will show the disk as being unformatted instead of showing it as having a partition. In fact, even on Linux, a disk with no partition appears unused to all of the partitioning tools (fdisk, gdisk, parted, etc.), because they are designed to look for partitions.
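You can see both behaviours side by side; assuming a whole-disk PV on /dev/sdb:
fdisk -l /dev/sdb   # no partition table -- the disk looks unused
pvs /dev/sdb        # LVM reads its metadata just fine
blkid /dev/sdb      # reports TYPE="LVM2_member"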
If you're running VMware, I'm assuming you're working in a corporate environment where there are controls in place -- SAs won't have any reason to be monkeying with partitions, and you would never have Windows or any "other OS" installed on the machine unless it was being repurposed. Therefore, the recommendation to partition does not apply.
The best practice in older versions of RHEL was to partition: https://unix.stackexchange.com/questions/76588/what-is-the-best-practice-for-adding-disks-in-lvm
The revised recommendation of using the whole disk is of course new with RHEL 7, as older systems use GRUB instead of GRUB2. On those older systems the reason to keep partitioning is that /boot has to live on a plain partition, because legacy GRUB cannot read LVM volumes.
In your case, you're not referring to the OS disks, so even in older systems you can safely keep the LVM metadata directly on the raw disk without any partition.
There is one case where you will want to use a protective MBR: when you are using LVM on a physical drive passed through to a guest OS, instead of using a vmdk or other type of file. But even then, as noted by @shodanshok, you can use the filter in lvm.conf on the hypervisor to hide those PVs.
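On the hypervisor that would look something like this in /etc/lvm/lvm.conf (the device pattern is an example -- match whichever disk is passed through to the guest):
devices {
    # reject the guest's passthrough disk, accept everything else
    filter = [ "r|^/dev/sdb$|", "a|.*|" ]
}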
If this is a SAN-backed physical disk, there is a discussion here: https://access.redhat.com/discussions/1488023
Oracle DBAs also recommend using the raw disk: http://www.dba-oracle.com/real_application_clusters_rac_grid/raw_devices_linux.html
Finally, there is a discussion about this on Reddit as well: https://www.reddit.com/r/sysadmin/comments/292qf2/lvm_physical_disk_vs_partitions/
Pretty much everyone agrees: use the whole disk.
Further reading about booting from an LVM-backed disk, if you're interested: https://unix.stackexchange.com/questions/136614/how-does-grub2-load-the-kernel-from-an-lvm-volume
http://forums.fedoraforum.org/showthread.php?t=263325
So to summarize: use the whole disk. One day, partitioning tools could possibly be phased out of use in *nix entirely, in favor of things like the zfs tools, btrfs tools, LVM, or some combination of the three.