This is a fairly straightforward process, so I’ve created a quick reference that can hopefully benefit someone else.
First, why use the LVM? The advantages are clear: it gives you the ability to resize partitions dynamically and to span them across multiple block devices. You can also take snapshots, but I’ll save that for a future post.
These are our storage layers, in ascending order of abstractness:
- Physical Disk
- Disk Partition (Primary and Logical)
- Physical Volume
- Volume Group
- Logical Volume
Here’s a quick summary of what we’re going to do:
- Increase the size of the physical disk.
- Create new logical partitions from that unallocated disk space.
- Create new Physical Volumes in the LVM from those partitions.
- Extend the existing LVM Volume Group by adding the new Physical Volumes to it.
- Extend the size of the Logical Volumes that are mapped to certain filesystems, from unallocated space in the Volume Group.
- Grow the actual filesystems themselves.
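For the impatient, the whole procedure boils down to a handful of commands. Here’s a sketch that only *prints* each step rather than running it, so it’s safe to execute anywhere; the device names (/dev/sda6) and sizes are taken from the example worked through below, so adapt them to your own layout before removing the echo:

```shell
# Preview of the full procedure; "run" only prints each command, so this
# script is safe to execute as-is. Drop the echo to perform the real steps
# (as root, with your own device names and sizes).
run() { echo "+ $*"; }

run pvcreate /dev/sda6                      # register the new partition as a PV
run vgextend thor /dev/sda6                 # add the PV to the volume group
run lvextend -L +500M /dev/mapper/thor-var  # grow the logical volume
run resize2fs /dev/thor/var                 # grow the ext filesystem itself
```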
So, suppose you have a VM or a VPS named thor with a single 5GB disk. You originally set up LVM through the OS installer and created partitions through the installation wizard (or used Kickstart), called the volume group “thor,” and gave each logical volume (which looked a lot like a partition in the installer) a friendly label like “home,” “var,” etc. You allocated much of the space to the /home partition, but now you’re getting nervous about /var filling up. Here’s the current status of things:
# df -h
Filesystem             Size  Used Avail Use% Mounted on
tmpfs                  124M     0  124M   0% /lib/init/rw
udev                   120M  128K  120M   1% /dev
tmpfs                  124M     0  124M   0% /dev/shm
/dev/sda1              228M   15M  202M   7% /boot
/dev/mapper/thor-root  322M  139M  167M  46% /
/dev/mapper/thor-home  1.6G   37M  1.5G   3% /home
/dev/mapper/thor-tmp   124M   13K  118M   1% /tmp
/dev/mapper/thor-usr   1.7G  603M  979M  39% /usr
/dev/mapper/thor-var   843M  484M  317M  61% /var
# swapon -s
Filename   Type       Size    Used  Priority
/dev/dm-1  partition  241656  12    -1
# ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 59 Oct 11 23:13 control
lrwxrwxrwx 1 root root      7 Oct 11 23:13 thor-home -> ../dm-5
lrwxrwxrwx 1 root root      7 Oct 11 23:13 thor-root -> ../dm-0
lrwxrwxrwx 1 root root      7 Oct 11 23:13 thor-swap_1 -> ../dm-1
lrwxrwxrwx 1 root root      7 Oct 11 23:13 thor-tmp -> ../dm-4
lrwxrwxrwx 1 root root      7 Oct 11 23:13 thor-usr -> ../dm-2
lrwxrwxrwx 1 root root      7 Oct 11 23:20 thor-var -> ../dm-3
# fdisk -l
Disk /dev/sda: 5.3 GB, 5368709120 bytes
Device Boot      Start   End   Blocks   Id  System
/dev/sda1   *        1    32    248832  83  Linux
/dev/sda2           32   653   4992000   5  Extended
/dev/sda5           32   653   4990976  8e  Linux LVM
Output from fdisk has been snipped to show just the pertinent information.
As you can see, /boot is a physical disk partition (although a separate /boot is no longer strictly necessary, since modern bootloaders such as GRUB 2 can read a kernel from an LVM logical volume), while /, /home, /tmp, /usr, and /var look like some sort of LVM thing. This is correct: ‘dm’ stands for ‘device mapper’.
From an LVM perspective, here’s what we have:
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               thor
  PV Size               4.76 GiB / not usable 1.81 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1218
  Free PE               0
  Allocated PE          1218
  PV UUID               SjKAWG-ak9K-1Qf3-0Jmd-8xvY-SJcL-mN4QW5
# vgdisplay
  --- Volume group ---
  VG Name               thor
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                6
  Open LV               6
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.76 GiB
  PE Size               4.00 MiB
  Total PE              1218
  Alloc PE / Size       1218 / 4.76 GiB
  Free  PE / Size       0 / 0
  VG UUID               OWp6UN-8ssJ-nPTG-3MUA-v29T-PJfR-wNp6nR
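The arithmetic behind those numbers is simple: a volume group is carved into fixed-size physical extents (PEs), and every size vgdisplay reports is just an extent count times the PE size. Using the values from the output above:

```shell
# VG Size = Total PE x PE Size: 1218 extents of 4 MiB each, shown in GiB.
awk 'BEGIN { printf "%.2f GiB\n", 1218 * 4 / 1024 }'
```

This prints 4.76 GiB, matching the VG Size line.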
Even if you don’t understand everything above, you can make it out pretty clearly that I have less than 5GiB of space total on the disk.
To remedy the situation, you figure an extra 10GB should be plenty to cover /var, add a little extra swap, and expand /home for anticipated growth. So you must first either increase the physical disk size in the host’s VM management software, or ask your VPS provider to do it for you. (Alternatively, you could simply add a second disk, but that creates more clutter, since it requires an extra file on the host.) After a reboot, you see:
# fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes
Great, the disk is bigger. That only completes step one, however; we need a larger partition. Since you can’t increase a physical partition’s size while it’s mounted (the process involves deleting it and recreating it at a larger size, which we can’t do while our OS is running on it), let’s just create a new one (don’t worry, the LVM will solve the problem of extending the existing filesystems onto this new partition). For this, I used cfdisk, which has a simple ncurses interface, so I have no output to paste below. In this case, I decided to create two additional 5GB logical partitions, so that I would have three Physical Volumes of approximately equal size.
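If you’d rather script this step than drive cfdisk interactively, parted can do the same job non-interactively. The offsets below are assumptions for a resized disk like this one, not a record of my cfdisk session, and the commands are echoed so you can preview them safely; remove the echoes and run as root to actually repartition:

```shell
# Sketch only: offsets and partition numbers are assumptions for this
# example. Remove the echoes (and run as root) to make real changes.
DISK=/dev/sda
echo parted -s "$DISK" resizepart 2 100%          # grow the extended container first
echo parted -s "$DISK" mkpart logical 5GiB 10GiB  # becomes /dev/sda6
echo parted -s "$DISK" mkpart logical 10GiB 100%  # becomes /dev/sda7
echo parted -s "$DISK" set 6 lvm on               # tag the new partitions as LVM
echo parted -s "$DISK" set 7 lvm on
echo partprobe "$DISK"                            # ask the kernel to re-read the table
```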
I now have:
# pvscan
  PV /dev/sda5   VG thor   lvm2 [4.76 GiB / 0    free]
  Total: 1 [4.76 GiB] / in use: 1 [4.76 GiB] / in no VG: 0 [0   ]
Wait, where are my new physical volumes? We haven’t created them yet: we have new logical partitions, but no LVM PVs on top of them. So let’s fix that.
# pvcreate /dev/sda6
  Physical volume "/dev/sda6" successfully created

# pvcreate /dev/sda7
  Physical volume "/dev/sda7" successfully created
Now we have them, but they haven’t yet been assigned to a Volume Group:
# pvscan
  PV /dev/sda5   VG thor   lvm2 [4.76 GiB / 0    free]
  PV /dev/sda6             lvm2 [4.76 GiB]
  PV /dev/sda7             lvm2 [5.24 GiB]
  Total: 3 [14.76 GiB] / in use: 1 [4.76 GiB] / in no VG: 2 [10.00 GiB]
Simple enough, we will extend the Volume Group:
# vgextend thor /dev/sda6
  Volume group "thor" successfully extended

# vgextend thor /dev/sda7
  Volume group "thor" successfully extended
Now we have a Volume Group that spans all of the Physical Volumes (and logical partitions):
# vgdisplay thor
  --- Volume group ---
  VG Name               thor
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  11
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                6
  Open LV               6
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               14.75 GiB
  PE Size               4.00 MiB
  Total PE              3777
  Alloc PE / Size       1218 / 4.76 GiB
  Free  PE / Size       2559 / 10.00 GiB
  VG UUID               OWp6UN-8ssJ-nPTG-3MUA-v29T-PJfR-wNp6nR
It’s kind of as if we took a few disks and spanned them in a JBOD array, and are now ready to present them to the OS as a single device.
So now we can extend the logical volumes to our desired size. Let’s give /var an extra 500MB:
# lvextend -L +500M /dev/mapper/thor-var
  Extending logical volume var to 1.32 GiB
  Logical volume var successfully resized
And to confirm:
# lvdisplay /dev/thor/var
  --- Logical volume ---
  LV Name               /dev/thor/var
  VG Name               thor
  LV UUID               MJAjZB-EC1P-ODqE-d1KC-9jQG-3J8h-ZfVUyz
  LV Write Access       read/write
  LV Status             available
  # open                1
  LV Size               1.32 GiB
  Current LE            339
  Segments              2
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          254:3
Remember, this is /dev/dm-3, which has symlinks pointed to it at /dev/mapper/thor-var and also /dev/thor/var. Now, we’ve finally increased the size of the layer directly underneath the filesystem, so it’s time to resize the filesystem itself:
# resize2fs /dev/thor/var
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/thor/var is mounted on /var; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/thor/var to 347136 (4k) blocks.
The filesystem on /dev/thor/var is now 347136 blocks long.
And we can confirm:
# df -h /dev/mapper/thor-var
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/thor-var  1.4G  485M  785M  39% /var
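As an aside, newer lvm2 releases can collapse the last two steps into one: lvextend’s -r (--resizefs) flag runs the appropriate filesystem resizer for you after growing the LV. It’s echoed here rather than run, since it needs root and a real volume group:

```shell
# One-step alternative to lvextend followed by resize2fs (requires a
# reasonably recent lvm2). Echoed so this is safe to run anywhere.
echo lvextend -r -L +500M /dev/mapper/thor-var
```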
One great benefit of adding all the new disk space to the LVM is that we can easily see how much disk space has been allocated to logical volumes, as well as how much is still unallocated (look towards the bottom):
# vgdisplay thor
  --- Volume group ---
  VG Name               thor
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                6
  Open LV               6
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               14.75 GiB
  PE Size               4.00 MiB
  Total PE              3777
  Alloc PE / Size       1343 / 5.25 GiB
  Free  PE / Size       2434 / 9.51 GiB
  VG UUID               OWp6UN-8ssJ-nPTG-3MUA-v29T-PJfR-wNp6nR
In old-fashioned terms, we’ve still got over 9GiB of disk space to allocate to other partitions. Said correctly, we have over 9GiB of unallocated space in the VG that can be allocated to LVs.
So we can now extend our swap logical volume as well:
# lvextend -L +256M /dev/mapper/thor-swap_1
  Extending logical volume swap_1 to 492.00 MiB
  Logical volume swap_1 successfully resized

# swapoff -v /dev/thor/swap_1
swapoff on /dev/thor/swap_1

# mkswap /dev/thor/swap_1
mkswap: /dev/thor/swap_1: warning: don't erase bootbits sectors
 on whole disk. Use -f to force.
Setting up swapspace version 1, size = 503804 KiB
no label, UUID=f60c0c9a-9ff5-4766-875e-4d9cb17b6891

# swapon -va
swapon on /dev/mapper/thor-swap_1
swapon: /dev/mapper/thor-swap_1: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/thor-swap_1: pagesize=4096, swapsize=515899392, devsize=515899392
And for /home:
# lvextend -L +5G /dev/thor/home
  Extending logical volume home to 6.59 GiB
  Logical volume home successfully resized

# resize2fs /dev/thor/home
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/thor/home is mounted on /home; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/thor/home to 1726464 (4k) blocks.
The filesystem on /dev/thor/home is now 1726464 blocks long.
Now, let’s confirm:
# df -h
Filesystem             Size  Used Avail Use% Mounted on
tmpfs                  124M     0  124M   0% /lib/init/rw
udev                   120M  140K  120M   1% /dev
tmpfs                  124M     0  124M   0% /dev/shm
/dev/sda1              228M   15M  202M   7% /boot
/dev/mapper/thor-root  322M  139M  167M  46% /
/dev/mapper/thor-home  6.5G   38M  6.2G   1% /home
/dev/mapper/thor-tmp   124M   13K  118M   1% /tmp
/dev/mapper/thor-usr   1.7G  603M  979M  39% /usr
/dev/mapper/thor-var   1.4G  485M  785M  39% /var
# swapon -s
Filename   Type       Size    Used  Priority
/dev/dm-1  partition  503800  1212  -1
Here are some of the benefits of handling this the way we did:
- Only one physical disk. On the host, this means only one virtual disk file.
- Only one reboot was necessary, and that’s because we changed the size of the physical disk. Without the LVM, this would have required booting into an alternate OS, and we would have been limited to expanding only the last partition on the disk (into the space to the right of it).
- We can continue to add as much space in as small chunks as we want to using the LVM.
- We can easily see disk allocation stats in vgdisplay, so we know if we can grow partitions without performing any calculations.
- We gain the ability to take LVM snapshots.
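On the visibility point above: the terse counterparts to the display commands (pvs, vgs, lvs) give you the same allocation picture in one line per object, which is handy for scripts and quick checks. Echoed here since they need a real LVM stack and root to show anything:

```shell
# Compact status views (echoed for safety; run as root on a machine
# with LVM to see real output).
echo vgs -o +vg_free_count thor  # VG summary plus free extent count
echo lvs thor                    # one line per LV in the VG
echo pvs                         # PVs with VG membership and free space
```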
As usual, please let me know in the comments if I missed something.