====== LVM2 Quick Reference ======

===== Overview =====

LVM (Logical Volume Manager) allows you to take a portion of a block level device and use it as a repository for other block level devices. The most common example of a "block level device" is a hard drive, but a partition, a RAID set or an iSCSI share works just as well.

Basically, you set aside a block of storage and let it be controlled by LVM instead of the normal hard drive access. The block of storage can be the entire disk, a partition, or even an LVM block (yes, you can have LVM inside of LVM, and it is quite useful at times).
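
As a minimal sketch of the LVM-inside-LVM idea (the names inner and innervg are made up for illustration; vg0 is the volume group created in the example below):

<code bash>
# carve a logical volume out of an existing volume group
lvcreate -L 20G -n inner vg0
# use that logical volume as the physical volume of a second LVM layer
pvcreate /dev/vg0/inner
vgcreate innervg /dev/vg0/inner
</code>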

In the following example, we'll take a RAID set (named md0) and use the entire device as an LVM pv ("Physical Volume").

<code bash>
pvcreate /dev/md0
</code>
At this point, you can create a vg (Volume Group) using the pv. We'll call it "vg0".

<code bash>
vgcreate vg0 /dev/md0
</code>
Now, you have a volume group "vg0" which can hold one or more lv's (Logical Volumes). Let's create one named "testing" (the 10G size is just an example).

<code bash>
lvcreate -L 10G -n testing vg0
</code>
At this point, testing is an unformatted, unpartitioned block level device (hard drive). So, you can do anything you would normally do with it.

<code bash>
fdisk /dev/vg0/testing
mkfs.ext4 -m 0 -L testing /dev/vg0/testing
</code>
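
To confirm what you have built at each layer, the standard reporting commands work; a quick sketch (the mount point /mnt/testing is just for illustration):

<code bash>
# show physical volumes, volume groups and logical volumes
pvs
vgs
lvs
# mount the new file system and verify it
mkdir -p /mnt/testing
mount /dev/vg0/testing /mnt/testing
df -h /mnt/testing
</code>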

===== Make it Stop =====

Well, lvm really, really wants to make sure your stuff is there, even if the underlying volume is renamed, or even reformatted. Even after you remove the lv's, the vg's and the pv's, sometimes you still can not get it to release the devices so you can do anything with them. The following usually helps:

<code bash>
# sometimes helpful to remove the dm's first, but not always necessary
dmsetup ls
dmsetup remove <device name>
# now, tell it to release the lv's. If you do not specify a path, it
# releases everything
lvchange -an <path to lv>
vgchange -an <volume group>
</code>
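
For example, tearing down the vg0/testing volume built above (same names as in the earlier example) might look like this sketch:

<code bash>
umount /mnt/testing             # if it is still mounted
lvchange -an /dev/vg0/testing   # deactivate the logical volume
vgchange -an vg0                # deactivate the volume group
dmsetup ls                      # check for a leftover vg0-testing mapping
dmsetup remove vg0-testing      # only needed if it is still listed
</code>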
===== Too Much Information =====

By default, LVM scans every block device it can see, which can produce a lot of noise, and can even pick up pv's that live inside other volumes. You can limit what gets scanned with the filter setting in /etc/lvm/lvm.conf. Some examples:
<code python>
# only check RAID volumes (/dev/md0), ignore everything else
filter = [ "a|^/dev/md0$|", "r|.*|" ]
# only check /dev/sda, nothing else
filter = [ "a|^/dev/sda|", "r|.*|" ]
# Finally, accept sda, sdb, any md, then ignore anything else
filter = [ "a|^/dev/sda|", "a|^/dev/sdb|", "a|^/dev/md|", "r|.*|" ]
</code>

To determine which drives are actually local, you can use the **mount** command. For example:
<code bash>
mount | grep '^/dev'
</code>
will show all mounted drives which are local, ignoring nfs, tmpfs, etc...

After making the change, rescan LVM with one of the following commands:
<code bash>
vgscan
# OR (yes, start; not reload, not restart)
service lvm2 start
# OR
lvm vgchange -aay --sysinit
</code>
One example is a virtual server (Xen) which has Logical Volumes exported to running virtuals. In some cases, those "drives" contain pv's, vg's and lv's of their own, which the DOM0 then sees every time it scans, cluttering its reports with volumes it should never touch. Filtering the scan down to the local devices avoids this.
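
Note that on newer LVM releases the scanning daemon may ignore filter and only honor global_filter; if your filter seems to have no effect, try the same expression under the global_filter setting in /etc/lvm/lvm.conf. A sketch:

<code python>
# same syntax as filter; applied even by lvmetad-based scans
global_filter = [ "a|^/dev/md|", "r|.*|" ]
</code>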
===== Reduce the size of a Logical Volume (LV) with ext2/3/4 under it =====

Sometimes a logical volume is bigger than it needs to be, and you want the space back for something else. The rule is: shrink the file system first, then the LV, and never let the LV become smaller than the file system inside it.

For most file systems, you must unmount it first. If you want to resize /, you'll need to boot from a CD to do this. Now, do your work.
<code bash>
# umount the file system first! Very important
# umount /dev/vg0/testing
# You MUST do an fsck on the file system
e2fsck -f /dev/vg0/testing
# resize the file system to the minimum size possible
# The -p means "show me feedback"
resize2fs -pM /dev/vg0/testing
# now, reduce the actual partition size.
# It MUST be larger than the file system size
# the -L ###G tells it what the new size will be
lvreduce -L 200G /dev/vg0/testing
# grow the file system to the maximum of the partition
resize2fs /dev/vg0/testing
# check the file system again
e2fsck -f /dev/vg0/testing
</code>
You can now remount the file system.

This was based on an article at https://
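
If you want to know how far you can safely shrink before you start, resize2fs can estimate the minimum file system size without changing anything; a quick sketch using the example volume:

<code bash>
# -P prints the estimated minimum size (in file system blocks), nothing is modified
resize2fs -P /dev/vg0/testing
</code>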

===== Increase the size of a Logical Volume (LV) with ext2/3/4 under it =====

Sometimes you need more space in a logical volume you created. In this case, you must unmount the volume, grow the partition (LV) size, then grow the file system into it. This is the reverse of the above section.

We'll just use resize2fs to grow the file system, and lvextend to grow the partition.

<code bash>
# umount the file system first! Very important
# umount /dev/vg0/testing
# You MUST do an fsck on the file system
e2fsck -f /dev/vg0/testing
# now, increase the actual partition size.
# the -L ###G tells it what the new size will be
lvextend -L 200G /dev/vg0/testing
# grow the file system to the maximum of the partition
resize2fs /dev/vg0/testing
# check the file system again
e2fsck -f /dev/vg0/testing
</code>

Note: lvextend allows you some shorthand to grow, based on the size of the Volume Group (VG) or the amount that is available. In this case, you use the lower case "-l" parameter (extents instead of an absolute size) with a value of the form

+###% followed by one of the keywords VG, LV, PVS, FREE or ORIGIN

Thus, if we wanted to extend the LV by 75% of the free space remaining in the VG, you would say:

<code bash>
lvextend -l +75%FREE /dev/vg0/testing
</code>
According to the documentation (man page), using lvextend without the -L or -l parameters is equivalent to:

<code bash>
lvextend /dev/vg0/testing
# same as
lvextend -l +100%PVS /dev/vg0/testing
</code>
meaning it will try to take the entire size of the Physical Volume(s) backing the volume group (/dev/md0 in our example).
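
As an aside, recent versions of lvextend can grow the file system in the same step via the -r (--resizefs) flag, which calls fsadm for you; for ext3/ext4 this growth can even be done while the volume is mounted. A sketch, assuming the same example volume:

<code bash>
# grow the LV by 10G and resize the ext file system in one step
lvextend -r -L +10G /dev/vg0/testing
</code>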

===== Testing a change with ability to revert =====

Problem: You are using LVM as an underlying structure. You want to make changes, but are not sure if the changes will cause problems. The solution is to create a snapshot of the LV, make your modifications, then either merge the snapshot back in (to revert) or simply delete it (to keep the changes).

In this, I will assume volume group vg0 has a logical volume named virt with a size of 10G, and the changes will probably only be about 1 Gig. What that contains is irrelevant.
  - Shut down any processes accessing the logical volume. This is not required, but, for example, if you create the snapshot of a partition on a running virtual, it is the equivalent of turning the power off on a running program; possible data loss if you later revert to it.
  - Create the snapshot with the command <code bash>lvcreate -s -L 1G -n snap_virt /dev/vg0/virt</code>
    - The size of only a gig is useful to keep the amount of space needed to a minimum. You can, of course, make the snapshot the same size as the original. The caveat is, the size of the snapshot must be able to contain all changes made to the original system.
    - I gave the snapshot a name (-n parameter). If you do not do this, a name will be generated by LVM.
    - You can view how "full" the snapshot is with the **lvs** command (the Data% column).
  - Make your changes. For example, if your snapshot is of a Xen virtual, start the virtual back up and make the system changes.
  - If you have problems with the new state, revert to the original (see the consolidated sketch after this list):
    - Shut down any processes accessing the original volume (ie, shut down a xen virtual, or unmount a partition)
    - Revert the original with the command <code bash>lvconvert --merge /dev/vg0/snap_virt</code>
    - The parameter is the path to the snapshot. This rolls the original back to the state captured in the snapshot, then automatically deletes the snapshot.
  - If you had no problems and want the machine in the new state permanently, simply remove the snapshot with <code bash>lvremove /dev/vg0/snap_virt</code>
    - be sure you don't delete the wrong one; that is why I precede snapshot names with "snap_".
  - Please note: running an LV with a snapshot decreases efficiency. Many (most?) writes to the original generate a write to the snapshot, so you are decreasing disk access speed greatly. Don't leave spare snapshots lying around past the time you need them.
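
Putting the steps together, a consolidated sketch of the whole workflow, using the same vg0/virt example and the snap_virt name from above:

<code bash>
# 1. create the snapshot, with 1G of room to record changes
lvcreate -s -L 1G -n snap_virt /dev/vg0/virt
# 2. make your changes, keeping an eye on the snapshot's Data% column
lvs vg0
# 3a. to revert, roll the origin back and drop the snapshot
lvconvert --merge /dev/vg0/snap_virt
# 3b. or, to keep the changes, just delete the snapshot
lvremove /dev/vg0/snap_virt
</code>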

===== Enlarging a Physical Volume =====

Problem: You are using LVM as an underlying file structure on a virtual, and you need more room. The solution is to simply grow the underlying container, then use pvresize to let lvm2 know about it.

==== System ====
  * Xen hvm
  * Physical volume for the DOMU (hvm) is an LVM partition exported as a device (full disk). We'll call this /dev/vg0/virt; the DOMU sees it as /dev/xvdb.
  * One or more partitions on the exported device have been set up as a pv (Physical Volume) from within the DOMU. Yes, we're talking about an LVM running on top of an LVM. We'll call this /dev/xvdb1, and it is from the DOMU's perspective.
  * The partition used for the pv is the last one in the exported device. If this is NOT the case, stop now and figure out something else.

==== Solution ====
  - Shut down the DOMU
  - Increase the LV to the size you want. This sets it to 100G: <code bash>lvextend -L 100G /dev/vg0/virt</code>
  - Manually edit the partition table to "grow" the last partition (an alternative sketch follows this list): <code bash>fdisk /dev/vg0/virt</code>
    - use the "p" command to print the partition table, and note the start of the last partition
    - use the "d" command to delete the last partition (the data on disk is untouched)
    - use the "n" command to recreate it, larger
      - Use the same partition number
      - Use the same starting cylinder/sector
      - Let the ending cylinder/sector default to the end of the (now larger) device
    - use the "t" command to set the partition type back to 8e (Linux LVM), if necessary
    - use the "w" command to write the new partition table and exit
  - Start the DOMU and log into it
  - vgs # so we can see what we had
  - pvresize /dev/xvdb1
  - vgs # you should now see the additional 50G available to your DOMU
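
If the growpart utility (from the cloud-utils package) happens to be installed on the DOM0, steps 2 and 3 can be sketched without driving fdisk by hand; same example names as above:

<code bash>
# grow the container LV, then grow partition 1 inside it
lvextend -L 100G /dev/vg0/virt
growpart /dev/vg0/virt 1
</code>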

===== Swapping Physical Volumes =====

Problem: You want to swap out an existing Physical Volume for a new one. Maybe a faster drive, or a better RAID set. Doing this is a multi-step process, but it can be done with no downtime, though it will definitely put a strain on the drive subsystem, especially if you are moving from one software RAID set to another.

The following example assumes you have an existing Volume Group that consists of one hardware RAID partition (sdb1), and you want to move it to a software RAID partition (md0). I assume you know how to build the RAID array. To replace /dev/sdb1 with /dev/md0, you must first add md0 to the volume group, then move all data from sdb1 to it, then remove sdb1 from the volume group. Note: pvmove must move all allocated extents from sdb1 to md0 without causing any downtime on the server, so it takes a long time (hours, maybe days). However, if it dies in the middle, you can simply restart it with the same command.
<code bash>
# First, set a device to become a new physical volume.
# NOTE: This must be capable of holding all USED data in the current Volume Group
pvcreate /dev/md0
# Now, add it to the volume group (in this case, datalvm)
vgextend datalvm /dev/md0
# And move all data from the old pv to the new one
pvmove -v /dev/sdb1 /dev/md0
# once done (and it can take hours), remove the old pv from the volume group
vgreduce datalvm /dev/sdb1
# and, if you want, remove the PV flag from the volume.
pvremove /dev/sdb1
</code>
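
To sanity-check the move, pvs shows how the extents are distributed; before running vgreduce, the old PV should report all of its space as free (no allocated extents). A sketch:

<code bash>
# pv_free on /dev/sdb1 should equal pv_size once pvmove has finished
pvs -o pv_name,vg_name,pv_size,pv_free
</code>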
===== Links =====
  * [[https://