I want to tell Ubuntu to use the larger space, but resize2fs can't grow the filesystem beyond the existing partition, and fdisk doesn't let me change the partition size either.
Instead it only lets me delete the partition, create a new one at the larger size, and then rsync the files across with the right options to preserve hard links too. That's a lot of work, so I recently set it up using LVM. Now I can take the new, larger EBS disk, easily grow the LVM volume on it, then run a quick resize2fs to tell the ext4 filesystem that it has some new space, and bam, problem solved without having to copy hundreds of gigabytes of data.
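As a rough sketch of that workflow (the device name /dev/xvdf and the volume names data/storage are hypothetical; adjust them to your own setup):

    # after enlarging the EBS volume in AWS, tell LVM that the physical volume grew
    sudo pvresize /dev/xvdf
    # grow the logical volume into the free space and resize the ext4 filesystem in one go
    sudo lvextend --resizefs -l +100%FREE /dev/data/storage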
LVM is a saviour. Alternatively, I could just attach another EBS volume, extend the LVM onto it, and now it's spread over multiple disks but seen as only one volume, sweet!

The main benefit of using LVM comes when you have more than one hard drive. With LVM you can group the hard drives into one huge one, and you can also add more space to this group later by adding more hard drives.
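A minimal sketch of that multi-disk case (again with hypothetical names, assuming a second EBS volume shows up as /dev/xvdg):

    # initialise the new disk as a physical volume and add it to the existing group
    sudo pvcreate /dev/xvdg
    sudo vgextend data /dev/xvdg
    # the logical volume can now grow across both disks
    sudo lvextend --resizefs -L +200G /dev/data/storage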
With LVM you can simply work as if you had only one single huge hard drive, and yet LVM still supports a lot of expert features.

What kind of applications or content are you planning to host? If it is a personal server or something for a small organization, you can probably get by without using LVM. LVM is useful if you need partitions and the like spread across multiple disks. I doubt you would need it, given that you're asking here about it :)
Without answering your question directly (which the other posters already did), there is an easy answer about whether or not you need LVM: if you don't understand a feature in detail during installation, leave it at its default setting. The default configuration will be fine for most users, including me and probably you.
What is LVM and what is it used for?

But you might as well standardize on the more flexible LVM. – John Mahowald

Yes, the flexibility it potentially offers is worth a lot even if you don't actually need it.
I have often encountered systems with awkward, inflexible partitioning schemes that ran into problems which could have been easily fixed if they were just using LVM.
You say it is easy to do lvextend --resizefs. But where does the new space come from? Case 1: a new block device. I am unsure whether this is a good idea: if one block device has trouble, then the whole filesystem has trouble. Case 2: the block device was extended. In that case I don't need LVM.
After the block device was increased, I can increase the filesystem directly; no need for these three layers (see the ASCII art in the question).

Admittedly, some distros don't leave VGs with free space by default. But the default isn't for everyone, and thinking about capacity planning is still required. The layers exist and function, and they are optional: you can have each VG on only one PV. I've added the avoiding-partitioning and multiple-disk use cases. Until you can't.
I agree: there is no point in using LVM on top of VMware, since you can resize regular partitions online without downtime.

Yes, maybe the previous admin used LVM to handle future use cases, but looking at my current situation it does not provide any benefit.

In an extreme case, LVM can be used to replace a failing disk on a running system. – BillThor

I sort of remember that happening to me, but it's been some 15 to 20 years I'm thinking back to.
Which can be compensated for by formatting such filesystems with the "news" usage type, but it's easier to just not carve up the system so finely in the first place.

I use Ubuntu Server with an encrypted LVM, and I have been using one for the past five years or more.
I tend to use it everywhere unless I have some particular reason not to, which I rarely do. LVM on the laptop and desktop, because it's the path of least resistance for full-drive encryption; ZFS on the server.

Sunner wrote: "I tend to use it everywhere unless I have some particular reason not to, which I rarely do."

I think the decision is: will the disk hardware change on this host during its lifetime, and might I need to expand some volume?
And second, will I need to resize volumes without re-deploying the OS? So in general I use LVM sometimes. I think in this "cloud-native" world, LVM will play less and less of a role, since you are less likely to keep servers around for a long time without rebuilding them completely anyway.

LVM actively annoys the hell out of me in VMs, e.g. with openSUSE default installs. And I have all that added complexity to maintain afterwards, for the rest of the life of the VM.
It's a partition on the vdisk for default installations, not the whole vdisk, so it's still pretty problematic: resize the vdisk, grow the partition, resize the PV, expand the LV, then expand the filesystem. Especially if it's a glacially old openSUSE that doesn't support resizing a mounted filesystem, so now I have to do all of that offline with the vdisk mounted loopback AND all the rest of it.
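For comparison, on a guest that does support online resizing, that whole chain looks roughly like this (disk, partition, and volume names are hypothetical; growpart comes from cloud-guest-utils):

    # after growing the vdisk at the hypervisor, grow partition 2 on the disk
    sudo growpart /dev/sda 2
    # tell LVM that the physical volume on that partition got bigger
    sudo pvresize /dev/sda2
    # grow the logical volume and the filesystem on it
    sudo lvextend --resizefs -l +100%FREE /dev/system/root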
On non-trivial VM servers I do one vdisk per partition, so you can resize the disk at the hypervisor level and then the filesystem at the Linux guest level.

I use LVM everywhere whenever possible, but the way it's typically deployed by Linux installers is sub-optimal. On my workstation I do a lot with virtualization, so I use the free space in my volume group to create logical volumes to use as raw block devices for my virtual machines, which I have found generally perform better for VM guests than file-based storage.
I can then use LVM snapshots with filesystems and operating systems that don't natively support them, like Windows and NTFS. With Windows in particular, I can create a snapshot, do a dangerous system update, and have a restore point I can use if things go haywire with the VM.
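A rough sketch of that, with hypothetical names (vg0 for the volume group, win10 for the VM's volume; the snapshot size only needs to hold the blocks that change while it exists):

    # take a snapshot of the VM's logical volume before the risky update
    sudo lvcreate --snapshot --size 20G --name win10-pre-update /dev/vg0/win10
    # if the update goes fine, simply drop the snapshot
    sudo lvremove /dev/vg0/win10-pre-update
    # if it goes haywire, roll the volume back instead (with the VM shut down)
    sudo lvconvert --merge /dev/vg0/win10-pre-update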
I can also create a snapshot to use as the source for a block-by-block filesystem backup that is guaranteed to be in a consistent, sane state. I have used free space in the volume group to expand the available storage for my Windows gaming VM on several occasions, which is a really nice benefit of using LVM for virtual machine storage: it combines the flexibility of growable file-based storage, and whatever snapshots libvirt supports with the file-based formats, with better overall performance.
And a side note about expanding volumes: with SSD storage becoming the norm, it's less important for optimal performance that a logical volume's extents be contiguously allocated on the physical storage. With rotating storage, contiguous allocation is done to try to reduce the number of seeks, which aren't a concern with SSDs. Also, LVM is SSD-aware, so it can be configured to issue discards to physical SSDs when deleting logical volumes. LVM really doesn't introduce any additional complexity, as long as you take the time to learn the few simple commands and concepts used to administer it.
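That discard behaviour is off by default and is controlled from /etc/lvm/lvm.conf; a minimal sketch of the relevant setting:

    devices {
        # send TRIM/discard to the underlying SSD when a logical volume is removed or reduced
        issue_discards = 1
    }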
I guess contiguous partitions would be easier to recover data from than LVM volumes, especially with non-contiguous physical extents.

Not in any version I've ever seen, although I'm pretty sure we have always picked the "set up LVM and use the entire space" option.
Edit, yeah this is from

I frequently see 40 GiB root partitions running out of inodes due to kernel installs, if the partition wasn't formatted with the "news" usage type.

For more verbose, human-readable output, the pvdisplay command is usually a better option.
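For example:

    sudo pvdisplay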
The pvdisplay command is often the easiest way to get detailed information about physical volumes.
To discover the logical extents that have been mapped to each volume, pass the -m option to pvdisplay. This can be very useful when trying to determine which data is held on which physical disk for management purposes.
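For example:

    sudo pvdisplay -m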
The vgscan command can be used to scan the system for available volume groups. It also rebuilds the cache file when necessary, and it is a good command to use when you are importing a volume group into a new system. The command does not output very much information, but it should be able to find every available volume group on the system.
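For example:

    sudo vgscan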
To display more information, the vgs and vgdisplay commands are available. Like its physical volume counterpart, the vgs command is versatile and can display a large amount of information in a variety of formats. Because its output can be manipulated easily, it is frequently used when scripting or automation is needed. For example, some helpful output modifications are to show the physical devices and the logical volume path, as shown below. For more verbose, human-readable output, the vgdisplay command is usually the best choice.
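For example, one commonly used field combination, plus the plain verbose report (the extra field names devices and lv_path are standard LVM report fields):

    # one line per volume group, with the backing devices and logical volume paths
    sudo vgs -o +devices,lv_path
    # verbose, human-readable report for each volume group
    sudo vgdisplay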
Adding the -v flag also provides information about the physical volumes the volume group is built upon and the logical volumes that were created using it. The vgdisplay command is useful because it can tie together information about many different elements of the LVM stack.

As with the other LVM components, the lvscan command scans the system and outputs minimal information about the logical volumes it finds.
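For example:

    # include the physical and logical volumes associated with each group
    sudo vgdisplay -v
    # quick, minimal scan for logical volumes
    sudo lvscan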
For more complete information, the lvs command is flexible, powerful, and easy to use in scripts. To find out about the number of stripes and the logical volume type, use the --segments option. When the -m flag is added to lvdisplay, it will also display information about how the logical volume is broken down and distributed. This information is useful if you need to remove an underlying device and wish to move the data off to specific locations.
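A few typical invocations of those commands:

    # compact, script-friendly listing of logical volumes
    sudo lvs
    # per-segment details such as segment type and stripe count
    sudo lvs --segments
    # show how each logical volume is mapped onto physical devices
    sudo lvdisplay -m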
This section will discuss how to create and expand physical volumes, volume groups, and logical volumes. In order to use storage devices with LVM, they must first be marked as physical volumes.
This specifies that LVM can use the device within a volume group. First, use the lvmdiskscan command to find all block devices that LVM can see and use.
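For example:

    sudo lvmdiskscan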
Warning: Make sure that you double-check that the devices you intend to use with LVM do not have any important data already written to them. Using these devices within LVM will overwrite the current contents. If you already have important data on your server, make backups before proceeding.
To mark the storage devices as LVM physical volumes, use pvcreate; you can pass in multiple devices at once. To create a new volume group from LVM physical volumes, use the vgcreate command, providing a volume group name followed by at least one LVM physical volume. Both steps are sketched below; the vgcreate example creates the volume group with a single initial physical volume. Usually you will only need a single volume group per server: all LVM-managed storage can be added to that pool, and logical volumes can then be allocated from it.
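A minimal sketch of both steps (the device names /dev/sda and /dev/sdb and the group name LVMVolGroup are hypothetical; substitute your own devices):

    # mark two disks as LVM physical volumes
    sudo pvcreate /dev/sda /dev/sdb
    # create a volume group from a single initial physical volume
    sudo vgcreate LVMVolGroup /dev/sda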
One reason you may wish to have more than one volume group is if you feel you need to use different extent sizes for different volumes. Usually you will not have to set the extent size (the default size of 4M is adequate for most uses), but if you need to, you can do so upon volume group creation by passing the -s option.
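For example, a sketch using a hypothetical 8M extent size and the same hypothetical names as above:

    # create a volume group with 8M extents instead of the 4M default
    sudo vgcreate -s 8M LVMVolGroup /dev/sda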