
Re: Logical volume question

Posted: 2021/01/14 10:35:19
by MartinR
Trevor, if I remember correctly, XFS couldn't be "fixed" due to its internal design. I'm thinking back to a course I did with SGI about 20 years ago, and I'm fairly certain it comes down to the internal allocation groups. Growing the FS is done by creating new allocation group(s); shrinking would require removing AGs and their associated structures, not just moving blocks and reducing the length. Of course this was under IRIX, not Linux, so it may all have changed in the intervening couple of decades.
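For what it's worth, the allocation groups are still visible on Linux: xfs_info reports an agcount, and if the grow-by-adding-AGs scheme still holds, it should tick upward after a grow. A rough sketch (LV name and mount point hypothetical):

  xfs_info /home | head -1          # meta-data=... agcount=4, agsize=...
  lvextend -L +10G /dev/cl/home     # add space to the logical volume
  xfs_growfs /home                  # grow the filesystem into it
  xfs_info /home | head -1          # agcount should now be larger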

Re: Logical volume question

Posted: 2021/01/15 19:50:17
by mjz
What is the value of XFS if you can't resize it down as well as up? I don't understand why XFS is the default filesystem for logical volumes in CentOS 8. If a disk needs to be replaced, it's a pain (as I am going through right now).

I have to move everything off the entire logical volume, delete it, and recreate it sized to fit within one disk (which I am doing now).
Once that's done, I can install my new drive and set it up - but it won't be XFS.
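With ext4, as I understand it, the shrink side would have been a couple of commands (a sketch only - target size made up, names as on my system):

  umount /home
  lvreduce -r -L 1500G /dev/cl/home   # -r runs e2fsck/resize2fs to shrink the FS along with the LV
  mount /home

XFS has no equivalent, hence the full copy-off and rebuild.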

Re: Logical volume question

Posted: 2021/01/15 21:35:45
by jlehtone
mjz wrote:
2021/01/15 19:50:17
What is the value of xfs if you can't resize it down as well as up?
The "up" is apparently more interesting than "down" in enterprise: https://access.redhat.com/solutions/1532

Re: Logical volume question

Posted: 2021/01/15 21:41:03
by mjz
Interesting read on the size question ...
But what if a disk needs to be replaced? It can't be done without removing the filesystem and rebuilding the whole thing (unless I am missing something).

All I wanted to do was move the extents off the drive, resize the volume to exclude that drive's space, then remove the drive, replace it (with a bigger one), and resize the volume back up.
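At the LVM level I gather the moves themselves are simple enough (sketch):

  pvmove /dev/sdb1         # migrate allocated extents off the outgoing PV
  vgreduce cl /dev/sdb1    # drop the PV from the volume group
  pvremove /dev/sdb1       # clear the LVM label; the disk can then be swapped
  # ...install new disk, then pvcreate / vgextend / lvextend -r to grow back...

The catch is that pvmove needs free extents elsewhere in the VG to move onto, and freeing them means shrinking the LV first - which is exactly what XFS won't let me do.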

Re: Logical volume question

Posted: 2021/01/15 22:04:21
by MartinR
On big systems the spindles ("disks" is ambiguous due to its use at hardware, RAID, logical and filesystem levels) are arranged in shelves of maybe a dozen or more. Several shelves are then linked up to at least two co-operating hardware RAID controllers. The RAIDsets are then collected together by a layer such as cluster-LVM or GPFS before being presented to the system as the appropriate logical volumes.

So, the answer to your question is that if a spindle fails the hot spare is brought into the RAIDset by the hardware, and the operations staff arrange for a replacement. When it arrives they pull out the old disk and put in the new one. Simples! :D

Re: Logical volume question

Posted: 2021/01/16 00:19:42
by mjz
Thanks Martin, I'm always learning from you all.
It makes sense then for big systems. But for small dual-drive servers (non-RAID), XFS does not make sense. I should have overridden the default and set the disks up as ext4. I'll follow Trevor's advice, clean things up, and start again so I can replace disks more easily (non-RAID). Data is fully backed up - so no issue there, just a lot of hours copying: 4 TB at one gigabit/sec.
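(Back of the envelope: one gigabit/sec is at best ~125 MB/s, and 4 TB is roughly 4,000,000 MB, so 4,000,000 / 125 ≈ 32,000 seconds - nine hours or so, before overhead.)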

Re: Logical volume question

Posted: 2021/01/18 16:54:47
by mjz
One last question - (hopefully)

Below is my current configuration:

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 1.8T 0 part
├─cl-root 253:0 0 50G 0 lvm /
├─cl-swap 253:1 0 15.8G 0 lvm [SWAP]
└─cl-home 253:2 0 3.6T 0 lvm /home
sdb 8:16 0 1.8T 0 disk
└─sdb1 8:17 0 1.8T 0 part
└─cl-home 253:2 0 3.6T 0 lvm /home

--- Physical volume ---
PV Name /dev/sdb1
VG Name cl
PV Size <1.82 TiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 476931
Free PE 0
Allocated PE 476931
PV UUID FIRfBq-wFl4-X6Zn-AcKl-iMqm-faGZ-FJd3LJ

/etc/fstab contents:

#
# /etc/fstab
# Created by anaconda on Sun Feb 23 17:13:36 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root / xfs defaults 0 0
UUID=875d6fec-978b-4450-9fd6-66b96dd464fc /boot ext4 defaults 1 2
/dev/mapper/cl-home /home xfs defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
UUID=A228376928373B9B /mnt/data auto nosuid,nodev,nofail,x-gvfs-show 0 0

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

What I want to do is remove physical disk sdb, which contains part of cl-home (mounted as /home). I have already emptied /home.
I want to leave physical disk sda (which contains the CentOS 8 OS) alone.

I noticed the VG name is "cl". sda has cl-root, cl-swap and cl-home.

What command sequence do I use to remove only /home and then remove sdb from my server?
My fear is that if I vgreduce /dev/sdb1 it will affect my root partition (the other "cl-..." volumes?). And which line(s) do I delete from fstab before rebooting
(/dev/mapper/cl-home?)?
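Here is the sequence I think is right - could someone confirm before I run it?

  umount /home
  lvremove /dev/cl/home    # removes only the home LV; cl-root and cl-swap live on sda2
  vgreduce cl /dev/sdb1    # sdb1 should then hold no extents, so it can leave VG "cl"
  pvremove /dev/sdb1       # wipe the LVM label so the disk is clean to pull
  # then delete the /dev/mapper/cl-home line from /etc/fstab before rebooting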

Thanks for your help