
On April 7, 2020 10:45:18 AM GMT+03:00, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hi, I have configured a single-host HCI environment through the GUI wizard in 4.3.9. The initial setup has this layout of disks, as seen by the operating system:

/dev/sda     --> ovirt-node-ng OS
/dev/nvme0n1 --> Gluster, engine and data volumes
/dev/nvme1n1 --> Gluster, vmstore volume
So far so good and all is OK. I notice that, even with single-path internal disks, at the end oVirt configures the Gluster disks as multipath devices, with the LVM2 PV structure on top of the multipath devices. Is this for "code optimization" at a low level, or what is the rationale, given that with Gluster you normally use local disks and therefore a single path? The multipath structure generated:
[root@ovirt ~]# multipath -l
nvme.8086-50484b53373530353031325233373541474e-494e54454c205353 dm-5 NVME,INTEL SSDPED1K375GA
size=349G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 0:0:1:0 nvme0n1 259:0 active undef running
eui.01000000010000005cd2e4b5e7db4d51 dm-6 NVME,INTEL SSDPEDKX040T7
size=932G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 2:0:1:0 nvme1n1 259:2 active undef running
[root@ovirt ~]#
Anyway, on top of the multipath devices:

On /dev/nvme0n1: volume group gluster_vg_nvme0n1 with logical volumes gluster_lv_data and gluster_lv_engine
On /dev/nvme1n1: volume group gluster_vg_nvme1n1 with logical volume gluster_lv_vmstore
The problem arises when I add another NVMe disk: occupying a PCI slot, it apparently always gets enumerated before the previous /dev/nvme1n1 disk and so takes over its name.
After booting the node:
old nvme0n1 --> name unchanged
old nvme1n1 --> becomes nvme2n1
new disk    --> gets the name nvme1n1
From a functional point of view I have no problems, apart from the LVM warnings I show below, and also because the xfs entries in fstab use UUIDs:
UUID=fa5dd3cb-aeef-470e-b982-432ac896d87a /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=43bed7de-66b1-491d-8055-5b4ef9b0482f /gluster_bricks/data xfs inode64,noatime,nodiratime 0 0
UUID=b81a491c-0a4c-4c11-89d8-9db7fe82888e /gluster_bricks/vmstore xfs inode64,noatime,nodiratime 0 0
The lvs command reports:

[root@ovirt ~]# lvs
  WARNING: Not using device /dev/nvme0n1 for PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp.
  WARNING: Not using device /dev/nvme2n1 for PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl.
  WARNING: PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp prefers device /dev/mapper/nvme.8086-50484b53373530353031325233373541474e-494e54454c20535344504544314b3337354741-00000001 because device is used by LV.
  WARNING: PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl prefers device /dev/mapper/eui.01000000010000005cd2e4e359284f51 because device is used by LV.
  LV VG Attr LSize Pool Origin ...
Or, for the old nvme1n1 disk, now the nvme2n1 multipath device:
[root@ovirt ~]# pvdisplay /dev/mapper/eui.01000000010000005cd2e4e359284f51
  WARNING: Not using device /dev/nvme0n1 for PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp.
  WARNING: Not using device /dev/nvme2n1 for PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl.
  WARNING: PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp prefers device /dev/mapper/nvme.8086-50484b53373530353031325233373541474e-494e54454c20535344504544314b3337354741-00000001 because device is used by LV.
  WARNING: PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl prefers device /dev/mapper/eui.01000000010000005cd2e4e359284f51 because device is used by LV.
  --- Physical volume ---
  PV Name               /dev/mapper/eui.01000000010000005cd2e4e359284f51
  VG Name               gluster_vg_nvme1n1
  PV Size               931.51 GiB / not usable 1.71 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238467
  Free PE               0
  Allocated PE          238467
  PV UUID               O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl
[root@ovirt ~]#
I'm able to create a PV on top of the new multipath device detected by the system (note the nvme1n1 name of the underlying disk):
eui.01000000010000005cd2e4b5e7db4d51 dm-6 NVME,INTEL SSDPEDKX040T7
size=3.6T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  `- 1:0:1:0 nvme1n1 259:1 active undef running
[root@ovirt ~]# pvcreate --dataalignment 256K /dev/mapper/eui.01000000010000005cd2e4b5e7db4d51
  WARNING: Not using device /dev/nvme0n1 for PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp.
  WARNING: Not using device /dev/nvme2n1 for PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl.
  WARNING: PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp prefers device /dev/mapper/nvme.8086-50484b53373530353031325233373541474e-494e54454c20535344504544314b3337354741-00000001 because device is used by LV.
  WARNING: PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl prefers device /dev/mapper/eui.01000000010000005cd2e4e359284f51 because device is used by LV.
  Physical volume "/dev/mapper/eui.01000000010000005cd2e4b5e7db4d51" successfully created.
[root@ovirt ~]#
But then I'm unable to create a VG on top of it:
[root@ovirt ~]# vgcreate gluster_vg_4t /dev/mapper/eui.01000000010000005cd2e4b5e7db4d51
  WARNING: Not using device /dev/nvme0n1 for PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp.
  WARNING: Not using device /dev/nvme1n1 for PV 56ON99-hFFP-cGpZ-g4MX-GfjW-jXeE-fKZVG9.
  WARNING: Not using device /dev/nvme2n1 for PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl.
  WARNING: PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp prefers device /dev/mapper/nvme.8086-50484b53373530353031325233373541474e-494e54454c20535344504544314b3337354741-00000001 because device is used by LV.
  WARNING: PV 56ON99-hFFP-cGpZ-g4MX-GfjW-jXeE-fKZVG9 prefers device /dev/mapper/eui.01000000010000005cd2e4b5e7db4d51 because device is in dm subsystem.
  WARNING: PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl prefers device /dev/mapper/eui.01000000010000005cd2e4e359284f51 because device is used by LV.
  WARNING: Not using device /dev/nvme0n1 for PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp.
  WARNING: Not using device /dev/nvme1n1 for PV 56ON99-hFFP-cGpZ-g4MX-GfjW-jXeE-fKZVG9.
  WARNING: Not using device /dev/nvme2n1 for PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl.
  WARNING: PV eYfuXw-yaPd-cMUE-0dnA-tVON-uZ9g-5x4BDp prefers device /dev/mapper/nvme.8086-50484b53373530353031325233373541474e-494e54454c20535344504544314b3337354741-00000001 because of previous preference.
  WARNING: PV 56ON99-hFFP-cGpZ-g4MX-GfjW-jXeE-fKZVG9 prefers device /dev/mapper/eui.01000000010000005cd2e4b5e7db4d51 because of previous preference.
  WARNING: PV O43LFq-46Gc-RRgS-Sk1F-5mFZ-Qw4n-oxXgJl prefers device /dev/mapper/eui.01000000010000005cd2e4e359284f51 because of previous preference.
  Cannot use device /dev/mapper/eui.01000000010000005cd2e4b5e7db4d51 with duplicates.
[root@ovirt ~]#
The same happens with the "-f" option.
I suspect I can solve the problem by filtering out the /dev/nvme* devices in lvm.conf, but I'm not sure. The OS disk is seen as sda, so it should not be affected by this. Something like:
filter = [ "r|/dev/nvme|", "a|.*/|" ]
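For reference, a sketch of how that stanza might look in /etc/lvm/lvm.conf (the global_filter line is an assumption, not something from this thread; check your distribution's defaults before copying):

```
# /etc/lvm/lvm.conf -- illustrative sketch, not a tested drop-in
devices {
    # Reject the bare /dev/nvme* paths so LVM scans only the
    # multipath devices stacked on top of them; accept the rest.
    filter = [ "r|/dev/nvme|", "a|.*/|" ]

    # Assumption: on hosts where udev/lvmetad also scan devices,
    # the same pattern may be needed in global_filter as well.
    global_filter = [ "r|/dev/nvme|", "a|.*/|" ]
}
```

After editing, `lvmconfig devices/filter` shows the filter value LVM actually picked up.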
I'm also not sure whether I have to rebuild the initrd in this case and, if so, what the exact sequence of commands to execute would be.
Any suggestions?
Thanks in advance, Gianluca
The simplest answer would be that blacklisting everything in multipath.conf will solve your problems. In reality it is a little bit more complicated. You have some options in comparison with other OSes (Windows) :)

1. The /dev/nvme* names are not persistent, so forget about those. You can create udev rules for your NVMes in order to guarantee their names. For example, you can use the following to find the serial:

/lib/udev/scsi_id -g -u -x -d /dev/nvme0n1

Then you can match on:

ATTRS{serial}=="some string"

Note: '=' is an assignment, while '==' tests for equality.

2. You can tell LVM to use preferred names like /dev/disk/by-id/dm-uuid-mpath-<WWID>, which is the persistent name of the mpath device. If that doesn't work, you can just filter out everything with nvme, like this: 'r|/dev/nvme|'

I would go with udev rules, but I've used LVM preferred names too.

If you change the LVM config, make a backup of your working initramfs and then run:

dracut -f

If it boots without issues after a reboot, you can rebuild all images via:

dracut -f --regenerate-all

Best Regards,
Strahil Nikolov
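For illustration, the udev-rule approach described above might look like the following sketch. The file name, symlink name, and serial value are placeholders, not values taken from this thread:

```
# /etc/udev/rules.d/99-gluster-nvme.rules -- illustrative sketch only.
# Replace SERIAL_FROM_SCSI_ID with the value printed by:
#   /lib/udev/scsi_id -g -u -x -d /dev/nvme0n1
# Note '==' (match) versus '=' (assignment), as mentioned above.
KERNEL=="nvme*n1", ATTRS{serial}=="SERIAL_FROM_SCSI_ID", SYMLINK+="gluster-nvme-fast"
```

After adding a rule, `udevadm control --reload` followed by `udevadm trigger` should apply it without a reboot; the stable /dev/gluster-nvme-fast link then survives PCI enumeration changes.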