On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer <abawer@redhat.com> wrote:
From the info it seems that startup panics because the gluster bricks cannot be mounted.


Yes, it is so.
This is a testbed NUC I use for testing.
It has 2 disks: the one named sdb is where oVirt Node has been installed, while the one named sda is where I configured gluster through the wizard, creating the 3 volumes for engine, vm and data.

The filter that you do have in the 4.4.2 screenshot should correspond to your root PV;
you can confirm that by running the following (replace the pv-uuid with the one from your filter):

# udevadm info /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
P: /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
N: sda2
S: disk/by-id/ata-QEMU_HARDDISK_QM00003-part2
S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ

In this case sda2 is the partition of the root-lv shown by lsblk.
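A quick way to cross-check this for all devices at once (assuming the standard lvm2 tools are available on the node) is to list each PV together with its UUID and compare the UUIDs against the entries in your filter:
# pvs -o pv_name,pv_uuid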

Yes, it is so. Of course it works only in 4.4.0; in 4.4.2 no special file of type /dev/disk/by-id/... is created.
What does "udevadm info" show for /dev/sdb3 on 4.4.2?
 
See here for the udevadm command on 4.4.0, which shows that sdb3 is the partition corresponding to the PV of the root disk:



Can you give the output of lsblk on your node?

Here is lsblk as seen by 4.4.0, with the gluster volumes on sda:

And here is lsblk as seen from 4.4.2, with an empty sda:
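As a side note, if it helps the comparison, an lsblk invocation that also shows filesystem signatures and mount points on both boots could look like this (column names assume a reasonably recent util-linux):
# lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT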


Can you check that the same filter is in initramfs?
# lsinitrd -f /etc/lvm/lvm.conf | grep filter

Here is the command from 4.4.0, which shows no filter:

And here from 4.4.2 emergency mode, where I have to use the path /boot/ovirt-node-ng-4.4.2-0..../initramfs-....
because there is no initrd file in /boot (in the screenshot you also see the output of "ll /boot"):
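For reference, the same check against a layered node image would look something like this (both version strings below are placeholders and will differ on the actual host):
# lsinitrd -f /etc/lvm/lvm.conf /boot/ovirt-node-ng-4.4.2-0.<build>/initramfs-<kernel-version>.img | grep filter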



We have the following tool on the hosts:
# vdsm-tool config-lvm-filter -y
It only sets the filter for local LVM devices; it is run as part of deployment and upgrade when done from
the engine.
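As a side note, running the tool without -y should only analyze the setup and show the filter it would configure, asking for confirmation before writing anything (behavior assumed from recent vdsm versions):
# vdsm-tool config-lvm-filter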

If you have other volumes which have to be mounted as part of your startup,
then you should add their UUIDs to the filter as well, along the lines of the sketch below.
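For example, a sketch of what the filter line in /etc/lvm/lvm.conf could look like once a gluster brick PV is added (both UUIDs below are placeholders, not real ones):

filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-<root-pv-uuid>$|", "a|^/dev/disk/by-id/lvm-pv-uuid-<gluster-pv-uuid>$|", "r|.*|"]

Each PV gets one "a|...|" accept entry, built from the pv_uuid shown by pvs prefixed with lvm-pv-uuid-, and the final "r|.*|" rejects everything else.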

I didn't do anything special in 4.4.0: I installed the node on the intended disk, which was seen as sdb, and then through the single node HCI wizard I configured the gluster volumes on sda.

Any suggestion on what to do on the 4.4.2 initrd, or on running the correct dracut command from 4.4.0 to fix the initramfs of 4.4.2?
The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see what needs to be fixed in this case.
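For completeness, if a corrected /etc/lvm/lvm.conf ever did need to be baked into an initramfs, the usual approach is something along these lines (a sketch; the image path and kernel version are assumptions, and on a layered node image the initramfs lives under the /boot/ovirt-node-ng-<version> directory instead):
# dracut -f /boot/initramfs-$(uname -r).img $(uname -r)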


BTW: could I, in the meantime and if necessary, also boot from 4.4.0 and let it run with the engine at 4.4.2?
It might work, but it's probably not well tested.

For the gluster bricks being filtered out in 4.4.2, this seems like [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805

 

Thanks,
Gianluca