Sorry, I see that there was an error in the lsinitrd command in 4.4.2, with the "-f" option in the wrong position.
Here is the screenshot, which anyway shows no filter active:
https://drive.google.com/file/d/19VmgvsHU2DhJCRzCbO9K_Xyr70x4BqXX/view?usp=sharing
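For the record, the corrected invocation should be along these lines (assuming the usual dracut lsinitrd syntax, with the -f option before the image path; the image path is abbreviated as in the screenshot):

# lsinitrd -f /etc/lvm/lvm.conf /boot/ovirt-node-ng-4.4.2-0..../initramfs-.... | grep filter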

Gianluca


On Sat, Oct 3, 2020 at 6:26 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer <abawer@redhat.com> wrote:
From the info it seems that startup panics because gluster bricks cannot be mounted.


Yes, it is so.
This is a testbed NUC I use for testing.
It has 2 disks: the one named sdb is where oVirt Node has been installed, and the one named sda is where I configured gluster through the wizard, creating the 3 volumes for engine, vm and data.

The filter that you do have in the 4.4.2 screenshot should correspond to your root PV;
you can confirm that by running the following (replace the PV UUID with the one from your filter):

# udevadm info /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
P: /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
N: sda2
S: disk/by-id/ata-QEMU_HARDDISK_QM00003-part2
S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ

In this case sda2 is the partition of the root-lv shown by lsblk.
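For reference, the mapping can also be checked in the opposite direction with plain LVM tooling (nothing oVirt-specific here):

# pvs -o pv_name,pv_uuid

which lists every PV device together with its UUID.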

Yes, it is so. Of course it works only in 4.4.0; in 4.4.2 no special file of type /dev/disk/by-id/.... gets created.
See here the udevadm command on 4.4.0, showing sdb3, which is the partition corresponding to the PV of the root disk:
https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=sharing



Can you give the output of lsblk on your node?

Here is lsblk as seen by 4.4.0, with gluster volumes on sda:
https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=sharing

And here is lsblk as seen from 4.4.2, with an empty sda:
https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=sharing


Can you check that the same filter is in initramfs?
# lsinitrd -f /etc/lvm/lvm.conf | grep filter
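To see whether the two copies diverge, the filter on the running system can be checked for comparison with a plain grep:

# grep filter /etc/lvm/lvm.conf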

Here is the command output from 4.4.0, which shows no filter:
https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=sharing

And here from 4.4.2 emergency mode, where I have to use the path /boot/ovirt-node-ng-4.4.2-0..../initramfs-....
because there is no initrd file directly in /boot (in the screenshot you can also see the output of "ll /boot"):
https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=sharing



We have the following tool on the hosts:
# vdsm-tool config-lvm-filter -y
It only sets the filter for the local LVM devices; this is run as part of deployment and upgrade when done from
the engine.

If you have other volumes which have to be mounted as part of your startup,
then you should add their UUIDs to the filter as well.
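A minimal sketch of what such a filter line in /etc/lvm/lvm.conf could look like, assuming the allow/reject style that vdsm-tool generates; "lvm-pv-uuid-<gluster-pv-uuid>" is a placeholder for the UUID of a gluster PV:

filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ$|", "a|^/dev/disk/by-id/lvm-pv-uuid-<gluster-pv-uuid>$|", "r|.*|"]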

I didn't do anything special in 4.4.0: I installed the node on the intended disk, which was seen as sdb, and then through the single node HCI wizard I configured the gluster volumes on sda.

Any suggestion on what to do with the 4.4.2 initrd, or on the correct dracut command to run from 4.4.0 to fix the initramfs of 4.4.2?
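Something like the following, perhaps? (Only a sketch: the image path is abbreviated as above, <kernel-version> is a placeholder for the 4.4.2 kernel, and I assume it would have to be run with the 4.4.2 lvm.conf in place.)

# dracut --force /boot/ovirt-node-ng-4.4.2-0..../initramfs-.... <kernel-version>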

BTW: could I, in the meantime and if necessary, also boot from 4.4.0 and run it with the engine at 4.4.2?

Thanks,
Gianluca