oVirt Node 4.4.2 is now generally available

The oVirt project is pleased to announce the general availability of oVirt Node 4.4.2, as of September 25th, 2020. This release completes the oVirt 4.4.2 release published on September 17th.

Important notes before you install / upgrade

Please note that oVirt 4.4 only supports clusters and data centers with compatibility version 4.2 and above. If clusters or data centers are running with an older compatibility version, you need to upgrade them to at least 4.2 (4.3 is recommended).

Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7 are no longer supported. For example, the megaraid_sas driver is removed. If you use Enterprise Linux 8 hosts you can try to provide the necessary drivers for the deprecated hardware using the DUD method (see the users' mailing list thread on this at https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXEFJN... ).

How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1

Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> - "Host enter emergency mode after upgrading to latest build" - if your hosts have their root file system on a multipath device, they may enter emergency mode after upgrading from 4.4.1 to 4.4.2. In order to prevent this, be sure to upgrade oVirt Engine first, then on your hosts:

1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
6. Reboot.

Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download page <https://ovirt.org/download/>.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.

What's new in oVirt Node 4.4.2 Release?

oVirt Node has been updated, including:
- oVirt 4.4.2: http://www.ovirt.org/release/4.4.2/
- Ansible 2.9.13: https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v2.9...
- GlusterFS 7.7: https://docs.gluster.org/en/latest/release-notes/7.7/
- Advanced Virtualization 8.2.1

See the release notes [1] for installation instructions and a list of new features and bugs fixed.

Additional resources:
- Read more about the oVirt 4.4.2 release highlights: http://www.ovirt.org/release/4.4.2/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.4.2/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/

--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
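For the host-side part, the checks and commands involved look roughly like this (a minimal sketch based on the steps above; the exact filter content differs per host, and the dracut step applies only to non-Node EL8 hosts):

Check whether a filter is currently set, and whether the same filter is baked into the initramfs:
# grep '^filter = ' /etc/lvm/lvm.conf
# lsinitrd -f /etc/lvm/lvm.conf | grep filter

After upgrading, let vdsm propose and confirm the filter for the local devices:
# vdsm-tool config-lvm-filter

Only on plain EL8 hosts (not oVirt Node), rebuild the initramfs so it picks up the multipath and filter configuration:
# dracut --force --add multipath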

On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
oVirt Node 4.4.2 is now generally available
The oVirt project is pleased to announce the general availability of oVirt Node 4.4.2 , as of September 25th, 2020.
This release completes the oVirt 4.4.2 release published on September 17th
Thanks for the news!

How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have to follow the same steps as if I were in 4.4.1, or what? I would like to avoid going through 4.4.1 if possible.
Thanks,
Gianluca

On Fri, Sep 25, 2020 at 3:32 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have to follow the same steps as if I were in 4.4.1 or what? I would like to avoid going through 4.4.1 if possible.
I don't think we had someone testing 4.4.0 to 4.4.2, but the above procedure should work for that case as well. The problematic filter in /etc/lvm/lvm.conf looks like:

# grep '^filter = ' /etc/lvm/lvm.conf
filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]

On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
I don't think we had someone testing 4.4.0 to 4.4.2 but above procedure should work for the same case. The problematic filter in /etc/lvm/lvm.conf looks like:
# grep '^filter = ' /etc/lvm/lvm.conf
filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
OK, so I tried on my single host HCI, installed with ovirt-node-ng 4.4.0 and the gluster wizard and never updated until now. I updated the self-hosted engine to 4.4.2 without problems.
My host doesn't have any filter or global_filter set up in lvm.conf in 4.4.0.
So I update it:

[root@ovirt01 vdsm]# yum update
Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM CEST.
Dependencies resolved.
====================================================================================================
 Package                        Architecture   Version        Repository   Size
====================================================================================================
Installing:
 ovirt-node-ng-image-update     noarch         4.4.2-1.el8    ovirt-4.4    782 M
     replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8

Transaction Summary
====================================================================================================
Install  1 Package

Total download size: 782 M
Is this ok [y/N]: y
Downloading Packages:
ovirt-node-ng-image-update-4.4 27% [=====     ] 6.0 MB/s | 145 MB 01:45 ETA
----------------------------------------------------------------------------------------------------
Total                                           5.3 MB/s | 782 MB     02:28
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                           1/1
  Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch             1/2
  Installing       : ovirt-node-ng-image-update-4.4.2-1.el8.noarch             1/2
  Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch             1/2
  Obsoleting       : ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch 2/2
  Verifying        : ovirt-node-ng-image-update-4.4.2-1.el8.noarch             1/2
  Verifying        : ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch 2/2
Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch.rpm

Installed:
  ovirt-node-ng-image-update-4.4.2-1.el8.noarch

Complete!
[root@ovirt01 vdsm]# sync
[root@ovirt01 vdsm]#

I reboot and I'm offered 4.4.2 by default, with 4.4.0 available too. But the default 4.4.2 goes into emergency mode, and if I log in I see that it indeed has a filter inside lvm.conf. See the filter that the update has put in place:
https://drive.google.com/file/d/1LNZ_9c6HJnL3dbuwd5PMjb7wIuWAPDrg/view?usp=s...

During boot I see this getting blocked:
A start job is running for dev-disk-by\x2d-id .......
the same for apparently 3 disks (I think the gluster volumes...):
https://drive.google.com/file/d/1Yg2g5FyugfUO54E0y2JfLiabbIYXr_7f/view?usp=s...

And at emergency mode:
https://drive.google.com/file/d/1WNB0e54tw5AUTzaG_HRvrltN1-Zh_LTn/view?usp=s...

If I log in and then exit, I get:
Reloading system manager configuration
Starting default target
and then it stays stuck there. After some minutes I get confirmation that I am in emergency mode, give the password again, and then I can only reboot or see the journal log.
Contents of the output of "journalctl -xb" here:
https://drive.google.com/file/d/1AB1heOaNyWlVMF5bQ5C67sRKMw-rLkvh/view?usp=s...

I verified that I can safely boot in 4.4.0 in case.
What to do now?
Thanks,
Gianluca

From the info it seems that startup panics because gluster bricks cannot be mounted.

The filter that you do have in the 4.4.2 screenshot should correspond to your root pv. You can confirm that by doing (replace the pv-uuid with the one from your filter):

# udevadm info /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
P: /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
N: sda2
S: disk/by-id/ata-QEMU_HARDDISK_QM00003-part2
S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ

In this case sda2 is the partition of the root-lv shown by lsblk.

Can you give the output of lsblk on your node?

Can you check that the same filter is in the initramfs?
# lsinitrd -f /etc/lvm/lvm.conf | grep filter

We have the following tool on the hosts:
# vdsm-tool config-lvm-filter -y
It only sets the filter for local lvm devices; this is run as part of deployment and upgrade when done from the engine.
If you have other volumes which have to be mounted as part of your startup then you should add their uuids to the filter as well.
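To make the mapping above concrete, a small sketch of how one might list the PVs behind the mounted logical volumes (the uuid below is a placeholder; take the real one from your filter or from pvs):

List which devices and PVs the mounted LVs sit on:
# lsblk -o NAME,TYPE,MOUNTPOINT
# pvs -o pv_name,pv_uuid,vg_name

Resolve a filter entry back to its partition:
# udevadm info /dev/disk/by-id/lvm-pv-uuid-<uuid-from-your-filter>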

On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer <abawer@redhat.com> wrote:
From the info it seems that startup panics because gluster bricks cannot be mounted.
Yes, it is so. This is a testbed NUC I use for testing. It has 2 disks: the one named sdb is where oVirt Node has been installed, and the one named sda is where I configured gluster through the wizard, configuring the 3 volumes for engine, vm, data.

The filter that you do have in the 4.4.2 screenshot should correspond to
your root pv, you can confirm that by doing (replace the pv-uuid with the one from your filter):
# udevadm info /dev/disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
P: /devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sda/sda2
N: sda2
S: disk/by-id/ata-QEMU_HARDDISK_QM00003-part2
S: disk/by-id/lvm-pv-uuid-DXgufc-7riC-TqhU-f8yH-EfZt-ivvH-TVcnEQ
In this case sda2 is the partition of the root-lv shown by lsblk.
Yes, it is so. Of course it works only in 4.4.0. In 4.4.2 there is no special file created of type /dev/disk/by-id/....
See here for the udevadm command on 4.4.0 that shows sdb3, which is the partition corresponding to the PV of the root disk:
https://drive.google.com/file/d/1-bsa0BLNHINFs48X8LGUafjFnUGPCsCH/view?usp=s...
Can you give the output of lsblk on your node?
Here lsblk as seen by 4.4.0, with the gluster volumes on sda:
https://drive.google.com/file/d/1Czx28YKttmO6f6ldqW7TmxV9SNWzZKSQ/view?usp=s...
And here lsblk as seen from 4.4.2, with an empty sda:
https://drive.google.com/file/d/1wERp9HkFxbXVM7rH3aeIAT-IdEjseoA0/view?usp=s...
Can you check that the same filter is in initramfs?
# lsinitrd -f /etc/lvm/lvm.conf | grep filter
Here the command from 4.4.0, which shows no filter:
https://drive.google.com/file/d/1NKXAhkjh6bqHWaDZgtbfHQ23uqODWBrO/view?usp=s...
And here from 4.4.2 emergency mode, where I have to use the path /boot/ovirt-node-ng-4.4.2-0..../initramfs-.... because there is no initrd file directly in /boot (in the screenshot you also see the output of "ll /boot"):
https://drive.google.com/file/d/1ilZ-_GKBtkYjJX-nRTybYihL9uXBJ0da/view?usp=s...
We have the following tool on the hosts:
# vdsm-tool config-lvm-filter -y
It only sets the filter for local lvm devices; this is run as part of deployment and upgrade when done from the engine.
If you have other volumes which have to be mounted as part of your startup then you should add their uuids to the filter as well.
I didn't do anything special in 4.4.0: I installed node on the intended disk, which was seen as sdb, and then through the single node HCI wizard I configured the gluster volumes on sda.
Any suggestion on what to do on the 4.4.2 initrd, or on running the correct dracut command from 4.4.0 to correct the initramfs of 4.4.2?
BTW: could I in the meantime, if necessary, also boot from 4.4.0 and let it go with the engine in 4.4.2?
Thanks,
Gianluca

Sorry, I see that there was an error in the lsinitrd command in 4.4.2, inverting the "-f" position. Here the screenshot that shows anyway no filter active:
https://drive.google.com/file/d/19VmgvsHU2DhJCRzCbO9K_Xyr70x4BqXX/view?usp=s...
Gianluca
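For reference, a sketch of how the same check can be pointed at a specific oVirt Node image layer (the image and initramfs file names below are placeholders; use the ones actually present under /boot on the node):

Locate the initramfs of the layer to inspect:
# ls /boot/ovirt-node-ng-4.4.2*/initramfs-*
Print the lvm.conf it contains and look for a filter line:
# lsinitrd -f /etc/lvm/lvm.conf /boot/ovirt-node-ng-4.4.2-<version>/initramfs-<kernel>.img | grep filter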

Too many photos... ;-) In the previous screenshot I used the 4.4.0 initramfs. Here the output using the 4.4.2 initramfs:
https://drive.google.com/file/d/1yLzJzokK5C1LHNuFbNoXWHXfzFncXe0O/view?usp=s...
Gianluca

On Sat, Oct 3, 2020 at 7:26 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Yes it is so. Of course it works only in 4.4.0. In 4.4.2 there is no special file created of type /dev/disk/by-id/....
What does "udevadm info" show for /dev/sdb3 on 4.4.2?
Any suggestion on what to do on 4.4.2 initrd or running correct dracut command from 4.4.0 to correct initramfs of 4.4.2?
The initramfs for 4.4.2 doesn't show any (wrong) filter, so I don't see what needs to be fixed in this case.
BTW: could in the mean time if necessary also boot from 4.4.0 and let it go with engine in 4.4.2?
Might work, probably not too tested.
For the gluster bricks being filtered out in 4.4.2, this seems like [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805

On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer <abawer@redhat.com> wrote:
For the gluster bricks being filtered out in 4.4.2, this seems like [1].
Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2 maintenance mode. If the fs is mounted as read only, try:
mount -o remount,rw /
then sync and try to reboot 4.4.2.
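Spelled out as commands from the emergency shell, that could look roughly like this (the backup file name and the sed expression are only illustrative; editing /etc/lvm/lvm.conf by hand works just as well):

Remount the root filesystem read-write:
# mount -o remount,rw /
Back up lvm.conf and drop the filter line added by the upgrade:
# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
# sed -i '/^filter = /d' /etc/lvm/lvm.conf
Flush to disk and reboot into 4.4.2:
# sync
# reboot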

On Sat, Oct 3, 2020 at 9:42 PM Amit Bawer <abawer@redhat.com> wrote:
On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer <abawer@redhat.com> wrote:
For the gluster bricks being filtered out in 4.4.2, this seems like [1].
Maybe remove the lvm filter from /etc/lvm/lvm.conf while in 4.4.2 maintenance mode if the fs is mounted as read only, try
mount -o remount,rw /
sync and try to reboot 4.4.2.
Indeed, if I run, when in the emergency shell in 4.4.2, the command:

lvs --config 'devices { filter = [ "a|.*|" ] }'

I see also all the gluster volumes, so I think the update injected the nasty filter. Possibly during the update the command
# vdsm-tool config-lvm-filter -y
was executed and erroneously created the filter?

Anyway, remounting the root filesystem read-write, removing the filter line from lvm.conf and rebooting worked: 4.4.2 booted ok and I was able to exit global maintenance and have the engine up.
Thanks Amit for the help and all the insights.

Right now only two problems:

1) a long running problem: from the engine web admin all the volumes are seen as up and also the storage domains up, while only the hosted engine one is actually up; "data" and "vmstore" are down, as I can verify from the host, where there is only one /rhev/data-center/ mount:

[root@ovirt01 ~]# df -h
Filesystem                                               Size  Used Avail Use% Mounted on
devtmpfs                                                  16G     0   16G   0% /dev
tmpfs                                                     16G   16K   16G   1% /dev/shm
tmpfs                                                     16G   18M   16G   1% /run
tmpfs                                                     16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1   133G  3.9G  129G   3% /
/dev/mapper/onn-tmp                                     1014M   40M  975M   4% /tmp
/dev/mapper/gluster_vg_sda-gluster_lv_engine             100G  9.0G   91G   9% /gluster_bricks/engine
/dev/mapper/gluster_vg_sda-gluster_lv_data               500G  126G  375G  26% /gluster_bricks/data
/dev/mapper/gluster_vg_sda-gluster_lv_vmstore             90G  6.9G   84G   8% /gluster_bricks/vmstore
/dev/mapper/onn-home                                    1014M   40M  975M   4% /home
/dev/sdb2                                                976M  307M  603M  34% /boot
/dev/sdb1                                                599M  6.8M  593M   2% /boot/efi
/dev/mapper/onn-var                                       15G  263M   15G   2% /var
/dev/mapper/onn-var_log                                  8.0G  541M  7.5G   7% /var/log
/dev/mapper/onn-var_crash                                 10G  105M  9.9G   2% /var/crash
/dev/mapper/onn-var_log_audit                            2.0G   79M  2.0G   4% /var/log/audit
ovirt01st.lutwyn.storage:/engine                         100G   10G   90G  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
tmpfs                                                    3.2G     0  3.2G   0% /run/user/1000
[root@ovirt01 ~]#

I can also wait 10 minutes and no change. The way I use to exit from this stalled situation is to power on a VM, so that it obviously fails:

VM f32 is down with error. Exit message: Unable to get volume size for domain d39ed9a3-3b10-46bf-b334-e8970f5deca1 volume 242d16c6-1fd9-4918-b9dd-0d477a86424c. 10/4/20 12:50:41 AM

and suddenly all the data storage domains are deactivated (from the engine's point of view, because actually they were not active...):

Storage Domain vmstore (Data Center Default) was deactivated by system because it's not visible by any of the hosts. 10/4/20 12:50:31 AM

and I can go in Data Centers --> Default --> Storage and activate the "vmstore" and "data" storage domains, and suddenly I get them activated and the filesystems mounted:

[root@ovirt01 ~]# df -h | grep rhev
ovirt01st.lutwyn.storage:/engine   100G   10G   90G  10% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_engine
ovirt01st.lutwyn.storage:/data     500G  131G  370G  27% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_data
ovirt01st.lutwyn.storage:/vmstore   90G  7.8G   83G   9% /rhev/data-center/mnt/glusterSD/ovirt01st.lutwyn.storage:_vmstore
[root@ovirt01 ~]#

and the VM starts ok now.
I already reported this, but I don't know if there is yet a bugzilla open for it.

2) I see that I cannot connect to the cockpit console of the node.
In Firefox (version 80) on my Fedora 31 I get:
"
Secure Connection Failed
An error occurred during a connection to ovirt01.lutwyn.local:9090. PR_CONNECT_RESET_ERROR
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified. Please contact the website owners to inform them of this problem.
Learn more…
"
In Chrome (build 85.0.4183.121):
"
Your connection is not private
Attackers might be trying to steal your information from ovirt01.lutwyn.local (for example, passwords, messages, or credit cards). Learn more
NET::ERR_CERT_AUTHORITY_INVALID
"
If I click Advanced and select to go to the site:
"
This server could not prove that it is ovirt01.lutwyn.local; its security certificate is not trusted by your computer's operating system. This may be caused by a misconfiguration or an attacker intercepting your connection.
"
If I select to proceed anyway:
"
This page isn't working
ovirt01.lutwyn.local didn't send any data.
ERR_EMPTY_RESPONSE
"
NOTE: the host is not resolved by DNS but I put an entry in my client's hosts file.

On the host:

[root@ovirt01 ~]# systemctl status cockpit.socket --no-pager
● cockpit.socket - Cockpit Web Service Socket
   Loaded: loaded (/usr/lib/systemd/system/cockpit.socket; disabled; vendor preset: enabled)
   Active: active (listening) since Sun 2020-10-04 00:36:36 CEST; 25min ago
     Docs: man:cockpit-ws(8)
   Listen: [::]:9090 (Stream)
  Process: 1425 ExecStartPost=/bin/ln -snf active.motd /run/cockpit/motd (code=exited, status=0/SUCCESS)
  Process: 1417 ExecStartPost=/usr/share/cockpit/motd/update-motd localhost (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 202981)
   Memory: 1.6M
   CGroup: /system.slice/cockpit.socket

Oct 04 00:36:36 ovirt01.lutwyn.local systemd[1]: Starting Cockpit Web Service Socket.
Oct 04 00:36:36 ovirt01.lutwyn.local systemd[1]: Listening on Cockpit Web Service Socket.
[root@ovirt01 ~]#

[root@ovirt01 ~]# systemctl status cockpit.service --no-pager
● cockpit.service - Cockpit Web Service
   Loaded: loaded (/usr/lib/systemd/system/cockpit.service; static; vendor preset: disabled)
   Active: active (running) since Sun 2020-10-04 00:58:09 CEST; 3min 30s ago
     Docs: man:cockpit-ws(8)
  Process: 19260 ExecStartPre=/usr/sbin/remotectl certificate --ensure --user=root --group=cockpit-ws --selinux-type=etc_t (code=exited, status=0/SUCCESS)
 Main PID: 19263 (cockpit-tls)
    Tasks: 1 (limit: 202981)
   Memory: 1.4M
   CGroup: /system.slice/cockpit.service
           └─19263 /usr/libexec/cockpit-tls

Oct 04 00:59:59 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: connect(http-redirect.sock) failed: Permission denied
Oct 04 00:59:59 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: connect(http-redirect.sock) failed: Permission denied
Oct 04 01:00:11 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received.
Oct 04 01:00:11 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: connect(https-factory.sock) failed: Permission denied
Oct 04 01:00:11 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received.
Oct 04 01:00:11 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: connect(https-factory.sock) failed: Permission denied
Oct 04 01:00:16 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received.
Oct 04 01:00:16 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received.
Oct 04 01:00:16 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: gnutls_handshake failed: A TLS fatal alert has been received.
Oct 04 01:00:16 ovirt01.lutwyn.local cockpit-tls[19263]: cockpit-tls: connect(https-factory.sock) failed: Permission denied
[root@ovirt01 ~]#

Gianluca
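For what it's worth, two hedged checks that might help narrow the cockpit problem down (this is only a guess based on the "Permission denied" messages above, not a confirmed diagnosis; the host name is the one from this setup):

From the client, ignore the self-signed certificate and see whether any HTTP response comes back at all:
$ curl -kv https://ovirt01.lutwyn.local:9090/ -o /dev/null
On the host, check the SELinux mode and look for recent AVC denials involving cockpit:
# getenforce
# ausearch -m avc -ts recent | grep -i cockpit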

On Sun, Oct 4, 2020 at 2:07 AM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
I see also all the gluster volumes, so I think the update injected the nasty filter. Possibly during update the command # vdsm-tool config-lvm-filter -y was executed and erroneously created the filter?
Since there wasn't a filter set on the node, the 4.4.2 update added the default filter for the root-lv pv. If there had been some filter set before the upgrade, it would not have been added by the 4.4.2 update.
I already reported this, but I don't know if there is yet a bugzilla open for it.
Did you get any response for the original mail? I haven't seen it on the users-list.
NOTE: the host is not resolved by DNS but I put an entry in my client's hosts file.
Might be required to set DNS for authenticity, maybe other members on the list could tell better.

On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer <abawer@redhat.com> wrote:
Since there wasn't a filter set on the node, the 4.4.2 update added the default filter for the root-lv pv. If there had been some filter set before the upgrade, it would not have been added by the 4.4.2 update.
Do you mean that I will get the same problem upgrading from 4.4.2 to an upcoming 4.4.3, as also now I don't have any filter set? This would not be desirable....
Right now only two problems:
1) a long running problem that from engine web admin all the volumes are seen as up and also the storage domains up, while only the hosted engine one is up, while "data" and vmstore" are down, as I can verify from the host, only one /rhev/data-center/ mount:
[snip]
I already reported this, but I don't know if there is yet a bugzilla open for it.
Did you get any response for the original mail? haven't seen it on the users-list.
I think it was this thread, related to the 4.4.0 release and a question about auto-start of VMs: a script from Derek that tested if domains were active and got false positives, and my comments about the same registered behaviour:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UO...
But I think there was no answer on that particular item/problem.
Indeed I think you can easily reproduce it; I don't know if only with Gluster or also with other storage domains.
I don't know if it plays a part that on the last host during a whole shutdown (and the only host in case of a single host) you have to run the script /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh, otherwise you sometimes risk not getting a complete shutdown. And perhaps this stop can have an influence on the following startup.
In any case the web admin GUI (and the API access) should not show the domains active when they are not. I think there is a bug in the code that checks this.
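As a side note, when the engine and the host disagree like this, a few host-side commands show what is really mounted and whether the bricks are actually serving (volume names match the ones used earlier in this thread; this is just a sketch, not an official diagnostic procedure):

What the host has actually mounted for storage domains:
# df -h | grep rhev
Whether the gluster volumes and their bricks are online:
# gluster volume status engine
# gluster volume status data
# gluster volume status vmstore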
2) I see that I cannot connect to cockpit console of node.
[snip]
NOTE: the host is not resolved by DNS but I put an entry in my client's hosts file.
It would be the first time I see it. Access to the web admin GUI works ok even without DNS resolution. I'm not sure if I had the same problem with the cockpit host console on 4.4.0.
Gianluca

On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Do you mean that I will get the same problem upgrading from 4.4.2 to an upcoming 4.4.3, as also now I don't have any filter set? This would not be desirable....
Once you have got back into 4.4.2, it's recommended to set the lvm filter to fit the PVs you use on your node. For the local root PV you can run:
# vdsm-tool config-lvm-filter -y
For the gluster bricks you'll need to add their uuids to the filter as well.
The next upgrade should not set a filter on its own if one is already set.
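For illustration only (the uuids below are placeholders; the real ones come from udevadm info or from pvs -o pv_name,pv_uuid on the node), a combined filter covering both the root PV and a gluster brick PV would look something like this in /etc/lvm/lvm.conf:

filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-<root-pv-uuid>$|", "a|^/dev/disk/by-id/lvm-pv-uuid-<gluster-brick-pv-uuid>$|", "r|.*|" ]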
If the earlier report about the storage domains shown as active got no response so far, I think it could be helpful to file a bug with the details of the setup and the steps involved here so it will get tracked.
It would be the first time I see it. The access to web admin GUI works ok even without DNS resolution. I'm not sure if I had the same problem with the cockpit host console on 4.4.0.
Perhaps +Yedidyah Bar David <Didi@redhat.com> could help regarding cockpit web access.

On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer <abawer@redhat.com> wrote:
Once you have got back into 4.4.2, it's recommended to set the lvm filter to fit the PVs you use on your node. For the local root PV you can run:
# vdsm-tool config-lvm-filter -y
For the gluster bricks you'll need to add their uuids to the filter as well.
vdsm-tool is expected to add all the devices needed by the mounted logical volumes, so adding devices manually should not be needed. If this does not work please file a bug and include all the info to reproduce the issue.
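In other words (a usage sketch, not an official procedure): running the tool without -y first shows the filter and multipath blacklist it would configure, so the gluster brick PVs can be reviewed before applying anything:

Preview what would be configured (interactive; answer NO to leave the host untouched):
# vdsm-tool config-lvm-filter
Apply it non-interactively once the proposed filter looks right:
# vdsm-tool config-lvm-filter -y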
The next upgrade should not set a filter on its own if one is already set.
Right now only two problems:
1) a long running problem that from engine web admin all the volumes are seen as up and also the storage domains up, while only the hosted engine one is up, while "data" and vmstore" are down, as I can verify from the host, only one /rhev/data-center/ mount:
[snip]
I already reported this, but I don't know if there is yet a bugzilla open for it.
Did you get any response for the original mail? haven't seen it on the users-list.
I think it was this thread related to 4.4.0 released and question about auto-start of VMs. A script from Derek that tested if domains were active and got false positive, and my comments about the same registered behaviour: https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y4UO...
But I think there was no answer on that particular item/problem. Indeed I think you can easily reproduce, I don't know if only with Gluster or also with other storage domains. I don't know if it can have a part the fact that on the last host during a whole shutdown (and the only host in case of single host) you have to run the script /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh otherwise you risk not to get a complete shutdown sometimes. And perhaps this stop can have an influence on the following startup. In any case the web admin gui (and the API access) should not show the domains active when they are not. I think there is a bug in the code that checks this.
If it got no response so far, I think it could be helpful to file a bug with the details of the setup and the steps involved here so it will get tracked.
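As a side note, a rough way to compare what the engine reports with what is actually mounted on the host could be something like the following (the engine FQDN and credentials are placeholders, and the exact XML elements may differ slightly between versions):

# on the host: what is really mounted under the data center path
mount | grep rhev
ls /rhev/data-center/mnt/

# against the engine API: what the engine believes
curl -s -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
    'https://engine.example.com/ovirt-engine/api/storagedomains' | grep -E '<name>|status'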
2) I see that I cannot connect to cockpit console of node.
[snip]
NOTE: the host is not resolved by DNS but I put an entry in the hosts file on my client.
Setting up DNS might be required for authenticity; maybe other members on the list could tell better.
It would be the first time I see it. The access to web admin GUI works ok even without DNS resolution. I'm not sure if I had the same problem with the cockpit host console on 4.4.0.
Perhaps +Yedidyah Bar David could help regarding cockpit web access.
Gianluca
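A few generic checks that might help narrow down the cockpit issue on the node (service name and port are the cockpit defaults, nothing confirmed in this thread):

systemctl status cockpit.socket                  # cockpit is socket-activated
ss -tlnp | grep 9090                             # default cockpit port
firewall-cmd --list-services                     # 'cockpit' should be listed
curl -kI https://ovirt01.example.com:9090/       # hostname is a placeholder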
_______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VYWJPRKRESPBAR...

On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer <nsoffer@redhat.com> wrote:
On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer <abawer@redhat.com> wrote:
On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer <abawer@redhat.com> wrote:
Since there wasn't a filter set on the node, the 4.4.2 update added the default filter for the root-lv pv. If there had been some filter set before the upgrade, it would not have been added by the 4.4.2 update.
Do you mean that I will get the same problem upgrading from 4.4.2 to an upcoming 4.4.3, as also now I don't have any filter set? This would not be desirable....
Once you have got back into 4.4.2, it's recommended to set the lvm filter to fit the pvs you use on your node for the local root pv you can run # vdsm-tool config-lvm-filter -y For the gluster bricks you'll need to add their uuids to the filter as well.
vdsm-tool is expected to add all the devices needed by the mounted logical volumes, so adding devices manually should not be needed.
If this does not work please file a bug and include all the info to reproduce the issue.
I don't know what exactly happened when I installed ovirt-ng-node in 4.4.0, but the effect was that no filter at all was set up in lvm.conf, and so the problem I had upgrading to 4.4.2. Any way to see related logs for 4.4.0? In which phase of the install of the node itself, or of the gluster based wizard, is it supposed to run the vdsm-tool command?
Right now in 4.4.2 I get this output, so it seems it works in 4.4.2:
"
[root@ovirt01 ~]# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:
logical volume: /dev/mapper/gluster_vg_sda-gluster_lv_data mountpoint: /gluster_bricks/data devices: /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
logical volume: /dev/mapper/gluster_vg_sda-gluster_lv_engine mountpoint: /gluster_bricks/engine devices: /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
logical volume: /dev/mapper/gluster_vg_sda-gluster_lv_vmstore mountpoint: /gluster_bricks/vmstore devices: /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
logical volume: /dev/mapper/onn-home mountpoint: /home devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1 mountpoint: / devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-swap mountpoint: [SWAP] devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-tmp mountpoint: /tmp devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-var mountpoint: /var devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-var_crash mountpoint: /var/crash devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-var_log mountpoint: /var/log devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-var_log_audit mountpoint: /var/log/audit devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|", "a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|", "r|.*|" ]
This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.
To use the recommended filter we need to add multipath blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
blacklist { wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V" wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K" }
Configure host? [yes,NO]
"
Does this mean that answering "yes" I will get both lvm and multipath related files modified?
Right now my multipath is configured this way:
[root@ovirt01 ~]# grep -v "^#" /etc/multipath.conf | grep -v "^ #" | grep -v "^$"
defaults { polling_interval 5 no_path_retry 4 user_friendly_names no flush_on_last_del yes fast_io_fail_tmo 5 dev_loss_tmo 30 max_fds 4096 }
blacklist { protocol "(scsi:adt|scsi:sbp)" }
overrides { no_path_retry 4 }
[root@ovirt01 ~]#
with blacklist explicit on both disks but inside different files:
root disk:
[root@ovirt01 ~]# cat /etc/multipath/conf.d/vdsm_blacklist.conf
# This file is managed by vdsm, do not edit!
# Any changes made to this file will be overwritten when running: # vdsm-tool config-lvm-filter
blacklist { wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K" }
[root@ovirt01 ~]#
gluster disk:
[root@ovirt01 ~]# cat /etc/multipath/conf.d/blacklist.conf
# BEGIN ANSIBLE MANAGED BLOCK
blacklist {
# BEGIN ANSIBLE MANAGED BLOCK sda
wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V"
# END ANSIBLE MANAGED BLOCK sda
}
# END ANSIBLE MANAGED BLOCK
[root@ovirt01 ~]#
[root@ovirt01 ~]# cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V/
[root@ovirt01 ~]#
and in fact no multipath devices setup due to the blacklist sections for local disks...
[root@ovirt01 ~]# multipath -l
[root@ovirt01 ~]#
Gianluca

On Mon, Oct 5, 2020 at 9:06 AM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Mon, Oct 5, 2020 at 2:19 AM Nir Soffer <nsoffer@redhat.com> wrote:
On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer <abawer@redhat.com> wrote:
On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer <abawer@redhat.com> wrote:
Since there wasn't a filter set on the node, the 4.4.2 update added the default filter for the root-lv pv. If there had been some filter set before the upgrade, it would not have been added by the 4.4.2 update.
Do you mean that I will get the same problem upgrading from 4.4.2 to an upcoming 4.4.3, as also now I don't have any filter set? This would not be desirable....
Once you have got back into 4.4.2, it's recommended to set the lvm filter to fit the pvs you use on your node for the local root pv you can run # vdsm-tool config-lvm-filter -y For the gluster bricks you'll need to add their uuids to the filter as well.
vdsm-tool is expected to add all the devices needed by the mounted logical volumes, so adding devices manually should not be needed.
If this does not work please file a bug and include all the info to reproduce the issue.
I don't know what exactly happened when I installed ovirt-ng-node in 4.4.0, but the effect was that no filter at all was set up in lvm.conf, and so the problem I had upgrading to 4.4.2. Any way to see related logs for 4.4.0? In which phase of the install of the node itself or of the gluster based wizard is it supposed to run the vdsm-tool command?
Right now in 4.4.2 I get this output, so it seems it works in 4.4.2:
" [root@ovirt01 ~]# vdsm-tool config-lvm-filter Analyzing host... Found these mounted logical volumes on this host:
logical volume: /dev/mapper/gluster_vg_sda-gluster_lv_data mountpoint: /gluster_bricks/data devices: /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
logical volume: /dev/mapper/gluster_vg_sda-gluster_lv_engine mountpoint: /gluster_bricks/engine devices: /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
logical volume: /dev/mapper/gluster_vg_sda-gluster_lv_vmstore mountpoint: /gluster_bricks/vmstore devices: /dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr
logical volume: /dev/mapper/onn-home mountpoint: /home devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1 mountpoint: / devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-swap mountpoint: [SWAP] devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-tmp mountpoint: /tmp devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-var mountpoint: /var devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-var_crash mountpoint: /var/crash devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-var_log mountpoint: /var/log devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
logical volume: /dev/mapper/onn-var_log_audit mountpoint: /var/log/audit devices: /dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-52iT6N-L9sU-ubqE-6vPt-dn7T-W19c-NOXjc7$|", "a|^/dev/disk/by-id/lvm-pv-uuid-5D4JSI-vqEc-ir4o-BGnG-sZmh-ILjS-jgzICr$|", "r|.*|" ]
This filter allows LVM to access the local devices used by the hypervisor, but not shared storage owned by Vdsm. If you add a new device to the volume group, you will need to edit the filter manually.
To use the recommended filter we need to add multipath blacklist in /etc/multipath/conf.d/vdsm_blacklist.conf:
blacklist { wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V" wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K" }
Configure host? [yes,NO]
" Does this mean that answering "yes" I will get both lvm and multipath related files modified?
Yes...
Right now my multipath is configured this way:
[root@ovirt01 ~]# grep -v "^#" /etc/multipath.conf | grep -v "^ #" | grep -v "^$" defaults { polling_interval 5 no_path_retry 4 user_friendly_names no flush_on_last_del yes fast_io_fail_tmo 5 dev_loss_tmo 30 max_fds 4096 } blacklist { protocol "(scsi:adt|scsi:sbp)" } overrides { no_path_retry 4 }
This file will not change...
[root@ovirt01 ~]#
with blacklist explicit on both disks but inside different files:
root disk: [root@ovirt01 ~]# cat /etc/multipath/conf.d/vdsm_blacklist.conf # This file is managed by vdsm, do not edit! # Any changes made to this file will be overwritten when running: # vdsm-tool config-lvm-filter
blacklist { wwid "Samsung_SSD_850_EVO_M.2_250GB_S24BNXAH209481K"
Here we will have the second device...
} [root@ovirt01 ~]#
gluster disk: [root@ovirt01 ~]# cat /etc/multipath/conf.d/blacklist.conf # BEGIN ANSIBLE MANAGED BLOCK blacklist { # BEGIN ANSIBLE MANAGED BLOCK sda wwid "Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V" # END ANSIBLE MANAGED BLOCK sda } # END ANSIBLE MANAGED BLOCK
This file will not change, it was created by RHHI deploy, and vdsm knows nothing about it. RHHI should integrate with new vdsm-tool capabilities instead of duplicating the functionality.
[root@ovirt01 ~]#
[root@ovirt01 ~]# cat /etc/multipath/wwids # Multipath wwids, Version : 1.0 # NOTE: This file is automatically maintained by multipath and multipathd. # You should not need to edit this file in normal circumstances. # # Valid WWIDs: /Samsung_SSD_850_EVO_500GB_S2RBNXAH108545V/
This file will not be changed by vdsm-tool. multipath manages this file.
[root@ovirt01 ~]#
and in fact no multipath devices setup due to the blacklist sections for local disks...
Sounds good.
[root@ovirt01 ~]# multipath -l [root@ovirt01 ~]#
Gianluca
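After answering "yes", a quick sanity check could be the following (just a sketch, not an authoritative checklist):

grep '^filter' /etc/lvm/lvm.conf                 # both lvm-pv-uuid devices accepted, everything else rejected
cat /etc/multipath/conf.d/vdsm_blacklist.conf    # should now list both SSD wwids
multipath -l                                     # still empty: local disks stay out of multipath
lsblk                                            # gluster_vg_* and onn LVs still visible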

Il giorno sab 3 ott 2020 alle ore 14:16 Gianluca Cecchi < gianluca.cecchi@gmail.com> ha scritto:
On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Il giorno ven 25 set 2020 alle ore 15:32 Gianluca Cecchi < gianluca.cecchi@gmail.com> ha scritto:
On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
oVirt Node 4.4.2 is now generally available
The oVirt project is pleased to announce the general availability of oVirt Node 4.4.2 , as of September 25th, 2020.
This release completes the oVirt 4.4.2 release published on September 17th
Thanks for the news!
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> - Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get your host entering emergency mode.
In order to prevent this be sure to upgrade oVirt Engine first, then on your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild initramfs with the correct filter configuration.
6. Reboot.
What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have to follow the same steps as if I were in 4.4.1 or what? I would like to avoid going through 4.4.1 if possible.
I don't think we had someone testing 4.4.0 to 4.4.2, but the above procedure should work for that case as well. The problematic filter in /etc/lvm/lvm.conf looks like:
# grep '^filter = ' /etc/lvm/lvm.conf
filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
Thanks, Gianluca
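For a 4.4.0 or 4.4.1 oVirt Node host, the procedure above boils down to roughly the following (a sketch only; the exact filter line and whether the dracut step is needed depend on the host):

# 1. while still on the old release, drop the problematic lvm filter
grep '^filter = ' /etc/lvm/lvm.conf
sed -i.bak 's/^filter = /# filter = /' /etc/lvm/lvm.conf

# 2. reboot, then upgrade (from the engine if possible, otherwise yum update)
reboot

# 3. after booting into the new image, let vdsm generate the correct filter
vdsm-tool config-lvm-filter -y

# 4. only on non-Node (plain EL) hosts, rebuild the initramfs
dracut --force --add multipath

reboot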
OK, so I tried on my single host HCI, installed with ovirt-node-ng 4.4.0 and the gluster wizard and never updated until now. I updated the self hosted engine to 4.4.2 without problems.
My host doesn't have any filter or global_filter set up in lvm.conf in 4.4.0.
So I update it:
[root@ovirt01 vdsm]# yum update
Please use the update command from the engine admin portal. The ansible code running from there also performs additional steps other than just yum update. +Dana Elfassy <delfassy@redhat.com> can you elaborate on other steps performed during the upgrade?
Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM CEST. Dependencies resolved.
==================================================================================================== Package Architecture Version Repository Size
==================================================================================================== Installing: ovirt-node-ng-image-update noarch 4.4.2-1.el8 ovirt-4.4 782 M replacing ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
Transaction Summary
==================================================================================================== Install 1 Package
Total download size: 782 M Is this ok [y/N]: y Downloading Packages: ovirt-node-ng-image-update-4.4 27% [===== ] 6.0 MB/s | 145 MB 01:45 ETA
---------------------------------------------------------------------------------------------------- Total 5.3 MB/s | 782 MB 02:28 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/2 Installing : ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/2 Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/2 Obsoleting : ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch 2/2 Verifying : ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/2 Verifying : ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch 2/2 Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch.rpm
Installed: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
Complete! [root@ovirt01 vdsm]# sync [root@ovirt01 vdsm]#
I reboot and I'm offered 4.4.2 by default, with 4.4.0 available too. But the default 4.4.2 goes into emergency mode, and if I log in I see that it indeed has a filter inside lvm.conf. See the filter that the update has put in place...:
https://drive.google.com/file/d/1LNZ_9c6HJnL3dbuwd5PMjb7wIuWAPDrg/view?usp=s...
During boot I see this getting blocked:
A start job is running for dev-disk-by\x2d-id ....... the same for apparently 3 disks (I think the gluster volumes...)
https://drive.google.com/file/d/1Yg2g5FyugfUO54E0y2JfLiabbIYXr_7f/view?usp=s...
And at emergency mode:
https://drive.google.com/file/d/1WNB0e54tw5AUTzaG_HRvrltN1-Zh_LTn/view?usp=s...
if I log in and then exit
Reloading system manager configuration
Starting default target
and then it is stuck there. After some minutes I get confirmation that I am in emergency mode, give the password again, and can only reboot or look at the journal log
contents of output of "journalctl -xb" here:
https://drive.google.com/file/d/1AB1heOaNyWlVMF5bQ5C67sRKMw-rLkvh/view?usp=s...
I verified that I can still safely boot into 4.4.0 if needed. What to do now? Thanks,
Gianluca
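If the host has already been rebooted into 4.4.2 and lands in emergency mode, a possible recovery path, following the advice earlier in this thread, would be roughly:

# in the emergency shell: comment out or delete the 'filter = [...]' line
vi /etc/lvm/lvm.conf

reboot

# once booted into 4.4.2, regenerate a filter that also covers the gluster brick PV
vdsm-tool config-lvm-filter -y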
-- Sandro Bonazzola MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV Red Hat EMEA <https://www.redhat.com/> sbonazzo@redhat.com <https://www.redhat.com/> *Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>* * <https://www.redhat.com/it/forums/emea/italy-track>*

On Mon, Oct 5, 2020 at 8:31 AM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
OK, so I tried on my single host HCI, installed with ovirt-node-ng 4.4.0 and the gluster wizard and never updated until now. I updated the self hosted engine to 4.4.2 without problems.
My host doesn't have any filter or global_filter set up in lvm.conf in 4.4.0.
So I update it:
[root@ovirt01 vdsm]# yum update
Please use the update command from the engine admin portal. The ansible code running from there also performs additional steps other than just yum update. +Dana Elfassy <delfassy@redhat.com> can you elaborate on other steps performed during the upgrade?
Yes, in general. But for single-host environments it is not possible, at least I think, because you are upgrading the host where the engine itself is running... Gianluca

Yes. The additional main tasks that we execute during host upgrade, besides updating packages, are certificate related (check certificate validity, enroll certificates), configuring advanced virtualization, and the lvm filter.
Dana
On Mon, Oct 5, 2020 at 9:31 AM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Il giorno sab 3 ott 2020 alle ore 14:16 Gianluca Cecchi < gianluca.cecchi@gmail.com> ha scritto:
On Fri, Sep 25, 2020 at 4:06 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Il giorno ven 25 set 2020 alle ore 15:32 Gianluca Cecchi < gianluca.cecchi@gmail.com> ha scritto:
On Fri, Sep 25, 2020 at 1:57 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
oVirt Node 4.4.2 is now generally available
The oVirt project is pleased to announce the general availability of oVirt Node 4.4.2 , as of September 25th, 2020.
This release completes the oVirt 4.4.2 release published on September 17th
Thanks for the news!
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> - Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts you should be aware that after upgrading from 4.4.1 to 4.4.2 you may get your host entering emergency mode.
In order to prevent this be sure to upgrade oVirt Engine first, then on your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.2 (redeploy in case of already being on 4.4.2).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild initramfs with the correct filter configuration.
6. Reboot.
What if I'm currently in 4.4.0 and want to upgrade to 4.4.2? Do I have to follow the same steps as if I were in 4.4.1 or what? I would like to avoid going through 4.4.1 if possible.
I don't think we had someone testing 4.4.0 to 4.4.2 but above procedure should work for the same case. The problematic filter in /etc/lvm/lvm.conf looks like:
# grep '^filter = ' /etc/lvm/lvm.conf
filter = ["a|^/dev/mapper/mpatha2$|", "r|.*|"]
Thanks, Gianluca
OK, so I tried on my single host HCI, installed with ovirt-node-ng 4.4.0 and the gluster wizard and never updated until now. I updated the self hosted engine to 4.4.2 without problems.
My host doesn't have any filter or global_filter set up in lvm.conf in 4.4.0.
So I update it:
[root@ovirt01 vdsm]# yum update
Please use the update command from the engine admin portal. The ansible code running from there also performs additional steps other than just yum update. +Dana Elfassy <delfassy@redhat.com> can you elaborate on other steps performed during the upgrade?
Last metadata expiration check: 0:01:38 ago on Sat 03 Oct 2020 01:09:51 PM CEST. Dependencies resolved.
==================================================================================================== Package Architecture Version Repository Size
==================================================================================================== Installing: ovirt-node-ng-image-update noarch 4.4.2-1.el8 ovirt-4.4 782 M replacing ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8
Transaction Summary
==================================================================================================== Install 1 Package
Total download size: 782 M Is this ok [y/N]: y Downloading Packages: ovirt-node-ng-image-update-4.4 27% [===== ] 6.0 MB/s | 145 MB 01:45 ETA
---------------------------------------------------------------------------------------------------- Total 5.3 MB/s | 782 MB 02:28 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/2 Installing : ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/2 Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/2 Obsoleting : ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch 2/2 Verifying : ovirt-node-ng-image-update-4.4.2-1.el8.noarch 1/2 Verifying : ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch 2/2 Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.0-2.el8.noarch.rpm
Installed: ovirt-node-ng-image-update-4.4.2-1.el8.noarch
Complete! [root@ovirt01 vdsm]# sync [root@ovirt01 vdsm]#
I reboot and I'm offered 4.4.2 by default, with 4.4.0 available too. But the default 4.4.2 goes into emergency mode, and if I log in I see that it indeed has a filter inside lvm.conf. See the filter that the update has put in place...:
https://drive.google.com/file/d/1LNZ_9c6HJnL3dbuwd5PMjb7wIuWAPDrg/view?usp=s...
During boot I see this getting blocked:
A start job is running for dev-disk-by\x2d-id ....... the same for apparently 3 disks ( I think the gluster volumes...)
https://drive.google.com/file/d/1Yg2g5FyugfUO54E0y2JfLiabbIYXr_7f/view?usp=s...
And at emergency mode:
https://drive.google.com/file/d/1WNB0e54tw5AUTzaG_HRvrltN1-Zh_LTn/view?usp=s...
if I login and then exit
Reloading system manager configuration Starting default target
and then it is stuck there. After some minutes I get confirmation that I am in emergency mode, give the password again, and can only reboot or look at the journal log
contents of output of "journalctl -xb" here:
https://drive.google.com/file/d/1AB1heOaNyWlVMF5bQ5C67sRKMw-rLkvh/view?usp=s...
I verified that I can safely boot in 4.4.0 in case.. What to do now? Thanks,
Gianluca
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com <https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

On Mon, Oct 5, 2020 at 10:37 AM Dana Elfassy <delfassy@redhat.com> wrote:
Yes. The additional main tasks that we execute during host upgrade, besides updating packages, are certificate related (check certificate validity, enroll certificates), configuring advanced virtualization, and the lvm filter.
Dana
Thanks. What if I want to execute it directly on the host? Any command / pointer to run after "yum update"? This is to cover a scenario with a single host, where I cannot drive it from the engine... Gianluca
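A rough manual substitute for the engine-driven flow on a single host could be the following (only a sketch; the certificate path shown is the usual vdsm location and may differ on your install):

yum update
vdsm-tool config-lvm-filter -y                                      # lvm filter step
openssl x509 -enddate -noout -in /etc/pki/vdsm/certs/vdsmcert.pem   # check certificate validity
reboot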

In order to run the playbooks you would also need the parameters that they use - some are set on the engine side.
Why can't you upgrade the host from the engine admin portal?
On Mon, Oct 5, 2020 at 12:31 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Mon, Oct 5, 2020 at 10:37 AM Dana Elfassy <delfassy@redhat.com> wrote:
Yes. The additional main tasks that we execute during host upgrade besides updating packages are certificates related (check for certificates validity, enroll certificates) , configuring advanced virtualization and lvm filter Dana
Thanks, What if I want to directly execute on the host? Any command / pointer to run after "yum update"? This is to cover a scenario with single host, where I cannot drive it from the engine...
Gianluca

On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy <delfassy@redhat.com> wrote:
In order to run the playbooks you would also need the parameters that they use - some are set on the engine side.
Why can't you upgrade the host from the engine admin portal?
Because when you upgrade a host you put it into maintenance first, and this implies no VMs running on it. But if you are in an environment composed of a single host, you cannot.... Gianluca

Can you shut down the VMs just for the upgrade process?
On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy <delfassy@redhat.com> wrote:
In order to run the playbooks you would also need the parameters that they use - some are set on the engine side Why can't you upgrade the host from the engine admin portal?
Because when you upgrade a host you put it into maintenance before. And this implies no VMs in execution on it. But if you are in a single host composed environment you cannot....
Gianluca

On Mon, Oct 5, 2020 at 3:13 PM Dana Elfassy <delfassy@redhat.com> wrote:
Can you shutdown the vms just for the upgrade process?
On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy <delfassy@redhat.com> wrote:
In order to run the playbooks you would also need the parameters that they use - some are set on the engine side Why can't you upgrade the host from the engine admin portal?
Because when you upgrade a host you put it into maintenance before. And this implies no VMs in execution on it. But if you are in a single host composed environment you cannot....
Gianluca
we are talking about a chicken-and-egg problem..... You say to drive a command from the engine, which is a VM that runs inside the host, but you ask to shut down the VMs running on the host first... This is a self hosted engine environment composed of only one single host. Normally I would use the procedure from the engine web admin GUI, one host at a time, but with a single host it is not possible..... Gianluca

On Mon, Oct 5, 2020 at 3:25 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Mon, Oct 5, 2020 at 3:13 PM Dana Elfassy <delfassy@redhat.com> wrote:
Can you shutdown the vms just for the upgrade process?
On Mon, Oct 5, 2020 at 1:57 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Mon, Oct 5, 2020 at 12:52 PM Dana Elfassy <delfassy@redhat.com> wrote:
In order to run the playbooks you would also need the parameters that they use - some are set on the engine side Why can't you upgrade the host from the engine admin portal?
Because when you upgrade a host you put it into maintenance before. And this implies no VMs in execution on it. But if you are in a single host composed environment you cannot....
Gianluca
we are talking about a chicken-and-egg problem.....
You say to drive a command from the engine, which is a VM that runs inside the host, but you ask to shut down the VMs running on the host first... This is a self hosted engine environment composed of only one single host. Normally I would use the procedure from the engine web admin GUI, one host at a time, but with a single host it is not possible.....
We have said several times that it doesn't make sense to use oVirt on a single host system. So you either need to attach a 2nd host to your setup (preferred) or shut down all VMs and run a manual upgrade of your host OS.
Gianluca _______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/3ZU43KQXYJO43C...
-- Martin Perina Manager, Software Engineering Red Hat Czech s.r.o.

On Tue, Oct 6, 2020 at 11:25 AM Martin Perina <mperina@redhat.com> wrote:
You say to drive a command form the engine that is a VM that runs inside the host, but ask to shutdown VMs running on host before... This is a self hosted engine composed by only one single host. Normally I would use the procedure from the engine web admin gui, one host at a time, but with single host it is not possible.....
We have said several times, that it doesn't make sense to use oVirt on a single host system. So you either need to attach 2nd host to your setup (preferred) or shutdown all VMS and run manual upgrade of your host OS
We who???? In old times there was the all-in-one setup, which was superseded by single host HCI ... developers also put extra effort into the setup wizard covering the single host scenario..... Obviously it is aimed at test bed / devel / home environments, not production ones. Do you want me to send you the list of Bugzilla reports contributed by users running single host environments that helped Red Hat to have a better working RHV too? Please think more deeply next time, thanks Gianluca

Hi Gianluca, please see my replies inline On Tue, Oct 6, 2020 at 11:37 AM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Tue, Oct 6, 2020 at 11:25 AM Martin Perina <mperina@redhat.com> wrote:
You say to drive a command form the engine that is a VM that runs inside the host, but ask to shutdown VMs running on host before... This is a self hosted engine composed by only one single host. Normally I would use the procedure from the engine web admin gui, one host at a time, but with single host it is not possible.....
We have said several times, that it doesn't make sense to use oVirt on a single host system. So you either need to attach 2nd host to your setup (preferred) or shutdown all VMS and run manual upgrade of your host OS
We who????
So I've spent the past hour deeply investigating our upstream documentation and you are right, we don't have any clear requirements about the minimal number of hosts in upstream oVirt documentation. But here are the facts:
1. To be able to upgrade a host either from UI/RESTAPI or manually using SSH, the host always needs to be in Maintenance: https://www.ovirt.org/documentation/administration_guide/#Updating_a_host_be...
2. To perform Reinstall or Enroll certificate of a host, the host needs to be in Maintenance mode: https://www.ovirt.org/documentation/administration_guide/#Reinstalling_Hosts...
3. When a host is in Maintenance mode, there are no oVirt managed VMs running on it: https://www.ovirt.org/documentation/administration_guide/#Moving_a_host_to_m...
4. When the engine is not running (either stopped or crashed), VMs running on hypervisor hosts are unaffected (meaning they run independently of the engine), but they are pretty much "pinned to the host they are running on" (for example, VMs cannot be migrated or started/stopped without a running engine, although of course you can stop a VM from within).
So, just using the above facts, here are the logical conclusions:
1. Standalone engine installation with only one hypervisor host - the engine runs on a bare metal host (for example engine.domain.com) and a single hypervisor host is managed by it (for example host1.domain.com). In this scenario the administrator is able to perform all maintenance tasks (even though at the cost that VMs running on the hypervisor need to be stopped before switching to Maintenance mode), because the engine runs independently of the hypervisor.
2. Hosted engine installation with one hypervisor host - the engine runs as a VM (for example engine.domain.com) inside a single hypervisor host, which is managed by it (for example host1.domain.com). In this scenario maintenance of the host is very limited:
- you cannot move the host to Maintenance, because the hosted engine VM cannot be migrated outside the host
- you can enable global Maintenance and then probably manually stop the hosted engine VM, but then you don't have an engine to perform maintenance tasks (for example Upgrade, Reinstall or Enroll certificates)
But in both of the above use cases you cannot use the biggest oVirt advantage, shared storage among hypervisor hosts, which allows you to perform live migration of VMs. Thanks to that feature you can perform maintenance tasks on the host(s) without interruption in providing VM services.
*From the above it's obvious that we need to really clearly state that in a production environment oVirt requires at least 2 hypervisor hosts for full functionality.*
In old times there was the all-in-one setup, which was superseded by single host HCI
All-in-one feature has been deprecated in oVirt 3.6 and fully removed in oVirt 4.0
... developers also put extra effort into the setup wizard covering the single host scenario.....
Yes, you are right, you can initially set up oVirt with just a single host, but it's expected that you are going to add additional host(s) soon.
Obviously it is aimed at test bed / devel / home environments, not production ones.
Of course, for development use whatever you want, but for production you care about your setup, because you want the services you offer to run smoothly
Do you want me to send you the list of Bugzilla reports contributed by users running single host environments that helped Red Hat to have a better working RHV too?
It's clearly stated that at least 2 hypervisors are required for a hosted engine or standalone RHV installation: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/htm... But as I mentioned above, we have a bug in the oVirt documentation in that such an important requirement is not clearly stated. And this is not a fault of the community; this is a fault of the oVirt maintainers, that we have forgotten to mention such an important requirement in the oVirt documentation, and it's clearly visible that it caused confusion to many users. But no matter what I or you or anyone else thinks or wishes, oVirt is designed in a way that the engine is the brain and the hypervisors are the body. And without a fully functional brain the body cannot live long. This aspect is so deeply embedded in the overall oVirt design that we will most probably never be able to overcome it (even though the hosted engine solution made the engine much more highly available than any other attempt before).
Please think more deeply next time, thanks
Gianluca
Regards, Martin -- Martin Perina Manager, Software Engineering Red Hat Czech s.r.o.
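On a single-host hosted-engine setup, the closest thing to a maintenance window is global maintenance plus a controlled engine shutdown; an outline (the hosted-engine commands are the standard CLI, everything else depends on the environment) could be:

hosted-engine --set-maintenance --mode=global   # keep the HA agent from restarting the engine VM
hosted-engine --vm-shutdown                     # cleanly stop the engine VM
# ...shut down the remaining guests, upgrade the host, reboot...
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status                       # the engine VM should come back via the HA agent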
participants (6): Amit Bawer, Dana Elfassy, Gianluca Cecchi, Martin Perina, Nir Soffer, Sandro Bonazzola