oVirt 4.4.0 Release is now generally available

The oVirt Project is excited to announce the general availability of the oVirt 4.4.0 Release, as of May 20th, 2020.

This release unleashes an altogether more powerful and flexible open source virtualization solution that encompasses hundreds of individual changes and a wide range of enhancements across the engine, storage, network, user interface, and analytics, as compared to oVirt 4.3.

Important notes before you install / upgrade

Some of the features included in the oVirt 4.4.0 release require content that will be available in CentOS Linux 8.2 but cannot be tested on RHEL 8.2 yet, due to an incompatibility in the openvswitch package shipped in the CentOS Virt SIG, which requires rebuilding openvswitch on top of CentOS 8.2. The cluster switch type OVS is not implemented for CentOS 8 hosts.

Please note that oVirt 4.4 only supports clusters and datacenters with compatibility version 4.2 and above. If clusters or datacenters are running with an older compatibility version, you need to upgrade them to at least 4.2 (4.3 is recommended).

Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7 are no longer supported. For example, the megaraid_sas driver is removed. If you use Enterprise Linux 8 hosts, you can try to provide the necessary drivers for the deprecated hardware using the DUD method (see the users mailing list thread on this at https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXEFJN... ).

Installation instructions

For the engine: either use the oVirt appliance or install CentOS Linux 8 minimal by following these steps:
- Install the CentOS Linux 8 image from http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86...
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps postgresql:12
- dnf install ovirt-engine
- engine-setup

For the nodes: either use the oVirt Node ISO or:
- Install CentOS Linux 8 from http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86..., selecting the minimal installation.
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- Attach the host to the engine and let it be deployed.

Update instructions

Update from oVirt 4.4 Release Candidate

On the engine side and on CentOS hosts, you'll need to switch from the ovirt44-pre to the ovirt44 repositories. In order to do so, you need to:
1. dnf remove ovirt-release44-pre
2. rm -f /etc/yum.repos.d/ovirt-4.4-pre-dependencies.repo
3. rm -f /etc/yum.repos.d/ovirt-4.4-pre.repo
4. dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
5. dnf update

On the engine side you'll need to run engine-setup only if you were not already on the latest release candidate.

On oVirt Node, you'll need to upgrade with:
1. Move the node to maintenance
2. dnf install https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image...
3. Reboot
4. Activate the host

Update from oVirt 4.3

oVirt 4.4 is available only for CentOS 8. In-place upgrades from previous installations, based on CentOS 7, are not possible. For the engine, take a backup and restore it into a new engine. Nodes will need to be reinstalled. A 4.4 engine can still manage existing 4.3 hosts, but you can't add new ones.
For a standalone engine, please refer to the upgrade procedure at https://ovirt.org/documentation/upgrade_guide/#Upgrading_from_4-3

If needed, run ovirt-engine-rename (see the engine rename tool documentation at https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html ).

When upgrading hosts, you need to upgrade one host at a time:
1. Move the host to maintenance. Virtual machines on that host should migrate automatically to a different host.
2. Remove it from the engine.
3. Re-install it with el8 or oVirt Node as per the installation instructions.
4. Re-add the host to the engine.

Please note that you may see some issues live migrating VMs from el7 to el8. If you hit such a case, please turn off the VM on the el7 host and start it on the new el8 host in order to be able to move the next el7 host to maintenance.

What's new in oVirt 4.4.0 Release?
- Hypervisors based on CentOS Linux 8 (rebuilt from award-winning RHEL 8), for both oVirt Node and standalone CentOS Linux hosts.
- Easier network management and configuration flexibility with NetworkManager.
- VMs based on a more modern Q35 chipset with legacy SeaBIOS and UEFI firmware.
- Support for direct passthrough of local host disks to VMs.
- Live migration improvements for High Performance guests.
- New Windows guest tools installer based on the WiX framework, now moved to the VirtioWin project.
- Dropped support for cluster levels prior to 4.2.
- Dropped API/SDK v3 support, deprecated in past versions.
- 4K block disk support only for file-based storage. iSCSI/FC storage does not support 4K disks yet.
- You can export a VM to a data domain.
- You can edit floating disks.
- Ansible Runner (ansible-runner) is integrated within the engine, enabling more detailed monitoring of playbooks executed from the engine.
- Adding and reinstalling hosts is now completely based on Ansible, replacing ovirt-host-deploy, which is not used anymore.
- The OpenStack Neutron Agent cannot be configured by oVirt anymore; it should be configured by TripleO instead.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)

See the release notes [1] for installation instructions and a list of new features and bugs fixed.

If you manage more than one oVirt instance, OKD or RDO, we also recommend trying ManageIQ <http://manageiq.org/>. In such a case, please be sure to take the qc2 image and not the ova image.

Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8

Additional Resources:
* Read more about the oVirt 4.4.0 release highlights: http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/

--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com

*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.*

Thanks and well done everyone. No virtual release party?

Regards,
Paul S.

________________________________
From: Sandro Bonazzola <sbonazzo@redhat.com>
Sent: 20 May 2020 13:54
To: users <users@ovirt.org>; oVirt development list <devel@ovirt.org>; infra <infra@ovirt.org>
Subject: [ovirt-users] oVirt 4.4.0 Release is now generally available

[snip]

My enthusiasm for CentOS8 is limited. My enthusiasm for a hard migration even more so. So how much time do I have before 4.3 becomes inoperable?

On Wed, May 20, 2020 at 16:33 <thomas@hoberg.net> wrote:
My enthusiasm for CentOS8 is limited. My enthusiasm for a hard migration even more so. So how much time do I have before 4.3 becomes inoperable?
oVirt 4.3.10 is approaching GA and we expect 4.3.11 to be released too before declaring 4.3 end of life. After that, 4.3 should keep working until CentOS 7 or any other repo on the system breaks it with some incompatible change.

I totally understand system administrators' point of view and how difficult it is to find a good maintenance window for a busy production environment: ensuring backups are recent enough, checking that new requirements are met, giving it a try on a test environment if one is available, and so on. That said, I would really encourage starting to plan a maintenance window for upgrading to 4.4 as soon as practical. It will be easier to help with the upgrade from 4.3 now than two years from now, when 4.3 may be broken (or new replacement hardware will be missing drivers on CentOS 7) and there won't be any additional release for fixing upgrade incompatibilities.

--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com

*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.*

On Wed, May 20, 2020 11:19 am, Sandro Bonazzola wrote:
On Wed, May 20, 2020 at 16:33 <thomas@hoberg.net> wrote:
My enthusiasm for CentOS8 is limited. My enthusiasm for a hard migration even more so. So how much time do I have before 4.3 becomes inoperable?
oVirt 4.3.10 is approaching GA and we expect 4.3.11 to be released too before declaring 4.3 end of life. After that, 4.3 should keep working until CentOS 7 or any other repo on the system breaks it with some incompatible change. I totally understand system administrators' point of view and how difficult it is to find a good maintenance window for a busy production environment: ensuring backups are recent enough, checking that new requirements are met, giving it a try on a test environment if one is available, and so on. That said, I would really encourage starting to plan a maintenance window for upgrading to 4.4 as soon as practical. It will be easier to help with the upgrade from 4.3 now than two years from now, when 4.3 may be broken (or new replacement hardware will be missing drivers on CentOS 7) and there won't be any additional release for fixing upgrade incompatibilities.
I can't speak for other people, but the lack of "ovirt-shell" for 4.4 is a deal-breaker for me to upgrade at this time, and probably for the foreseeable future. I've been working on migrating my mail server for 3 years now and still haven't finished that; migrating ovirt to a new platform that requires new startup support?? Haha.

Granted, I suspect SOME of the reasons I have this script might be implemented in 4.4 (e.g. auto-start of VMs). However, my understanding of the auto-start feature is that it's really an auto-restart -- it will restart a VM that was running if the datacenter crashes, but if I shut it down manually and then "reboot" the cluster, those VMs won't come back automatically. As I am on a single-host system, I need it to start from a clean shutdown and bring up all the VMs, in addition to dealing with power-outage reboots.

I work from the "if it ain't broke, don't fix it" camp. So I think I'm going to stick with 4.3 until I can't anymore.

I am happy to share my startup script if someone else wants to port it to work with 4.4. :-)

-derek
--
Derek Atkins 617-623-3745
derek@ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant

On Wed, May 20, 2020 at 18:15 Derek Atkins <derek@ihtfp.com> wrote: [snip] I am happy to share my startup script if someone else wants to port it to
work with 4.4. :-)
-derek
Interesting. yes, please. We could try to convert to python or through ansible and/or leverage already existing roles/modules. Gianluca

Hi, On Wed, May 20, 2020 12:28 pm, Gianluca Cecchi wrote:
On Wed, May 20, 2020 at 18:15 Derek Atkins <derek@ihtfp.com> wrote:
[snip]
I am happy to share my startup script if someone else wants to port it to
work with 4.4. :-)
-derek
Interesting. yes, please. We could try to convert to python or through ansible and/or leverage already existing roles/modules.
Gianluca
Sure,

I cannot attach the script because it will get blocked by the mailer, so I'll just copy-and-paste it below (which of course means that it'll be line-wrapped, which might break it, but you'll at least see what it's doing). The script does have some embedded assumptions about my system (like the number of storage domains to look for).

It's broken into two parts, the script itself (start_vms.sh) and a sysconfig script that says what VMs to start. I run start_vms.sh from /etc/rc.d/rc.local:

/usr/local/sbin/start_vms.sh > /var/log/start_vms 2>&1 &

The /etc/sysconfig/vm_list file looks like:

default_timeout=10

# Ordered list of VMs
declare -a vm_list=( first-vm second-vm )

# Timeout override (otherwise use default_timeout)
declare -A vm_timeout=( [first-vm]=30 )

The start_vms.sh script itself:

#!/bin/bash

[ -f /etc/sysconfig/vm_list ] || exit 0
. /etc/sysconfig/vm_list

echo -n "Starting at "
date

# Wait for the engine to respond
while [ `ovirt-shell -I -c -F -T 50 -E ping 2>/dev/null | grep -c success` != 1 ]
do
  echo "Not ready... Sleeping..."
  sleep 60
done

# Now wait for the storage domain to appear active
echo -n "Engine up.  Searching for disks at " ; date

# The 4.3.x engine keeps stale data, so let's wait for it to update
# to the correct state before we start looking for storage domains
sleep 60

total_disks=`ovirt-shell -I -c -E summary | grep storage_domains-total | sed -e 's/.*: //'`
# subtract one because we know we're not using the image-repository
total_disks=`expr $total_disks - 1`
active_disks=`ovirt-shell -I -c -E summary | grep storage_domains-active | sed -e 's/.*: //'`

while [ $active_disks -lt $total_disks ]
do
  echo "Storage Domains not active yet. Only found $active_disks/$total_disks. Waiting..."
  sleep 60
  active_disks=`ovirt-shell -I -c -E summary | grep storage_domains-active | sed -e 's/.*: //'`
done

# Now wait for the data center to show up
echo -n "All storage mounted.  Waiting for datacenter to be up at "
date

while [ `ovirt-shell -I -c -E 'show datacenter Default' | grep status-state | sed -e 's/.*: //'` != 'up' ]
do
  echo "Not ready... Sleeping..."
  sleep 60
done

# Now start all of the VMs in the requested order.
echo -n "Datacenter up.  Starting VMs at "; date

for vm in "${vm_list[@]}"
do
  timeout=${vm_timeout[$vm]:-$default_timeout}
  ovirt-shell -I -c -E "action vm $vm start"
  sleep "$timeout"
done

Enjoy!

-derek
--
Derek Atkins 617-623-3745
derek@ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant

On Wed, May 20, 2020 at 7:05 PM Derek Atkins <derek@ihtfp.com> wrote:
Hi,
On Wed, May 20, 2020 12:28 pm, Gianluca Cecchi wrote:
On Wed, May 20, 2020 at 18:15 Derek Atkins <derek@ihtfp.com> wrote:
[snip]
I am happy to share my startup script if someone else wants to port it to
work with 4.4. :-)
-derek
Interesting. yes, please. We could try to convert to python or through ansible and/or leverage already existing roles/modules.
Gianluca
Sure,
I cannot attach the script because it will get blocked by the mailer, so I'll just copy-and-paste it below (which of course means that it'll be line-wrapped, which might break it but you'll at least see what it's doing).
The script does have some embedded assumptions about my system (like the number of storage domains to look for).
[snip]
Enjoy!
-derek
Hi Derek, today I played around with Ansible to accomplish, I think, what you currently do in oVirt shell. It was the occasion to learn, as always, something new: as "blocks" in Ansible don't support looping, a workaround to get that. Furthermore I have a single-host environment where it can turn out useful too...

The script is supposed to be run on the engine, as you already do now. There is a main script, startup.yml, that is the playbook expected to be run in a way such as:

/usr/bin/ansible-playbook --vault-password-file=pwfile startup.yml

Some explanations below.

On the engine you need the ansible and ovirt-engine-sdk-python packages, which I think are installed by default. The script can be executed with a non-root user.

In your scripts I see that you don't explicitly set a timeout. In my Ansible tasks where I have to wait (connection to engine, storage domains up and DC up), I set a delay of 10 seconds and 90 retries for each of them. So for each task where there is an "until" condition, the job can wait 15 minutes before failing. You can adjust the values of delay and/or retries.

To accomplish the loop over VMs to start them up and set their custom timeout before starting the next one, I have an include of a task file: vm_block.yml

Inside the directory where you put the two yml files, you will also have a vars directory, where you put your customized var files. In my example I have encrypted two of them with the ansible-vault command and then put the password inside a file named pwfile (that has to simply contain the password). Otherwise you can create all the var files in clear and call the playbook with:

/usr/bin/ansible-playbook startup.yml

The var files:
1) ovirt_credentials.yml
2) db_credentials.yml
3) vm_vars.yml

1) contains oVirt credentials. In clear it is something like:

---
ovirt_username: "admin@internal"
ovirt_password: "admin_password"
ovirt_ca: "my_ovirt.pem"
url_name: engine_hostname
...

2) contains engine db credentials. This is needed because I didn't find an oVirt Ansible module or Python API that gives you the status of storage domains. It would be nice for the ovirt_storage_domain_info Ansible module to have it.... So I use a connection to the engine db and make a select against the storage_domains table, where the status "up" is expressed by the status column having a value of 3.

BTW: you find the engine db credentials for your engine inside the file /etc/ovirt-engine/engine.conf.d/10-setup-database.conf

Contents of the var file (you only have to update the password, randomly generated during the first engine-setup):

---
dbserver: localhost
db: engine
username: engine
password: "TLj63Wvw2yKyDPjlJ9fEku"
...

3) contains variables for your ordered VM names and timeouts, together with the default timeout. I have not encrypted this file, so its contents in my test case are:

---
default_timeout: 10
vms:
  - name: f30
    timeout: 30
  - name: slax
...

At the end, in the directory where you set up your files/directories you will have something like this:

-rwx------. 1 g.cecchi g.cecchi 1343 May 24 12:19 my_ovirt.pem
-rw-------. 1 g.cecchi g.cecchi    6 May 24 12:18 pwfile
-rwx------. 1 g.cecchi g.cecchi 2312 May 25 00:38 startup.yml
-rwx------. 1 g.cecchi g.cecchi  225 May 24 21:00 vm_block.yml

vars/:
total 12
-rw-------. 1 g.cecchi g.cecchi 679 May 25 00:14 db_credentials.yml
-rwx------. 1 g.cecchi g.cecchi 808 May 24 21:02 ovirt_credentials.yml
-rw-rw-r--. 1 g.cecchi g.cecchi  78 May 24 20:37 vm_vars.yml

You can find here:
startup.yml
https://drive.google.com/file/d/19F16TAAfneYbMnAuUE-BlTI-VNf8l4Yx/view?usp=s...
vm_block.yml
https://drive.google.com/file/d/1tHCcP5pBlkjeixIF8GGdhGRNaNBGhwbR/view?usp=s...

NOTE: in my case I have two storage domains unattached, the oVirt image one and an export one. So inside startup.yml you find:

expected_sd_up: "{{ ( result_sd.ovirt_storage_domains | length ) - 2 }}"

Change "- 2" to "- 1" for your environment.

Executing the job in rc.local, you would get something like this where you redirect output:

[g.cecchi@ovirt ~]$ ansible-playbook --vault-password-file=pwfile startup.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [Play to start oVirt VMs] *************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Obtain SSO token using username/password credentials] *******************
FAILED - RETRYING: Obtain SSO token using username/password credentials (90 retries left).
....
ok: [localhost]

TASK [Sleep for 60 seconds] ****************************************************
Pausing for 60 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [localhost]

TASK [Get storage domains] *****************************************************
ok: [localhost]

TASK [Set storage domains to be active] ****************************************
ok: [localhost]

TASK [Print expected storage domains to be up] *********************************
ok: [localhost] => { "msg": "Expected storage domains to be up: 3" }

TASK [Query storage_domains table] *********************************************
[WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: sd_up.query_result[0].up_sd == {{ expected_sd_up }}
FAILED - RETRYING: Query storage_domains table (90 retries left).
...
ok: [localhost]

TASK [Print storage domains that are up] ***************************************
ok: [localhost] => { "msg": "Number of storage domains up: 3" }

TASK [Verify DC is up] *********************************************************
FAILED - RETRYING: Verify DC is up (90 retries left).
...
ok: [localhost]

TASK [External block to start VMs] *********************************************
included: /home/g.cecchi/vm_block.yml for localhost
included: /home/g.cecchi/vm_block.yml for localhost

TASK [Start VM f30] ************************************************************
changed: [localhost]

TASK [Pause before starting next VM] *******************************************
ok: [localhost]

TASK [Start VM slax] ***********************************************************
changed: [localhost]

TASK [Pause before starting next VM] *******************************************
ok: [localhost]

TASK [Revoke SSO token] ********************************************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=15 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Cheers,
Gianluca
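For readers who want to see the storage-domain check Gianluca describes outside of Ansible, here is a rough Python sketch of the same query against the engine database. It assumes psycopg2 is available on the engine; the password is a placeholder (the real values live in /etc/ovirt-engine/engine.conf.d/10-setup-database.conf, as noted above).

import psycopg2

# Same check as the "Query storage_domains table" task above: count storage
# domains whose status column is 3 ("up") in the engine database.
# Host/db/user are the defaults mentioned above; the password is a placeholder.
conn = psycopg2.connect(host="localhost", dbname="engine",
                        user="engine", password="CHANGE_ME")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM storage_domains WHERE status = 3")
        up_sd = cur.fetchone()[0]
        print("Storage domains up: %d" % up_sd)
finally:
    conn.close()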

Hi, (Sorry if you get this twice -- looks like it didn't like the python script in there so I'm resending without the code) Gianluca Cecchi <gianluca.cecchi@gmail.com> writes:
Hi Derek, today I played around with Ansible to accomplish, I think, what you currently do in oVirt shell. It was the occasion to learn, as always, something new: as "blocks" in Ansible don't support looping, a workaround to get that. Furthermore I have a single-host environment where it can turn out useful too... [snip]
I found the time to work on this using the Python SDK. Took me longer than I wanted, but I think I've got something working now. I just haven't done a FULL test yet, but a runtime test on the online system works (I commented out the start call).

I still have two files: a vm_list.py, which is a config file that contains the list of VMs, in order, and then the main program itself (start_vms.py), which is based on several of the examples available on GitHub.

Unfortunately I can't seem to send the script in email because it's getting blocked by the redhat server -- so I have no idea of the best way to share it.

-derek
--
Derek Atkins 617-623-3745
derek@ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant

Thanks Derek,

GitHub or GitLab probably.

Regards,
Paul S.

________________________________
From: Derek Atkins <derek@ihtfp.com>
Sent: 27 May 2020 15:50
To: Gianluca Cecchi <gianluca.cecchi@gmail.com>
Cc: thomas@hoberg.net <thomas@hoberg.net>; users <users@ovirt.org>
Subject: [ovirt-users] AutoStart VMs (was Re: Re: oVirt 4.4.0 Release is now generally available)

[snip]

On Wed, May 27, 2020 at 4:50 PM Derek Atkins <derek@ihtfp.com> wrote:
Hi,
(Sorry if you get this twice -- looks like it didn't like the python script in there so I'm resending without the code)
Gianluca Cecchi <gianluca.cecchi@gmail.com> writes:
Hi Derek, today I played around with Ansible to accomplish, I think, what you currently do in oVirt shell. It was the occasion to learn, as always, something new: as "blocks" in Ansible don't support looping, a workaround to get that. Furthermore I have a single-host environment where it can turn out useful too... [snip]
I found the time to work on this using the Python SDK. Took me longer than I wanted, but I think I've got something working now. I just haven't done a FULL test yet, but a runtime test on the online system works (I commented out the start call).
But you hated Python, didn't you? ;-)

I downloaded your files, even if I'm far from knowing python.... try the ansible playbook that gives you more flexibility in my opinion

Gianluca

Hi, On Wed, May 27, 2020 5:38 pm, Gianluca Cecchi wrote: [snip]
But you hated Python, didn't you? ;-)
I do. Can't stand it. Doesn't mean I can't read it and/or write it, but I have to hold my nose doing it. Syntactic white space? Eww. But Python is already installed and used and, apparently, supported.. And when I looked at the examples I found that 90% of what I needed to do was already implemented, so it turned out to be much easier than expected.
I downloaded your files, even if I'm far from knowing python....
It's pretty much a direct translation of my bash script around ovirt-shell. It does have one feature that the old code didn't, which is the ability to wait for ovirt to declare that a vm is actually "up".
try the ansible playbook that gives you more flexibility in my opinion
I've never even installed ansible, let alone tried to use it. I don't need flexibility, I need the job to get done. But I'll take a look when I get the chance. Thanks!
Gianluca
-derek

PS: you (meaning whomever is "in charge") are welcome to add my script(s) to the examples repo if you feel other people would benefit from seeing it there.
--
Derek Atkins 617-623-3745
derek@ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant

Hi Derek, I also don't like Python (and I prefer Salt instead of Ansible), but Ansible is the wiser option /personal opinion/. My reasons: the API changes, so your code will eventually die. With Ansible, a lot of people use it and there is a high chance that someone updates the Ansible module so that it will do the job even after the API changes. Also, Ansible is declarative, while Python will need more effort.

Best Regards,
Strahil Nikolov

On 28 May 2020 at 4:59:16 GMT+03:00, Derek Atkins <derek@ihtfp.com> wrote:
Hi,
On Wed, May 27, 2020 5:38 pm, Gianluca Cecchi wrote: [snip]
But you hated Python, didn't you? ;-)
I do. Can't stand it. Doesn't mean I can't read it and/or write it, but I have to hold my nose doing it. Syntactic white space? Eww. But Python is already installed and used and, apparently, supported.. And when I looked at the examples I found that 90% of what I needed to do was already implemented, so it turned out to be much easier than expected.
I downloaded your files, even if I'm far from knowing python....
It's pretty much a direct translation of my bash script around ovirt-shell. It does have one feature that the old code didn't, which is the ability to wait for ovirt to declare that a vm is actually "up".
try the ansible playbook that gives you more flexibility in my opinion
I've never even installed ansible, let alone tried to use it. I don't need flexibility, I need the job to get done. But I'll take a look when I get the chance. Thanks!
Gianluca
-derek
PS: you (meaning whomever is "in charge") are welcome to add my script(s) to the examples repo if you feel other people would benefit from seeing it there.

Hi, Strahil Nikolov <hunter86_bg@yahoo.com> writes:
Hi Derek,
I also don't like Python (and I prefer Salt instead of Ansible), but Ansible is the wiser option /personal opinion/. My reasons: the API changes, so your code will eventually die. With Ansible, a lot of people use it and there is a high chance that someone updates the Ansible module so that it will do the job even after the API changes.
Thank you for your input. Turns out it's probably not an issue right now anyways because my understanding is that there is no "live" upgrade path from 4.3/7.x to 4.4/8.x. My understanding is that the only upgrade path is a re-install. If that's the case, then I suspect it will be a VERY long time until I upgrade, because I'm on a single-host production system so can't stage a reinstall the same way I can stage a "yum upgrade".
Also, Ansible is declarative, while Python will need more effort.
I guess only time will tell ;) There wasn't a significant learning curve to python (as I've already had experience with it, and most of what I needed to do was already in the SDK examples). Ansible is a tool I have never even looked at, let alone tried to use it, so I suspect it would take me more than a couple hours to get it working.
Best Regards, Strahil Nikolov
-derek -- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant

On Thu, May 28, 2020 at 5:01 AM Derek Atkins <derek@ihtfp.com> wrote:
Hi,
On Wed, May 27, 2020 5:38 pm, Gianluca Cecchi wrote: [snip]
But you hated Python, didn't you? ;-)
I do. Can't stand it. Doesn't mean I can't read it and/or write it, but I have to hold my nose doing it. Syntactic white space? Eww. But Python is already installed and used and, apparently, supported.. And when I looked at the examples I found that 90% of what I needed to do was already implemented, so it turned out to be much easier than expected.
Actually there are SDKs for other languages: https://gerrit.ovirt.org/#/admin/projects/?filter=sdk https://github.com/oVirt?q=sdk&type=&language= JS is empty, but the others are more-or-less alive. Python is indeed the most "invested", at least in terms of number of example scripts, but IIUC all of them are generated, so should be complete. Didn't try to use any of them myself, though, other than python.
I downloaded your files, even if I'm far from knowing python....
It's pretty much a direct translation of my bash script around ovirt-shell. It does have one feature that the old code didn't, which is the ability to wait for ovirt to declare that a vm is actually "up".
try the ansible playbook that gives you more flexibility in my opinion
I've never even installed ansible, let alone tried to use it. I don't need flexibility, I need the job to get done. But I'll take a look when I get the chance. Thanks!
Gianluca
-derek
PS: you (meaning whomever is "in charge") are welcome to add my script(s) to the examples repo if you feel other people would benefit from seeing it there.
You are most welcome to push it yourself: https://www.ovirt.org/develop/dev-process/working-with-gerrit.html Thanks! Best regards, -- Didi

On Wed, May 20, 2020 at 7:16 PM Derek Atkins <derek@ihtfp.com> wrote:
On Wed, May 20, 2020 11:19 am, Sandro Bonazzola wrote:
On Wed, May 20, 2020 at 16:33 <thomas@hoberg.net> wrote:
My enthusiasm for CentOS8 is limited. My enthusiasm for a hard migration even more so. So how much time do I have before 4.3 becomes inoperable?
oVirt 4.3.10 is approaching GA and we expect 4.3.11 to be released too before declaring 4.3 end of life. After that, 4.3 should keep working until CentOS 7 or any other repo on the system breaks it with some incompatible change. I totally understand system administrators' point of view and how difficult it is to find a good maintenance window for a busy production environment: ensuring backups are recent enough, checking that new requirements are met, giving it a try on a test environment if one is available, and so on. That said, I would really encourage starting to plan a maintenance window for upgrading to 4.4 as soon as practical. It will be easier to help with the upgrade from 4.3 now than two years from now, when 4.3 may be broken (or new replacement hardware will be missing drivers on CentOS 7) and there won't be any additional release for fixing upgrade incompatibilities.
I can't speak for other people, but the lack of "ovirt-shell" for 4.4 is a deal-breaker for me to upgrade at this time, and probably for the foreseeable future. I've been working on migrating my mail server for 3 years now and still haven't finished that; migrating ovirt to a new platform that requires new startup support?? Haha.
Granted, I suspect SOME of the reasons I have this script might be implemented in 4.4 (e.g. auto-start of VMs). However, my understanding of the auto-start feature is that it's really an auto-restart -- it will restart a VM that was running if the datacenter crashes, but if I shut it down manually and then "reboot" the cluster, those VMs won't come back automatically. As I am on a single-host system, I need it to start from a clean shutdown and bring up all the VMs, in addition to dealing with power-outage reboots.
I work from the "if it ain't broke, don't fix it" camp. So I think I'm going to stick with 4.3 until I can't anymore.
Why not open RFE to add the feature you need? You can use the python SDK to do anything supported by oVirt API. Did you look here? https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples
I am happy to share my startup script if someone else wants to port it to work with 4.4. :-)
-derek
--
Derek Atkins 617-623-3745
derek@ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
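As a quick illustration of the SDK route Nir suggests above, here is a minimal sketch (loosely modeled on the linked examples, not taken from them verbatim) that connects to the engine and lists VMs; the URL, credentials and CA file are placeholders.

import ovirtsdk4 as sdk

# Placeholder engine URL, credentials and CA file; adapt to your setup.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
try:
    # List every VM the engine knows about, with its current status.
    for vm in connection.system_service().vms_service().list():
        print(vm.name, vm.status)
finally:
    connection.close()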

On Wed, May 20, 2020 at 7:46 PM Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, May 20, 2020 at 7:16 PM Derek Atkins <derek@ihtfp.com> wrote:
[snip]
Granted, I suspect SOME of the reasons I have this script might be implemented in 4.4 (e.g. auto-start of VMs). However, my understanding of the auto-start feature is that it's really an auto-restart -- it will restart a VM that was running if the datacenter crashes, but if I shut it down manually and then "reboot" the cluster, those VMs won't come back automatically. As I am on a single-host system, I need it to start from a clean shutdown and bring up all the VMs, in addition to dealing with power-outage reboots.
I work from the "if it aint broke, don't fix it" camp. So I think I'm going to stick with 4.3 until I can't anymore.
Why not open RFE to add the feature you need?
You can use the python SDK to do anything supported by oVirt API. Did you look here? https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples
I am happy to share my startup script if someone else wants to port it to work with 4.4. :-)
-derek
In the meantime, just to better understand your environment: you say that you are in a single-host environment. Can you detail where your engine lives? Is it a server outside the host, or are you in a Self Hosted Engine configuration? And what kind of storage domains do you have: are they NFS served by the server itself, or by Gluster on the host, or by external hosts, or what?

Gianluca

Hi, On Wed, May 20, 2020 3:06 pm, Gianluca Cecchi wrote:
In the meantime, just to better understand your environment: you say that you are in a single-host environment. Can you detail where your engine lives? Is it a server outside the host, or are you in a Self Hosted Engine configuration?
Self-hosted engine.
And what kind of storage domains do you have: are they NFS served by the server itself, or by Gluster on the host, or by external hosts, or what?
NFS served by the host itself. Both Host and Engine are CentOS-based systems with ovirt installed on top of it. Currently running 4.3.8; I plan to upgrade to 4.3.10 (and 7.8) once it goes GA. The start_vms.sh script is, of course, run on the engine, and runs with a user with appropriate privs to start VMs. Thanks!
Gianluca
-derek -- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant

Nir, Nir Soffer <nsoffer@redhat.com> writes:
Why not open RFE to add the feature you need?
I did -- about 3-4 years ago. SOME of them have been implemented, some have been partially implemented, but I am still waiting for ovirt to support the full VM startup functionality that I had in vmware-server from like 2007 (or earlier).

Part of the issue here is that I suspect most ovirt users have multiple hosts and therefore rarely have to worry about how host-system maintenance affects the VMs, and probably live in data centers with redundant power supplies, UPSes, and backup generators. I, on the other hand, have got a single system, so when I need to perform any maintenance I need to take down everything, or if I have a power outage that outlasts my UPS, or... I want the VMs to come back up automatically -- and in a particular order (e.g., I need my DNS and KDC servers to come up before others).

I filed these RFEs during the 4.0 days, which is when I first started using ovirt and put it into deployment.
You can use the python SDK to do anything supported by oVirt API. Did you look here? https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples
I have looked there, but I stopped reading after seeing "python". ;) Frankly I detest python. I think it's an abomination. There are so many other, better languages out there and I don't understand why so many people like it (and worse, force it down everyone else's throats). But I'll step off my soap-box (and get off my lawn!) lol.

Honestly, I already spent the time to build a tool to do what I need. I even had to update the tool going from 4.1 to 4.3 because some startup assumptions changed. I really don't want to spend the time again, time I frankly don't have right now, to re-implement what I've already got. It's easier for me to just stay put on 4.3.x. Yes, I realize that in about 2 years or so I will need to do so. I'll worry about that then.

Of course, since the (partial?) functionality is only in 4.4, I really have no way to test it to make sure it does what I need, or see what I'm missing. I don't have a testbed to play with it, just my one system.

Thanks,
-derek
--
Derek Atkins 617-623-3745
derek@ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant

On May 21, 2020 6:08:19 PM GMT+03:00, Derek Atkins <derek@ihtfp.com> wrote:
Nir,
Nir Soffer <nsoffer@redhat.com> writes:
Why not open RFE to add the feature you need?
I did -- about 3-4 years ago. SOME of them have been implemented, some have been partially implemented, but I am still waiting for ovirt to support the full VM startup functionality that I had in vmware-server from like 2007 (or earlier).
Part of the issue here is that I suspect most ovirt users have multiple hosts and therefore rarely have to worry about how host-system maintenance affects the VMs, and probably live in data centers with redundant power supplies, UPSes, and backup generators.
I, on the other hand, have got a single system, so when I need to perform any maintenance I need to take down everything, or if I have a power outage that outlasts my UPS, or... I want the VMs to come back up automatically -- and in a particular order (e.g., I need my DNS and KDC servers to come up before others).
I filed these RFEs during the 4.0 days, which is when I first started using ovirt and put it into deployment.
You can use the python SDK to do anything supported by oVirt API. Did you look here? https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples
I have looked there, but I stopped reading after seeing "python". ;) Frankly I detest python. I think it's an abomination. There are so many other, better languages out there and I don't understand why so many people like it (and worse, force it down everyone else's throats). But I'll step off my soap-box (and get off my lawn!) lol.
Honestly, I already spent the time to build a tool to do what I need. I even had to update the tool going from 4.1 to 4.3 because some startup assumptions changed. I really don't want to spend the time again, time I frankly don't have right now, to re-implement what I've already got. It's easier for me to just stay put on 4.3.x.
Yes, I realize that in about 2 years or so I will need to do so. I'll worry about that then.
Of course, since the (partial?) functionality is only in 4.4, I really have no way to test it to make sure it does what I need, or see what I'm missing. I don't have a testbed to play with it, just my one system.
Thanks,
-derek
Actually, you can use Ansible and the 'uri' module to communicate with the engine via the API. Most probably the 'uri' module was written in python - but you don't have to deal with python code - just ansible. Also, it's worth checking the Ansible oVirt modules, as they are kept up to date even when the API endpoint changes.

I think it won't be too hard to get a list of the VMs and then create some logic for how to order them for the 'ignition'.

Best Regards,
Strahil Nikolov
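Strahil's 'uri' suggestion boils down to plain REST calls against the engine API; a rough Python equivalent using the requests library could look like the following, with a placeholder engine host and credentials (the CA path is the engine's own CA bundle; adjust as needed).

import requests

# One GET against the REST API, much like an Ansible 'uri' task would issue:
# list the VMs so you can decide on a startup order. Host and credentials
# are placeholders; 'verify' points at the engine CA bundle.
resp = requests.get(
    'https://engine.example.com/ovirt-engine/api/vms',
    auth=('admin@internal', 'password'),
    headers={'Accept': 'application/json'},
    verify='/etc/pki/ovirt-engine/ca.pem',
)
resp.raise_for_status()
for vm in resp.json().get('vm', []):
    print(vm['name'], vm['status'])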

Hi, Strahil Nikolov <hunter86_bg@yahoo.com> writes:
Actually, you can use Ansible and the 'uri' module to communicate with the engine via the API. Most probably the 'uri' module was written in python - but you don't have to deal with python code - just ansible. Also, it's worth checking the Ansible oVirt modules, as they are kept up to date even when the API endpoint changes.
I think it won't be too hard to get a list of the VMs and then create some logic for how to order them for the 'ignition'.
I took a much closer look at the examples yesterday and 2 of the 3 things I need are already there:

1) test_connection.py -- make sure the engine is up
2) [ get list of total and attached storage domains ]
3) start_vm.py -- start a VM (by name, it looks like)

So really it's only #2 that is missing. There is a show_summary.py in there, but that doesn't give me *all* the code I need to piece together (but I suspect it's close to what I need, as I was calling the 'summary' ovirt-shell API to get the info I needed before). I suspect I just need to pull apart the api.summary.storage_domains class to figure out what I need. Clearly there is a 'total', so I just need to figure out 'up', and it looks like I might be able to rewrite my script. Python... EWW.

FTR: I don't think I need to check that the datacenter status is up; I added that in not really understanding the changes between 4.1 and 4.3. The issue is that the storage domain status isn't initialized to 'down' when the engine first comes up, so my script was testing that and seeing all domains up when they really weren't.
-derek -- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant
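For the missing piece (#2 above), a rough sketch with the v4 Python SDK that counts attached storage domains per data center and how many of them are active; the engine URL, credentials, and CA path are placeholders:

# Sketch: count total vs. active storage domains per data center with the
# v4 Python SDK (ovirtsdk4). Engine URL, credentials and CA path are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
system = connection.system_service()
dcs_service = system.data_centers_service()

for dc in dcs_service.list():
    # Attached-domain status (active/maintenance/...) is reported per data center.
    attached = dcs_service.data_center_service(dc.id).storage_domains_service().list()
    active = sum(1 for sd in attached if sd.status == types.StorageDomainStatus.ACTIVE)
    print('%s: %d of %d storage domains active' % (dc.name, active, len(attached)))

connection.close()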

On Fri, May 22, 2020 at 4:17 PM Derek Atkins <derek@ihtfp.com> wrote:
FTR: I don't think I need to check that the datacenter status is up; I added that in not really understanding the changes between 4.1 and 4.3. The issue is that the storage domain status isn't initialized to 'down' when the engine first comes up so my script was testing that and seeing all domains up when they really weren't.
Actually, at least once last week I had a situation where, after a crash of a single-host environment with 4.3.9 and gluster on the host itself, all the gluster storage domains showed as active, but actually only the engine mount point was up and not the other 3 configured. Something like this:
/rhev/data-center/mnt/glusterSD/ovirtst.mydomai.storage:_engine
For the other 3 storage domains I only had the gluster brick active but not the filesystem mounted. In the web admin GUI all the storage domains were marked as active... and as soon as I powered on the first VM the operation went into error, of course, and only at that point were the 3 storage domains marked as down. From the web admin GUI I was then able to activate them and start the VMs. I had no time to investigate further or open a bug for that...
The problem is that, in my opinion, the datacenter was marked as up too, so your check would not be of great meaning. In my opinion you could also crosscheck the number of storage domains against the expected mount points of type /rhev/data-center/mnt/...
Gianluca
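A crude way to run that crosscheck locally on a host might look something like the sketch below; the expected mount count is an assumption for this setup, and NFS and Gluster mounts sit at slightly different depths under the mount root:

# Sketch: verify how many storage-domain mount points are really mounted
# under the vdsm mount root. EXPECTED_MOUNTS is an assumption, adjust it.
import os

MOUNT_ROOT = '/rhev/data-center/mnt'
EXPECTED_MOUNTS = 4   # e.g. engine domain + 3 data domains (assumption)

def active_mounts(root=MOUNT_ROOT):
    found = []
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if os.path.ismount(path):              # NFS-style mounts sit directly here
            found.append(path)
        elif os.path.isdir(path):              # e.g. the glusterSD/ subdirectory
            for sub in os.listdir(path):
                subpath = os.path.join(path, sub)
                if os.path.ismount(subpath):
                    found.append(subpath)
    return found

mounts = active_mounts()
print('mounted:', mounts)
if len(mounts) < EXPECTED_MOUNTS:
    raise SystemExit('only %d of %d expected mount points are present'
                     % (len(mounts), EXPECTED_MOUNTS))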

In order to have a fully operational environment you will need (they depend on each other):
1. Engine is up and healthy
2. Master storage domain is up
3. Datacenter is up
4. A host is selected for SPM
5. The storage domains for the VM are up
6. At least one host in the cluster is up and running (usually that's the node in step 4)
Best Regards, Strahil Nikolov
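A rough sketch of wiring those checks into an ordered startup with the v4 Python SDK follows; the VM names, credentials, and timings are made up, and a stricter version would also include the storage-domain check sketched earlier in the thread:

# Sketch: wait until the environment looks ready, then start VMs in a fixed order.
# All names, credentials and timings are placeholders.
import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

BOOT_ORDER = ['dns1', 'kdc1', 'fileserver1']   # hypothetical VM names, boot first to last

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
system = connection.system_service()

def environment_ready():
    # Engine answers, data centers are up, and at least one host is up.
    # (A stricter check would also verify the storage domains, as sketched above.)
    if not connection.test(raise_exception=False):
        return False
    dcs = system.data_centers_service().list()
    if not dcs or not all(dc.status == types.DataCenterStatus.UP for dc in dcs):
        return False
    return len(system.hosts_service().list(search='status=up')) > 0

while not environment_ready():
    time.sleep(30)

vms_service = system.vms_service()
for name in BOOT_ORDER:
    vm = vms_service.list(search='name=%s' % name)[0]   # assumes the VM exists
    if vm.status != types.VmStatus.UP:
        vms_service.vm_service(vm.id).start()
        time.sleep(60)   # crude settle time; a real tool would poll each VM's status

connection.close()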

On Fri, May 22, 2020 at 6:18 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
5. The storage domains for the VM are up
As I wrote, it happens sometimes that the storage domains are marked as up but actually they are not (the related /rhev/data-center/mnt/... filesystem is not mounted). It has happened to me several times, only when restarting from a crash (not always, though) and only in single-host configurations. Both in NFS config (not actually supported) and in Gluster config (supposed to be supported). Gianluca

Usually, there should be some indicator of why this has happened. Can you check the engine's log when this happens? Best Regards, Strahil Nikolov

Ciao Sandro,
I just tried to re-install a CentOS 7 based HCI cluster because it had moved to another network. From what I can tell it fails because Gobinda Das introduced a hot-fix into 'gluster-ansible-infra/roles/backend_setup/tasks/vdo_create.yml' on April 13th that statically adds a '--maxDiscardSize 16M' option. The only problem: the vdo 6.1 that is part of the CentOS 7 / oVirt 4.3.10 stack doesn't know that option at all; it seems to have been introduced with VDO 6.2, which is CentOS 8 based.
Questions: Should this not be caught by automated testing? Where and how do I need to file a bug report in such a way that it gets to Gobinda or his team?

Adding Gobinda. -- Didi

Maybe we need to check for the VDO version. Will check and update. @Prajith Kesava Prasad <pkesavap@redhat.com> Can you please take a look?
-- Thanks, Gobinda
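The eventual fix presumably belongs in the gluster-ansible role itself, but the shape of such a version gate is simple; here is a sketch (Python used only for illustration) that reads the installed vdo package version via rpm and adds the option only on 6.2 or newer:

# Sketch: only pass --maxDiscardSize when the installed VDO is new enough.
# Reads the 'vdo' RPM version; the 6.2 threshold is taken from the report above.
import subprocess

def vdo_version():
    out = subprocess.check_output(
        ['rpm', '-q', '--queryformat', '%{VERSION}', 'vdo'],
        universal_newlines=True,
    )
    return tuple(int(p) for p in out.split('.') if p.isdigit())

extra_opts = []
if vdo_version() >= (6, 2):
    extra_opts.extend(['--maxDiscardSize', '16M'])
print('extra vdo create options:', extra_opts)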

Dear oVirt users, I was wondering: with the release of 4.4, but with a quite difficult upgrade path (reinstalling the engine and moving all machines to RHEL/CentOS 8), are there any plans to update the Gluster dependencies to version 7 in the ovirt-4.3-dependencies.repo? Or will oVirt 4.3 always be stuck at Gluster version 6? Thanks Olaf

Dear oVirt users, any news on the Gluster support side of oVirt 4.3? With 6.10 possibly being the latest release, it would be nice if there were a known stable upgrade path to Gluster 7, and possibly 8, for the oVirt 4.3 branch. Thanks Olaf

I have been using v7 for quite some time. Best Regards, Strahil Nikolov

Hi Strahil,
Thanks for confirming v7 is working fine with oVirt 4.3; coming from you, that gives quite some confidence. If that's generally the case, it would be nice if the yum repo ovirt-4.3-dependencies.repo could be updated to Gluster v7 in the official repository, e.g.:
[ovirt-4.3-centos-gluster7]
name=CentOS-$releasever - Gluster 7
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-7/
gpgcheck=1
enabled=1
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Storage
Keeping the Gluster support up to date gives at least some users time to plan the upgrade path to oVirt 4.4 while not being stuck on an EOL Gluster.
Thanks Olaf

Hey Olaf, you can add the CentOS Storage SIG repo and patch. Best Regards, Strahil Nikolov

Hi Strahil, it's not really clear how I can submit a pull request to the oVirt repo. I've found this Bugzilla issue for going from v5 to v6: https://bugzilla.redhat.com/show_bug.cgi?id=1718162 with this corresponding commit: https://gerrit.ovirt.org/#/c/100701/ Would the correct route be to file a Bugzilla request for this? Thanks Olaf

Hi Olaf, yes, but mark it as '[RFE]' in the name of the bug. Best Regards, Strahil Nikolov

Hi Strahil, OK, done: https://bugzilla.redhat.com/show_bug.cgi?id=1868393 ; only it didn't allow me to select the most recent 4.3. Thanks Olaf
participants (11)
- Derek Atkins
- Gianluca Cecchi
- Gobinda Das
- Nir Soffer
- Olaf Buitelaar
- olaf.buitelaar@gmail.com
- Sandro Bonazzola
- Staniforth, Paul
- Strahil Nikolov
- thomas@hoberg.net
- Yedidyah Bar David