oVirt 4.4.0 Beta release refresh is now available for testing


On April 3, 2020 5:19:35 PM GMT+03:00, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
oVirt 4.4.0 Beta release refresh is now available for testing
The oVirt Project is excited to announce the availability of the beta release of the oVirt 4.4.0 refresh for testing, as of April 3rd, 2020.
This release unleashes an altogether more powerful and flexible open source virtualization solution that encompasses hundreds of individual changes and a wide range of enhancements across the engine, storage, network, user interface, and analytics on top of oVirt 4.3.
Important notes before you try it
Please note this is a Beta release.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
In particular, please note that upgrades from 4.3, and future upgrades from this beta to the final 4.4 release, are not supported.
Some of the features included in oVirt 4.4.0 Beta require content that will be available in CentOS Linux 8.2, and they can't be tested on RHEL 8.2 beta yet due to an incompatibility in the openvswitch package shipped by the CentOS Virt SIG, which requires rebuilding openvswitch on top of CentOS 8.2.
Known Issues
- ovirt-imageio development is still in progress. In this beta you can't upload images to data domains using the engine web application. You can still copy ISO images into the deprecated ISO domain for installing VMs, and upload and download to/from data domains are fully functional via the REST API and SDK. For uploading and downloading via the SDK, please see:
  - https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_di...
  - https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_...
  Both scripts are standalone command line tools; try --help for more info.
Installation instructions
For the engine: either use appliance or:
- Install CentOS Linux 8 minimal from http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86...
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps 389-ds
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use oVirt Node ISO or:
- Install CentOS Linux 8 from http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-x86... ; select minimal installation
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- Attach the host to engine and let it be deployed.
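The engine-side steps above can be collected into a small script. This is a hedged sketch: only the repository URL and package names come from the announcement; the DRY_RUN wrapper and the -y flags are additions for illustration, and the script defaults to printing the commands rather than executing them.

```shell
#!/bin/sh
# Sketch of the engine installation steps from the announcement.
# DRY_RUN defaults to 1 (print only); set DRY_RUN=0 to really execute.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"   # show the command instead of running it
    else
        "$@"
    fi
}

run dnf install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
run dnf update -y                                            # reboot if needed
run dnf module enable -y javapackages-tools pki-deps 389-ds
run dnf install -y ovirt-engine
run engine-setup                                             # interactive engine configuration
```

Reviewing the printed command list first (the default) is a reasonable precaution on a pre-release repo.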
What’s new in oVirt 4.4.0 Beta?
- Hypervisors based on CentOS Linux 8 (rebuilt from award-winning RHEL 8), for both oVirt Node and standalone CentOS Linux hosts
- Easier network management and configuration flexibility with NetworkManager
- VMs based on a more modern Q35 chipset, with legacy SeaBIOS and UEFI firmware
- Support for direct passthrough of local host disks to VMs
- Live migration improvements for High Performance guests
- New Windows Guest Tools installer based on the WiX framework, now moved to the VirtioWin project
- Dropped support for cluster levels prior to 4.2
- Dropped SDK3 support
- 4K disk support for file-based storage only; iSCSI/FC storage does not support 4K disks yet
- Exporting a VM to a data domain
- Editing of floating disks
- Integration of ansible-runner into the engine, allowing more detailed monitoring of playbooks executed from the engine
- Adding/reinstalling hosts is now completely based on Ansible
- The OpenStack Neutron Agent can no longer be configured by oVirt; it should be configured by TripleO instead
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend trying ManageIQ <http://manageiq.org/>.
In such a case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights: http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
Hey Sandro,
Can you clarify which CPUs will not be supported in 4.4?
Also, does oVirt 4.4 support teaming, or is it still staying with bonding? NetworkManager was mentioned, but it's not very clear.
What is the version of Gluster bundled with 4.4?
Best Regards,
Strahil Nikolov

On Sun, Apr 5, 2020 at 7:32 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Hey Sandro,
Can you clarify which CPUs will not be supported in 4.4 ?
I can give the list of supported CPUs according to the ovirt-engine code:
select fn_db_add_config_value('ServerCPUList',
  '1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; ' ||
  '2:Secure Intel Nehalem Family:vmx,spec_ctrl,ssbd,md_clear,model_Nehalem:Nehalem,+spec-ctrl,+ssbd,+md-clear:x86_64; ' ||
  '3:Intel Westmere Family:aes,vmx,nx,model_Westmere:Westmere:x86_64; ' ||
  '4:Secure Intel Westmere Family:aes,vmx,spec_ctrl,ssbd,md_clear,model_Westmere:Westmere,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64; ' ||
  '5:Intel SandyBridge Family:vmx,nx,model_SandyBridge:SandyBridge:x86_64; ' ||
  '6:Secure Intel SandyBridge Family:vmx,spec_ctrl,ssbd,md_clear,model_SandyBridge:SandyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64; ' ||
  '7:Intel IvyBridge Family:vmx,nx,model_IvyBridge:IvyBridge:x86_64; ' ||
  '8:Secure Intel IvyBridge Family:vmx,spec_ctrl,ssbd,md_clear,model_IvyBridge:IvyBridge,+pcid,+spec-ctrl,+ssbd,+md-clear:x86_64; ' ||
  '9:Intel Haswell Family:vmx,nx,model_Haswell:Haswell:x86_64; ' ||
  '10:Secure Intel Haswell Family:vmx,spec_ctrl,ssbd,md_clear,model_Haswell:Haswell,+spec-ctrl,+ssbd,+md-clear:x86_64; ' ||
  '11:Intel Broadwell Family:vmx,nx,model_Broadwell:Broadwell:x86_64; ' ||
  '12:Secure Intel Broadwell Family:vmx,spec_ctrl,ssbd,md_clear,model_Broadwell:Broadwell,+spec-ctrl,+ssbd,+md-clear:x86_64; ' ||
  '13:Intel Skylake Client Family:vmx,nx,model_Skylake-Client:Skylake-Client:x86_64; ' ||
  '14:Secure Intel Skylake Client Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Client:Skylake-Client,+spec-ctrl,+ssbd,+md-clear:x86_64; ' ||
  '15:Intel Skylake Server Family:vmx,nx,model_Skylake-Server:Skylake-Server:x86_64; ' ||
  '16:Secure Intel Skylake Server Family:vmx,spec_ctrl,ssbd,md_clear,model_Skylake-Server:Skylake-Server,+spec-ctrl,+ssbd,+md-clear:x86_64; ' ||
  '17:Intel Cascadelake Server Family:vmx,model_Cascadelake-Server:Cascadelake-Server,-hle,-rtm,+arch-capabilities:x86_64; ' ||
  '18:Secure Intel Cascadelake Server Family:vmx,md-clear,mds-no,model_Cascadelake-Server:Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities:x86_64; ' ||
  '1:AMD Opteron G4:svm,nx,model_Opteron_G4:Opteron_G4:x86_64; ' ||
  '2:AMD Opteron G5:svm,nx,model_Opteron_G5:Opteron_G5:x86_64; ' ||
  '3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64; ' ||
  '4:Secure AMD EPYC:svm,nx,ibpb,ssbd,model_EPYC:EPYC,+ibpb,+virt-ssbd:x86_64; ' ||
  '1:IBM POWER8:powernv,model_POWER8:POWER8:ppc64; ' ||
  '2:IBM POWER9:powernv,model_POWER9:POWER9:ppc64; ' ||
  '1:IBM z114, z196:sie,model_z196-base:z196-base:s390x; ' ||
  '2:IBM zBC12, zEC12:sie,model_zEC12-base:zEC12-base:s390x; ' ||
  '3:IBM z13s, z13:sie,model_z13-base:z13-base:s390x; ' ||
  '4:IBM z14:sie,model_z14-base:z14-base:s390x;', '4.4');
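For readers scanning the value above: each semicolon-separated entry encodes level:name:required CPU flags:libvirt/QEMU model string:architecture. The snippet below is an illustrative sketch only (not oVirt code) that summarizes a couple of entries copied verbatim from the list; names may contain commas (e.g. "IBM z114, z196"), so fields are split on colons only.

```shell
#!/bin/sh
# Summarize ServerCPUList-style entries: level:name:flags:model:arch,
# separated by ';'. Split on ':' only, since names can contain commas.
summarize_cpu_list() {
    printf '%s\n' "$1" | tr ';' '\n' | awk -F: '
        NF == 5 {
            gsub(/^ +| +$/, "", $1)          # trim stray spaces around the level
            printf "%-4s %-28s arch=%s\n", $1, $2, $5
        }'
}

sample='1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64; 3:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64;'
summarize_cpu_list "$sample"
```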
Also, does oVirt 4.4 support teaming or it is still staying with bonding. Network Manager was mentioned, but it's not very clear.
+Dominik Holler <dholler@redhat.com> can you please reply to this?
What is the version of gluster bundled with 4.4 ?
Latest Gluster 7 as shipped by the CentOS Storage SIG; right now it is 7.4 (https://docs.gluster.org/en/latest/release-notes/7.4/).
Best Regards, Strahil Nikolov
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.

On April 6, 2020 10:47:33 AM GMT+03:00, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Thanks Sandro, for your prompt reply.
When is oVirt 4.4 GA expected? Currently the oVirt 4.4 beta doesn't support migration to 4.4 GA, which is the main reason for my hesitation to switch over.
Sadly, my 4.3 setup is currently having storage issues (I can't activate my storage domains) and I am considering switching to the 4.4 beta or powering off the lab. The main question for me would be the time left until GA.
Best Regards,
Strahil Nikolov

On Mon, Apr 6, 2020 at 7:53 PM Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
The short answer is: 4.4 will GA as soon as it is ready. To elaborate, we still need to finish the work on ovirt-imageio; once it is in, we'll switch to the RC phase. If no critical blockers show up, we'll release 4.4.0 GA in a short loop.

On Mon, Apr 6, 2020 at 9:48 AM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Also, does oVirt 4.4 support teaming or it is still staying with bonding. Network Manager was mentioned, but it's not very clear.
+Dominik Holler <dholler@redhat.com> can you please reply to this?
oVirt stays with bonding. Reasons why oVirt should support teaming are gathered in Bug 1351510 - [RFE] Support using Team devices instead of bond devices: https://bugzilla.redhat.com/show_bug.cgi?id=1351510
participants (3):
- Dominik Holler
- Sandro Bonazzola
- Strahil Nikolov