[ANN] oVirt 4.3.6 is now generally available

The oVirt Project is pleased to announce the general availability of oVirt 4.3.6 as of September 26th, 2019.

This update is the sixth in a series of stabilization updates to the 4.3 series.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)
* oVirt Node 4.3 (available for x86_64 only)

Due to Fedora 28 being now at end of life this release is missing experimental tech preview for x86_64 and s390x architectures for Fedora 28. We are working on Fedora 29 and 30 support and we may re-introduce experimental support for Fedora in next release.

See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]

oVirt Node and Appliance have been updated including:
- oVirt 4.3.6: http://www.ovirt.org/release/4.3.6/
- Wildfly 17.0.1: https://wildfly.org/news/2019/07/07/WildFly-1701-Released/
- Latest CentOS 7.7 updates including:
  - Release for CentOS Linux 7 (1908) on the x86_64 Architecture <https://lists.centos.org/pipermail/centos-announce/2019-September/023405.html>
  - CEBA-2019:2601 CentOS 7 NetworkManager BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023423.html>
  - CEBA-2019:2023 CentOS 7 efivar BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023445.html>
  - CEBA-2019:2614 CentOS 7 firewalld BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023412.html>
  - CEBA-2019:2227 CentOS 7 grubby BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023441.html>
  - CESA-2019:2258 Moderate CentOS 7 http-parser Security Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023439.html>
  - CESA-2019:2600 Important CentOS 7 kernel Security Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023444.html>
  - CEBA-2019:2599 CentOS 7 krb5 BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023420.html>
  - CEBA-2019:2358 CentOS 7 libguestfs BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023421.html>
  - CEBA-2019:2679 CentOS 7 libvirt BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023422.html>
  - CEBA-2019:2501 CentOS 7 rsyslog BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023431.html>
  - CEBA-2019:2355 CentOS 7 selinux-policy BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023432.html>
  - CEBA-2019:2612 CentOS 7 sg3_utils BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023433.html>
  - CEBA-2019:2602 CentOS 7 sos BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023434.html>
  - CEBA-2019:2564 CentOS 7 subscription-manager BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023435.html>
  - CEBA-2019:2356 CentOS 7 systemd BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023436.html>
  - CEBA-2019:2605 CentOS 7 tuned BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023437.html>
  - CEBA-2019:2871 CentOS 7 tzdata BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023450.html>
- latest CentOS Virt and Storage SIG updates:
  - Ansible 2.8.5: https://github.com/ansible/ansible/blob/stable-2.8/changelogs/CHANGELOG-v2.8...
  - Glusterfs 6.5: https://docs.gluster.org/en/latest/release-notes/6.5/
  - QEMU KVM EV 2.12.0-33.1: https://cbs.centos.org/koji/buildinfo?buildID=26484

Given the amount of security fixes provided by this release, upgrade is recommended as soon as practical.

Additional Resources:
* Read more about the oVirt 4.3.6 release highlights: http://www.ovirt.org/release/4.3.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.6/
[2] http://resources.ovirt.org/pub/ovirt-4.3/iso/

--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com <https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

On Thu, Sep 26, 2019 at 5:02 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
The oVirt Project is pleased to announce the general availability of oVirt 4.3.6 as of September 26th, 2019.
This update is the sixth in a series of stabilization updates to the 4.3 series.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)
* oVirt Node 4.3 (available for x86_64 only)
Does this mean that CentOS 7.6 is not supported any more starting from 4.3.6? Given that e.g. 4.3.5 was only supported on CentOS < 7.7, what would be the correct flow for updating the OS and oVirt versions in this case? Both for plain CentOS hosts and the engine... Thanks, Gianluca

On Thu, Sep 26, 2019 at 5:11 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Thu, Sep 26, 2019 at 5:02 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
The oVirt Project is pleased to announce the general availability of oVirt 4.3.6 as of September 26th, 2019.
This update is the sixth in a series of stabilization updates to the 4.3 series.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)
* oVirt Node 4.3 (available for x86_64 only)
Does this mean that CentOS 7.6 is not supported any more starting from 4.3.6? Given that e.g. 4.3.5 was only supported on CentOS < 7.7, what would be the correct flow for updating the OS and oVirt versions in this case? Both for plain CentOS hosts and the engine...
4.3.5 will work with CentOS 7.7 too. https://lists.ovirt.org/archives/list/announce@ovirt.org/thread/DJF37K7TQFTR... says 7.6 or later but < 8
Thanks,
Gianluca
-- Sandro Bonazzola MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV Red Hat EMEA <https://www.redhat.com/> sbonazzo@redhat.com <https://www.redhat.com/>*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

On Thu, Sep 26, 2019 at 5:23 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Does this mean that CentOS 7.6 is not supported any more starting from 4.3.6? Given that e.g. 4.3.5 was only supported on CentOS < 7.7, what would be the correct flow for updating the OS and oVirt versions in this case? Both for plain CentOS hosts and the engine...
4.3.5 will work with CentOS 7.7 too.
https://lists.ovirt.org/archives/list/announce@ovirt.org/thread/DJF37K7TQFTR... says 7.6 or later but < 8
Sure, but at the date of the oVirt 4.3.5 announcement (30/07), CentOS 7.7 was not available yet (17/09)... so that phrase was a bit of speculation... ;-)
In my case I currently have both my separate engine server and 3 plain hosts on CentOS 7.6 + 4.3.5. What would the workflow be? Did you test this scenario, which I think will be very common for users that upgrade often (eg in their test env)?

engine
1) yum update to update the OS. I think the versionlock plugin of oVirt will prevent update of its core parts, correct? I see many oVirt related packages pulled in. See here: https://drive.google.com/file/d/1IvvwfJGgzdn6qkrI7d-WBAShRxUTdC1g/view?usp=s...
versionlock prevented:

[g.cecchi@ovmgr1 ~]$ sudo yum versionlock status
Loaded plugins: fastestmirror, langpacks, versionlock
Repository centos-sclo-rh-release is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: centos.mirror.garr.it
 * epel-util: epel.besthosting.ua
 * extras: centos.mirror.garr.it
 * ovirt-4.3: ftp.nluug.nl
 * ovirt-4.3-epel: epel.besthosting.ua
 * updates: centos.mirror.garr.it
0:ovirt-engine-webadmin-portal-4.3.6.6-1.el7.*
0:ovirt-engine-ui-extensions-1.0.10-1.el7.*
0:ovirt-engine-dwh-4.3.6-1.el7.*
0:ovirt-engine-tools-backup-4.3.6.6-1.el7.*
0:ovirt-engine-restapi-4.3.6.6-1.el7.*
0:ovirt-engine-dbscripts-4.3.6.6-1.el7.*
0:ovirt-engine-4.3.6.6-1.el7.*
0:ovirt-engine-backend-4.3.6.6-1.el7.*
0:ovirt-engine-wildfly-17.0.1-1.el7.*
0:ovirt-engine-wildfly-overlay-17.0.1-1.el7.*
0:ovirt-engine-tools-4.3.6.6-1.el7.*
versionlock status done
[g.cecchi@ovmgr1 ~]$

2) reboot
3) update oVirt. NOTE: probably no need to update the setup packages, because they were pulled in during the previous update phase, correct?
4) possibly yum update again to see if any packages come in due to the new repo configuration
5) reboot of engine

hosts
6) put into maintenance
7) simply yum update, which will update CentOS packages + oVirt ones (vdsm and such..)

Do you agree or do you have an alternative path?
Thanks,
Gianluca

On Fri, Sep 27, 2019 at 12:15 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Thu, Sep 26, 2019 at 5:23 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Does this mean that CentOS 7.6 is not supported any more starting from 4.3.6? Given that e.g. 4.3.5 was only supported on CentOS < 7.7, what would be the correct flow for updating the OS and oVirt versions in this case? Both for plain CentOS hosts and the engine...
4.3.5 will work with CentOS 7.7 too.
https://lists.ovirt.org/archives/list/announce@ovirt.org/thread/DJF37K7TQFTR... says 7.6 or later but < 8
Sure, but at date of announce of oVirt 4.3.5 (30/07), CentOS 7.7 was not available yet (17/09)... so that phrase was a bit speculation... ;-)
In my case I currently have both my separate engine server and 3 plain hosts on CentOS 7.6 + 4.3.5. What would the workflow be? Did you test this scenario, which I think will be very common for users that upgrade often (eg in their test env)?
engine 1) yum update to update the OS. I think the versionlock plugin of oVirt will prevent update of its core parts, correct?
correct, versionlock will prevent core oVirt packages from being updated.
I see many ovirt related packages put in. See here:
https://drive.google.com/file/d/1IvvwfJGgzdn6qkrI7d-WBAShRxUTdC1g/view?usp=s...
versionlock prevented:
[g.cecchi@ovmgr1 ~]$ sudo yum versionlock status
Loaded plugins: fastestmirror, langpacks, versionlock
Repository centos-sclo-rh-release is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: centos.mirror.garr.it
 * epel-util: epel.besthosting.ua
 * extras: centos.mirror.garr.it
 * ovirt-4.3: ftp.nluug.nl
 * ovirt-4.3-epel: epel.besthosting.ua
 * updates: centos.mirror.garr.it
0:ovirt-engine-webadmin-portal-4.3.6.6-1.el7.*
0:ovirt-engine-ui-extensions-1.0.10-1.el7.*
0:ovirt-engine-dwh-4.3.6-1.el7.*
0:ovirt-engine-tools-backup-4.3.6.6-1.el7.*
0:ovirt-engine-restapi-4.3.6.6-1.el7.*
0:ovirt-engine-dbscripts-4.3.6.6-1.el7.*
0:ovirt-engine-4.3.6.6-1.el7.*
0:ovirt-engine-backend-4.3.6.6-1.el7.*
0:ovirt-engine-wildfly-17.0.1-1.el7.*
0:ovirt-engine-wildfly-overlay-17.0.1-1.el7.*
0:ovirt-engine-tools-4.3.6.6-1.el7.*
versionlock status done
[g.cecchi@ovmgr1 ~]$
2) reboot
3) update oVirt. NOTE: probably no need to update the setup packages, because they were pulled in during the previous update phase, correct?
correct, the setup packages will already have been updated in the previous step; just run engine-setup here
4) possibly yum update again to see if any packages come in due to the new repo configuration
shouldn't be needed but no harm in doing it.
5) reboot of engine
the engine will already have been restarted by engine-setup. If there are no new kernel-level updates there is no need to reboot again.
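Pulling engine steps 1-5 together as commands, a minimal sketch along the lines of the standard 4.3 upgrade flow (adjust repos and package globs to your environment):

  yum update                     # OS update; versionlock keeps the ovirt-engine core RPMs at the running version
  reboot                         # only needed if a new kernel came in
  yum update "ovirt-*-setup*"    # make sure the setup packages are current (usually already done by the step above)
  engine-setup                   # upgrades the locked engine RPMs and the database schema in one go
  yum update                     # pick up anything newly available after the engine upgrade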
hosts 6) put into maintenance 7) simply yum update that will update CentOS packages + oVirt ones (vdsm and such..)
Please use the engine to upgrade hosts; there's a command in the webadmin interface for that. It's *a bit* outdated, but still valid: https://ovirt.org/documentation/upgrade-guide/upgrade-guide.html
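For anyone scripting this rather than clicking through webadmin, the same upgrade command should also be reachable as a REST action on the host; a rough sketch, with engine URL, host ID and credentials as placeholders:

  # triggers the same flow as webadmin's Installation > Upgrade (package update plus reboot of the host)
  curl -k -u admin@internal:PASSWORD \
       -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
       -X POST -d '<action/>' \
       https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/upgrade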
Do you agree or have alternative path? Thanks, Gianluca
-- Sandro Bonazzola MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV Red Hat EMEA <https://www.redhat.com/> sbonazzo@redhat.com <https://www.redhat.com/>*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

On Fri, September 27, 2019 6:41 am, Sandro Bonazzola wrote: [snip]
hosts 6) put into maintenance 7) simply yum update that will update CentOS packages + oVirt ones (vdsm and such..)
Please use the engine to upgrade hosts, there's a command in webadmin interface for that.
I didn't think you could do this in a single-host hosted-engine system? In such a deployment the engine has nowhere to migrate to, so it requires shutting down the whole "data center" in order to upgrade the host. I didn't think that could be done via the engine? Personally, I still need to upgrade from 4.1.9 / CentOS 7.4!
It's *a bit* outdated, but still valid: https://ovirt.org/documentation/upgrade-guide/upgrade-guide.html
-derek -- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant

On Fri, Sep 27, 2019 at 12:55 PM Derek Atkins <derek@ihtfp.com> wrote:
On Fri, September 27, 2019 6:41 am, Sandro Bonazzola wrote: [snip]
hosts 6) put into maintenance 7) simply yum update that will update CentOS packages + oVirt ones (vdsm and such..)
Please use the engine to upgrade hosts, there's a command in webadmin interface for that.
I didn't think you could do this in a single-host hosted-engine system? In such a deployment the engine has nowhere to migrate to, so it requires shutting down the whole "data center" in order to upgrade the host. I didn't think that could be done via the engine?
Personally, I still need to upgrade from 4.1.9 / CentOS 7.4!
A single-host self-hosted engine will require more work. You'll need to put the host in global maintenance, turn off the engine, yum upgrade the host and reboot. Then exit global maintenance and the engine VM should come back up and running within a few minutes.
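Spelled out as commands on the host, a sketch of that flow (double-check against the hosted-engine documentation for your version):

  hosted-engine --set-maintenance --mode=global   # keep the HA agents from restarting the engine VM
  hosted-engine --vm-shutdown                     # cleanly stop the engine VM
  yum update                                      # upgrade the host (CentOS + oVirt packages)
  reboot
  hosted-engine --set-maintenance --mode=none     # leave global maintenance
  hosted-engine --vm-status                       # within a few minutes the engine VM should be back up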
It's *a bit* outdated, but still valid: https://ovirt.org/documentation/upgrade-guide/upgrade-guide.html
-derek
-- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant
-- Sandro Bonazzola MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV Red Hat EMEA <https://www.redhat.com/> sbonazzo@redhat.com <https://www.redhat.com/>*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

Hi, On Fri, September 27, 2019 7:23 am, Sandro Bonazzola wrote:
On Fri, Sep 27, 2019 at 12:55 PM Derek Atkins <derek@ihtfp.com> wrote:
Please use the engine to upgrade hosts, there's a command in webadmin interface for that.
I didn't think you could do this in a single-host hosted-engine system? In such a deployment the engine has nowhere to migrate to, so it requires shutting down the whole "data center" in order to upgrade the host. I didn't think that could be done via the engine?
Personally, I still need to upgrade from 4.1.9 / CentOS 7.4!
Single host self hosted engine will require more work. You'll need to put the host in global maintenance, turn off the engine, yum upgrade the host and reboot. Then get out of global maintenance and engine VM should get back up and running in a few minutes.
Yeah, this is how I've done it in the past.

I'm curious what the steps should be going from 4.1.9 / EL7.4 to 4.3.x / EL7.7? I am pretty sure I need some steps along the way (I doubt I can jump directly from 4.1.9 -> 4.3.x and 7.4 -> 7.7, right).
So should I jump from 7.4/4.1.9 to 7.6/4.2.8 and then from there to 7.7/4.3.6?

Thanks,
-derek

--
Derek Atkins 617-623-3745
derek@ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant

On Fri, Sep 27, 2019 at 5:35 PM Derek Atkins <derek@ihtfp.com> wrote:
Hi,
On Fri, September 27, 2019 7:23 am, Sandro Bonazzola wrote:
On Fri, Sep 27, 2019 at 12:55 PM Derek Atkins <derek@ihtfp.com> wrote:
Please use the engine to upgrade hosts, there's a command in webadmin interface for that.
I didn't think you could do this in a single-host hosted-engine system? In such a deployment the engine has nowhere to migrate to, so it requires shutting down the whole "data center" in order to upgrade the host. I didn't think that could be done via the engine?
Personally, I still need to upgrade from 4.1.9 / CentOS 7.4!
Single host self hosted engine will require more work. You'll need to put the host in global maintenance, turn off the engine, yum upgrade the host and reboot. Then get out of global maintenance and engine VM should get back up and running in a few minutes.
Yeah, this is how I've done it in the past.
I'm curious what the steps should be going from 4.1.9 / EL7.4 to 4.3.x / EL7.7? I am pretty sure I need some steps along the way (I doubt I can jump directly from 4.1.9 -> 4.3.x and 7.4 -> 7.7, right).
So should I jump from 7.4/4.1.9 to 7.6/4.2.8 and then from there to 7.7/4.3.6?
4.1 cluster level is still supported by 4.3 engine. So you can upgrade the engine from 7.4/4.1.9 to 7.6/4.2.8 and then to 7.7/4.3.6 while on the host side you can go straight to 4.3.6/7.7. Once done, please update cluster level to 4.3.
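If you would rather script the final cluster compatibility bump than do it from webadmin (Edit Cluster), it can presumably be done with a single REST update; a hedged sketch with placeholder cluster ID and credentials:

  curl -k -u admin@internal:PASSWORD \
       -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
       -X PUT -d '<cluster><version><major>4</major><minor>3</minor></version></cluster>' \
       https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID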
Thanks,
-derek
-- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant
-- Sandro Bonazzola MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV Red Hat EMEA <https://www.redhat.com/> sbonazzo@redhat.com <https://www.redhat.com/>*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

On Fri, September 27, 2019 11:46 am, Sandro Bonazzola wrote: [snip]
I'm curious what the steps should be going from 4.1.9 / EL7.4 to 4.3.x / EL7.7? I am pretty sure I need some steps along the way (I doubt I can jump directly from 4.1.9 -> 4.3.x and 7.4 -> 7.7, right).
So should I jump from 7.4/4.1.9 to 7.6/4.2.8 and then from there to 7.7/4.3.6?
4.1 cluster level is still supported by 4.3 engine. So you can upgrade the engine from 7.4/4.1.9 to 7.6/4.2.8 and then to 7.7/4.3.6 while on the host side you can go straight to 4.3.6/7.7. Once done, please update cluster level to 4.3.
Excellent, I can do that. I just need to ensure that the cluster settings are fully upgraded from 4.0 to 4.1.

One final question: I know that ovirt-shell is deprecated, but is it still available in 4.3.x?

Thanks for all your support!
-derek

--
Derek Atkins 617-623-3745
derek@ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant

On Fri, Sep 27, 2019 at 5:54 PM Derek Atkins <derek@ihtfp.com> wrote:
On Fri, September 27, 2019 11:46 am, Sandro Bonazzola wrote: [snip]
I'm curious what the steps should be going from 4.1.9 / EL7.4 to 4.3.x / EL7.7? I am pretty sure I need some steps along the way (I doubt I can jump directly from 4.1.9 -> 4.3.x and 7.4 -> 7.7, right).
So should I jump from 7.4/4.1.9 to 7.6/4.2.8 and then from there to 7.7/4.3.6?
4.1 cluster level is still supported by 4.3 engine. So you can upgrade the engine from 7.4/4.1.9 to 7.6/4.2.8 and then to 7.7/4.3.6 while on the host side you can go straight to 4.3.6/7.7. Once done, please update cluster level to 4.3.
Excellent, I can do that. I just need to ensure that the cluster settings are fully upgraded from 4.0 to 4.1.
One final question: I know that ovirt-shell is deprecated, but is it still available in 4.3.x?
Yes, it's still available. It will be dropped in 4.4.
Thanks for all your support!
-derek -- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant
-- Sandro Bonazzola MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV Red Hat EMEA <https://www.redhat.com/> sbonazzo@redhat.com <https://www.redhat.com/>*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

September 30, 2019 2:23 PM, "Sandro Bonazzola" <sbonazzo@redhat.com> wrote:
On Fri, Sep 27, 2019 at 5:54 PM Derek Atkins <derek@ihtfp.com> wrote:
On Fri, September 27, 2019 11:46 am, Sandro Bonazzola wrote: [snip]
I'm curious what the steps should be going from 4.1.9 / EL7.4 to 4.3.x / EL7.7? I am pretty sure I need some steps along the way (I doubt I can jump directly from 4.1.9 -> 4.3.x and 7.4 -> 7.7, right).
So should I jump from 7.4/4.1.9 to 7.6/4.2.8 and then from there to 7.7/4.3.6?
4.1 cluster level is still supported by 4.3 engine. So you can upgrade the engine from 7.4/4.1.9 to 7.6/4.2.8 and then to 7.7/4.3.6 while on the host side you can go straight to 4.3.6/7.7. Once done, please update cluster level to 4.3.
Excellent, I can do that. I just need to ensure that the cluster settings fully upgraded from 4.0 to 4.1.
One final question: I know that ovirt-shell is deprecated, but is it still available in 4.3.x?
Yes, it's still available. It will be dropped in 4.4.
OK, good to know, time to polish up my ansible or start writing api scripts.

Now that 4.4 popped up, how is that going? I looked a bit at Gerrit yesterday and again just now, and I see that el8 builds are being done, great work!

Greetings,
Joop

On Tue, Oct 1, 2019 at 9:48 AM <jvdwege@xs4all.nl> wrote:
September 30, 2019 2:23 PM, "Sandro Bonazzola" <sbonazzo@redhat.com> wrote:
On Fri, Sep 27, 2019 at 5:54 PM Derek Atkins <derek@ihtfp.com> wrote:
On Fri, September 27, 2019 11:46 am, Sandro Bonazzola wrote: [snip]
I'm curious what the steps should be going from 4.1.9 / EL7.4 to 4.3.x / EL7.7? I am pretty sure I need some steps along the way (I doubt I can jump directly from 4.1.9 -> 4.3.x and 7.4 -> 7.7, right).
So should I jump from 7.4/4.1.9 to 7.6/4.2.8 and then from there to 7.7/4.3.6?
4.1 cluster level is still supported by 4.3 engine. So you can upgrade the engine from 7.4/4.1.9 to 7.6/4.2.8 and then to 7.7/4.3.6 while on the host side you can go straight to 4.3.6/7.7. Once done, please update cluster level to 4.3.
Excellent, I can do that. I just need to ensure that the cluster settings fully upgraded from 4.0 to 4.1.
One final question: I know that ovirt-shell is deprecated, but is it still available in 4.3.x?
Yes, it's still available. It will be dropped in 4.4.
OK, good to know, time to polish up my ansible or start writing api scripts.
Now that 4.4 popped up, how is that going? I looked a bit at Gerrit yesterday and again just now, and I see that el8 builds are being done, great work!
We are pushing builds to el8 and fc30. There are a few big show stoppers, like the lack of ansible in EPEL 8 (Bug 1744975 <https://bugzilla.redhat.com/show_bug.cgi?id=1744975>) and the CentOS Community Build System not yet being enabled to build for CentOS 8, but we are working on it.
Greetings,
Joop
-- Sandro Bonazzola MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV Red Hat EMEA <https://www.redhat.com/> sbonazzo@redhat.com <https://www.redhat.com/>*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.*

jvdwege@xs4all.nl writes:
Yes, it's still available. It will be dropped in 4.4.
OK, good to know, time to polish up my ansible or start writing api scripts.
Now that 4.4 popped up, how is that going? I looked a bit at the Gerrit yesterday and right now and see that el8 builds are being done now, great work!
Yeah. I've got a startup script that I use to start all my VMs (see below). I'll need to figure out how to migrate that script to SDK4. It really sucks that there's no SDK4 version of ovirt-shell. I suspect my script will expand by an order of magnitude, and everyone who has written a script around ovirt-shell will have to duplicate effort.

I know there is a feature for the engine to autostart VMs (which I believe will be in 4.4), but AFAIK it doesn't do ordering. I need at least one specific VM to start up before everything else.

Thanks,
-derek

#!/bin/bash

[ -f /etc/sysconfig/vm_list ] || exit 0
. /etc/sysconfig/vm_list

echo -n "Starting at "
date

# Wait for the engine to respond
while [ `ovirt-shell -I -c -F -T 50 -E ping 2>/dev/null | grep -c success` != 1 ]
do
  echo "Not ready... Sleeping..."
  sleep 60
done

# Now wait for the storage domain to appear active
echo -n "Engine up. Searching for disks at "
date
total_disks=`ovirt-shell -I -c -E summary | grep storage_domains-total | sed -e 's/.*: //'`
# subtract one because we know we're not using the image-repository
total_disks=`expr $total_disks - 1`
active_disks=`ovirt-shell -I -c -E summary | grep storage_domains-active | sed -e 's/.*: //'`
while [ $active_disks -lt $total_disks ]
do
  echo "Storage Domains not active yet. Only found $active_disks/$total_disks. Waiting..."
  sleep 60
  active_disks=`ovirt-shell -I -c -E summary | grep storage_domains-active | sed -e 's/.*: //'`
done

# Now start all of the VMs in the requested order.
echo -n "All storage mounted. Starting VMs at "
date
for vm in "${vm_list[@]}"
do
  timeout=${vm_timeout[$vm]:-$default_timeout}
  ovirt-shell -I -c -E "action vm $vm start"
  sleep "$timeout"
done

--
Derek Atkins 617-623-3745
derek@ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
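As a rough data point for that migration: the ovirt-shell calls this script depends on map onto plain REST calls reasonably directly, so a curl-based rewrite may not balloon as much as feared. A sketch, with engine URL, credentials, IDs and the status element name as assumptions to verify against your own API output:

  API=https://engine.example.com/ovirt-engine/api
  AUTH='admin@internal:PASSWORD'

  # rough equivalent of "ovirt-shell -E ping": any authenticated GET against the API root
  curl -ksf -u "$AUTH" -H 'Accept: application/xml' "$API" > /dev/null && echo "engine up"

  # rough equivalent of the storage_domains-active check from "summary":
  # count domains reported active on the data center
  curl -ks -u "$AUTH" -H 'Accept: application/xml' "$API/datacenters/DC_ID/storagedomains" \
      | grep -c '<status>active</status>'

  # rough equivalent of "action vm <name> start"
  curl -ks -u "$AUTH" -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
       -X POST -d '<action/>' "$API/vms/VM_ID/start"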

I got a three-node and a single-node HCI that I upgraded from 4.3.5 on CentOS 7.6 to 4.3.6 on CentOS 7.7.

The three-node update worked like a charm, mostly just using the GUI: VMs got properly migrated but the SPM wasn't, I'm afraid, causing a re-election and some Gluster healing wobbles I had to iron out.

The single node updates generally are much more painful than you describe. I find that hosted-engine is complaining about the storage being inaccessible and I need to restart the Gluster daemon to have gluster volume status all show TCP ports. I then generally restart the ovirt-ha-broker and agent until they stop complaining, I might do hosted-engine --connect-storage etc. until eventually hosted-engine --vm-status is at least no longer complaining about lack of storage. I can then start the management engine and leave maintenance mode.

BTW: With the ovirt 4.3.6 update came a new hosted-engine template image so I guessed running an update on the management engine VM would be in order: At that point I noticed a rather useful message, that engine-setup should be re-run as part of the upgrade, which then again tried to pull various updates (that should have been already satisfied at that point).

I guess the point I am trying to make is that while three-node host updates are wonderfully served by the GUI, there is a stiff decline in UX ergonomics when it comes to single node (which has limited usefulness, I understand) but also the management engine: Updates of the latter may be less frequent, but could either use some dedicated step-by-step documentation or UX support.
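For anyone else hitting the same single-node situation, the recovery dance described above boils down to roughly this sequence of commands (a sketch of what was described, not an official procedure):

  systemctl restart glusterd                        # until "gluster volume status all" shows the TCP ports again
  gluster volume status all
  systemctl restart ovirt-ha-broker ovirt-ha-agent  # restart the hosted-engine HA services
  hosted-engine --connect-storage                   # re-attach the hosted-engine storage domain
  hosted-engine --vm-status                         # repeat until it no longer complains about storage
  hosted-engine --vm-start                          # start the management engine
  hosted-engine --set-maintenance --mode=none       # leave maintenance mode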

On Sat, Oct 5, 2019 at 3:32 PM <thomas@hoberg.net> wrote:
I got a three-node and a single-node HCI that I upgraded from 4.3.5 on CentOS 7.6 to 4.3.6 on CentOS 7.7.
The three-node update worked like a charm, mostly just using the GUI: VMs got properly migrated but the SPM wasn't, I'm afraid, causing a re-election and some Gluster healing wobbles I had to iron out.
The single node updates generally are much more painful than you describe. I find that hosted-engine is complaining about the storage being inaccessible and I need to restart the Gluster daemon to have gluster volume status all show TCP ports. I then generally restart the ovirt-ha-broker and agent until they stop complaining, I might do hosted-engine --connect-storage etc. until eventually hosted-engine --vm-status is at least no longer complaining about lack of storage.
I can then start the management engine and leave maintenance mode.
BTW: With the ovirt 4.3.6 update came a new hosted-engine template image so I guessed running an update on the management engine VM would be in order: At that point I noticed a rather useful message, that engine-setup should be re-run as part of the upgrade, which then again tried to pull various updates (that should have been already satisfied at that point).
engine-setup locks the RPMs of the engine itself with the yum/dnf versionlock plugin, so that it can upgrade both them and the database schema (and sometimes other things) all at once.

IIRC some years ago there were discussions about allowing upgrade from the UI. I think this didn't go anywhere, probably because it wasn't easy, and didn't seem worth it.

The notice you saw to run engine-setup is actually a new 4.3 feature - glad it helped you :-) https://gerrit.ovirt.org/96446
I guess the point I am trying to make is that while three-node host updates are wonderfully served by the GUI, there is a stiff decline in UX ergonomics when it comes to single node (which has limited usefulness, I understand) but also the management engine: Updates of the latter may be less frequent, but could either use some dedicated step-by-step documentation or UX support.
-- Didi

On Fri, Sep 27, 2019 at 12:42 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote: [snip
engine 1) yum update to update OS I think versionlock plugin of oVirt will prevent update of its core parts, correct?
correct, versionlock will prevent core oVirt packages from being updated.
[snip]
2) reboot
3) update oVirt. NOTE: probably no need to update the setup packages, because they were pulled in during the previous update phase, correct?
correct, setup packages will be updated in previous loop, just run engine-setup here
4) eventually yum update again to see if any packages due to new repo conf
shouldn't be needed but no harm in doing it.
In fact I didn't get anything
5) reboot of engine
engine will be already restarted by engine-setup. If there are no new updates at kernel level no need to reboot again.
just to replicate a future reboot scenario and verify that everything comes back up ok
hosts 6) put into maintenance 7) simply yum update that will update CentOS packages + oVirt ones (vdsm and such..)
Please use the engine to upgrade hosts, there's a command in webadmin interface for that.
It's *a bit* outdated, but still valid: https://ovirt.org/documentation/upgrade-guide/upgrade-guide.html
I tried and it went well (at least for the first host) as a final result, but the events inside the web admin GUI don't seem to be well coordinated...
See below the sequence of events I got after selecting Installation --> Upgrade (and checking the box to migrate running VMs).

One VM running on it was correctly migrated and then the host was put into maintenance, but then it seems to me that the following update of vdsmd or other subsystems tried to bring it up again. In fact I saw in the GUI the host coming up, then down, then non operational (the X in the red square). Then the host rebooted and came back to the Up state, and I was able to manually migrate a VM onto it.
One other thing to improve in my opinion is that the upgrade of the host from the engine should inject the job that normally runs once a day to check whether a host has available updates: it seems somehow quirky that you pilot the host upgrade from the engine and the engine itself doesn't know that the host has been upgraded (till tomorrow of course...)

Thanks,
Gianluca

Host ov200 upgrade was started (User: user1@my_domain@my_domain). 9/27/19 3:55:10 PM
Migration initiated by system (VM: hostcopy1, Source: ov200, Destination: ov301, Reason: ). 9/27/19 3:55:11 PM
Host ov200 was switched to Maintenance Mode. 9/27/19 3:55:11 PM
Migration completed (VM: hostcopy1, Source: ov200, Destination: ov301, Duration: 13 seconds, Total: 20 seconds, Actual downtime: 88ms) 9/27/19 3:55:31 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 22501 ms ago. 9/27/19 3:57:03 PM
Host ov200 is not responding. It will stay in Connecting state for a grace period of 60 seconds and after that an attempt to fence the host will be issued. 9/27/19 3:57:03 PM
..
Host ov200 is non responsive. 9/27/19 3:57:09 PM
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 3:57:09 PM
Soft fencing on host ov200 was successful. 9/27/19 3:57:19 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 16876 ms ago. 9/27/19 3:59:03 PM
Host ov200 is non responsive. 9/27/19 3:59:03 PM
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 3:59:04 PM
No faulty multipath paths on host ov200 9/27/19 3:59:04 PM
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 3:59:04 PM
Status of host ov200 was set to NonResponsive. 9/27/19 3:59:17 PM
..
Host ov200 is not responding. It will stay in Connecting state for a grace period of 60 seconds and after that an attempt to fence the host will be issued. 9/27/19 4:00:24 PM
Soft fencing on host ov200 was successful. 9/27/19 4:00:38 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 7121 ms ago. 9/27/19 4:00:56 PM
..
Host ov200 was restarted using SSH by the engine. 9/27/19 4:02:31 PM
Upgrade was successful and host ov200 will be rebooted. 9/27/19 4:02:31 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 5341 ms ago. 9/27/19 4:02:46 PM
Host ov200 cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center MYDC. Setting Host state to Non-Operational. 9/27/19 4:02:46 PM
Failed to connect Host ov200 to Storage Pool MYDC 9/27/19 4:02:46 PM
...
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 4:12:27 PM
Status of host ov200 was set to Up. 9/27/19 4:12:27 PM
Host ov200 power management was verified successfully. 9/27/19 4:12:27 PM
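Note that the availability check the engine normally runs once a day can also be triggered on demand, from webadmin (Installation > Check for Upgrade) or, presumably, through the host's upgradecheck action in the REST API; a hedged sketch with placeholder host ID and credentials:

  curl -k -u admin@internal:PASSWORD \
       -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
       -X POST -d '<action/>' \
       https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/upgradecheck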

On Fri, Sep 27, 2019 at 4:44 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Fri, Sep 27, 2019 at 12:42 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
[snip
engine 1) yum update to update OS I think versionlock plugin of oVirt will prevent update of its core parts, correct?
correct, versionlock will prevent core oVirt packages from being updated.
[snip]
2) reboot
3) update oVirt. NOTE: probably no need to update the setup packages, because they were pulled in during the previous update phase, correct?
correct, setup packages will be updated in previous loop, just run engine-setup here
4) eventually yum update again to see if any packages due to new repo conf
shouldn't be needed but no harm in doing it.
In fact I didn't get anything
5) reboot of engine
engine will be already restarted by engine-setup. If there are no new updates at kernel level no need to reboot again.
just to replicate a future scenario of rebooting and see that previous time all went up ok
hosts 6) put into maintenance 7) simply yum update that will update CentOS packages + oVirt ones (vdsm and such..)
Please use the engine to upgrade hosts, there's a command in webadmin interface for that.
It's *a bit* outdated, but still valid: https://ovirt.org/documentation/upgrade-guide/upgrade-guide.html
I tried and went well (at least for the first host) as a final result, but the events inside web admin gui don't seem to be well coordinated...
See below the sequence of events I got after selecting Installation --> Upgrade (and checking the box to migrate running VMs)
one vm running on it was correctly migrated and then the host put into maintenance, but then it seems to me that the following update of vdsmd or other subsysystems tried to got it up again. In fact I saw in the gui the host coming up, then down, then non operational (the X in the red square) Then the host rebooted and came in up state again and I was able to manually migrate a VM into it. One other thing to improve in my opinion is that the upgrade of the host from engine should inject the job that normally once a day checks if a host has available updates: it seems somehow quirky that you pilot host upgrade form engine and engine itself doesn't know that the host has been upgraded (till tomorrow of course...)
+Laura Wright <lwright@redhat.com> , +Martin Perina <mperina@redhat.com> can you please look into this feedback?
Thanks, Gianluca
Host ov200 upgrade was started (User: user1@my_domain@my_domain). 9/27/19 3:55:10 PM
Migration initiated by system (VM: hostcopy1, Source: ov200, Destination: ov301, Reason: ). 9/27/19 3:55:11 PM
Host ov200 was switched to Maintenance Mode. 9/27/19 3:55:11 PM
Migration completed (VM: hostcopy1, Source: ov200, Destination: ov301, Duration: 13 seconds, Total: 20 seconds, Actual downtime: 88ms) 9/27/19 3:55:31 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 22501 ms ago. 9/27/19 3:57:03 PM
Host ov200 is not responding. It will stay in Connecting state for a grace period of 60 seconds and after that an attempt to fence the host will be issued. 9/27/19 3:57:03 PM
..
Host ov200 is non responsive. 9/27/19 3:57:09 PM
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 3:57:09 PM
Soft fencing on host ov200 was successful. 9/27/19 3:57:19 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 16876 ms ago. 9/27/19 3:59:03 PM
Host ov200 is non responsive. 9/27/19 3:59:03 PM
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 3:59:04 PM
No faulty multipath paths on host ov200 9/27/19 3:59:04 PM
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 3:59:04 PM
Status of host ov200 was set to NonResponsive. 9/27/19 3:59:17 PM
..
Host ov200 is not responding. It will stay in Connecting state for a grace period of 60 seconds and after that an attempt to fence the host will be issued. 9/27/19 4:00:24 PM
Soft fencing on host ov200 was successful. 9/27/19 4:00:38 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 7121 ms ago. 9/27/19 4:00:56 PM
..
Host ov200 was restarted using SSH by the engine. 9/27/19 4:02:31 PM
Upgrade was successful and host ov200 will be rebooted. 9/27/19 4:02:31 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 5341 ms ago. 9/27/19 4:02:46 PM
Host ov200 cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center MYDC. Setting Host state to Non-Operational. 9/27/19 4:02:46 PM
Failed to connect Host ov200 to Storage Pool MYDC 9/27/19 4:02:46 PM
...
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 4:12:27 PM
Status of host ov200 was set to Up. 9/27/19 4:12:27 PM
Host ov200 power management was verified successfully. 9/27/19 4:12:27 PM
-- Sandro Bonazzola MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV Red Hat EMEA <https://www.redhat.com/> sbonazzo@redhat.com <https://www.redhat.com/>*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

We've heard similar feedback around events before so it's definitely on our radar. I'm hoping we can address some of these issues when we start the PatternFly 4 design efforts around events. On Fri, Sep 27, 2019 at 11:07 AM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On Fri, Sep 27, 2019 at 4:44 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Fri, Sep 27, 2019 at 12:42 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
[snip
engine 1) yum update to update OS I think versionlock plugin of oVirt will prevent update of its core parts, correct?
correct, versionlock will prevent core oVirt packages from being updated.
[snip]
2) reboot
3) update oVirt. NOTE: probably no need to update the setup packages, because they were pulled in during the previous update phase, correct?
correct, setup packages will be updated in previous loop, just run engine-setup here
4) eventually yum update again to see if any packages due to new repo conf
shouldn't be needed but no harm in doing it.
In fact I didn't get anything
5) reboot of engine
engine will be already restarted by engine-setup. If there are no new updates at kernel level no need to reboot again.
just to replicate a future scenario of rebooting and see that previous time all went up ok
hosts 6) put into maintenance 7) simply yum update that will update CentOS packages + oVirt ones (vdsm and such..)
Please use the engine to upgrade hosts, there's a command in webadmin interface for that.
It's *a bit* outdated, but still valid: https://ovirt.org/documentation/upgrade-guide/upgrade-guide.html
I tried and went well (at least for the first host) as a final result, but the events inside web admin gui don't seem to be well coordinated...
See below the sequence of events I got after selecting Installation --> Upgrade (and checking the box to migrate running VMs)
one vm running on it was correctly migrated and then the host put into maintenance, but then it seems to me that the following update of vdsmd or other subsysystems tried to got it up again. In fact I saw in the gui the host coming up, then down, then non operational (the X in the red square) Then the host rebooted and came in up state again and I was able to manually migrate a VM into it. One other thing to improve in my opinion is that the upgrade of the host from engine should inject the job that normally once a day checks if a host has available updates: it seems somehow quirky that you pilot host upgrade form engine and engine itself doesn't know that the host has been upgraded (till tomorrow of course...)
+Laura Wright <lwright@redhat.com> , +Martin Perina <mperina@redhat.com> can you please look into this feedback?
Thanks, Gianluca
Host ov200 upgrade was started (User: user1@my_domain@my_domain). 9/27/19 3:55:10 PM
Migration initiated by system (VM: hostcopy1, Source: ov200, Destination: ov301, Reason: ). 9/27/19 3:55:11 PM
Host ov200 was switched to Maintenance Mode. 9/27/19 3:55:11 PM
Migration completed (VM: hostcopy1, Source: ov200, Destination: ov301, Duration: 13 seconds, Total: 20 seconds, Actual downtime: 88ms) 9/27/19 3:55:31 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 22501 ms ago. 9/27/19 3:57:03 PM
Host ov200 is not responding. It will stay in Connecting state for a grace period of 60 seconds and after that an attempt to fence the host will be issued. 9/27/19 3:57:03 PM
..
Host ov200 is non responsive. 9/27/19 3:57:09 PM
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 3:57:09 PM
Soft fencing on host ov200 was successful. 9/27/19 3:57:19 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 16876 ms ago. 9/27/19 3:59:03 PM
Host ov200 is non responsive. 9/27/19 3:59:03 PM
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 3:59:04 PM
No faulty multipath paths on host ov200 9/27/19 3:59:04 PM
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 3:59:04 PM
Status of host ov200 was set to NonResponsive. 9/27/19 3:59:17 PM
..
Host ov200 is not responding. It will stay in Connecting state for a grace period of 60 seconds and after that an attempt to fence the host will be issued. 9/27/19 4:00:24 PM
Soft fencing on host ov200 was successful. 9/27/19 4:00:38 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 7121 ms ago. 9/27/19 4:00:56 PM
..
Host ov200 was restarted using SSH by the engine. 9/27/19 4:02:31 PM
Upgrade was successful and host ov200 will be rebooted. 9/27/19 4:02:31 PM
VDSM ov200 command ConnectStorageServerVDS failed: Connection timeout for host 'ov200.my_domain', last response arrived 5341 ms ago. 9/27/19 4:02:46 PM
Host ov200 cannot access the Storage Domain(s) <UNKNOWN> attached to the Data Center MYDC. Setting Host state to Non-Operational. 9/27/19 4:02:46 PM
Failed to connect Host ov200 to Storage Pool MYDC 9/27/19 4:02:46 PM
...
Executing power management status on Host ov200 using Proxy Host ov301 and Fence Agent ipmilan:10.4.192.66. 9/27/19 4:12:27 PM
Status of host ov200 was set to Up. 9/27/19 4:12:27 PM
Host ov200 power management was verified successfully. 9/27/19 4:12:27 PM
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com <https://www.redhat.com/>*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*
-- Laura Wright She/Her/Hers UXD Team Red Hat Massachusetts <https://www.redhat.com/> 314 Littleton Rd lwright@redhat.com <https://www.redhat.com/>

On Fri, Sep 27, 2019 at 5:25 PM Laura Wright <lwright@redhat.com> wrote:
> We've heard similar feedback around events before so it's definitely on
> our radar. I'm hoping we can address some of these issues when we start the
> PatternFly 4 design efforts around events.
>
>> [snip]
>> +Laura Wright <lwright@redhat.com> , +Martin Perina <mperina@redhat.com> can
>> you please look into this feedback?

Just for reference, if I go with a plain host update with manual steps, instead of selecting "Upgrade" from the GUI, the steps are smoother.

- I select the host and put it into maintenance. Events:
Migration initiated by system (VM: enginecopy1, Source: ov301, Destination: ov300, Reason: ). 9/27/19 5:15:09 PM
Host ov301 was switched to Maintenance mode by user1@my_domain@my_domain (Reason: upgrade). 9/27/19 5:15:09 PM
Migration completed (VM: enginecopy1, Source: ov301, Destination: ov300, Duration: 2 seconds, Total: 14 seconds, Actual downtime: 75ms) 9/27/19 5:15:24 PM

- on host: yum update
Transaction Summary
======================================================================================================================
Install    1 Package (+10 Dependent packages)
Upgrade  404 Packages
Remove     1 Package

Total download size: 393 M
Is this ok [y/d/N]:
...
yum-utils.noarch 0:1.1.31-52.el7
Complete!
[g.cecchi@ov301 ~]$
In the meantime the icon of the host ov301 remained the whole time in maintenance mode, without switching to any other state.

- From the web admin GUI select host --> Management --> SSH Restart. In events:
Host ov301 was restarted using SSH by the engine. 9/27/19 5:26:45 PM
The icon of the host becomes an hourglass.

- When the host finishes rebooting, the icon becomes the maintenance one again (the wrench, as expected). No events in the meantime.

- Select host --> Management --> Activate. In the events I see:
Activation of host ov301 initiated by user1@my_domain@my_domain. 9/27/19 5:34:18 PM
No faulty multipath paths on host ov301 9/27/19 5:36:44 PM
Executing power management status on Host ov301 using Proxy Host ov300 and Fence Agent ipmilan:10.10.193.104 . 9/27/19 5:36:45 PM
Status of host ov301 was set to Up. 9/27/19 5:36:45 PM
Host ov301 power management was verified successfully. 9/27/19 5:36:45 PM
And the host icon becomes the "Up" one...

HIH making the upgrade from the GUI a better experience from the oVirt events point of view.
Gianluca

I see that oVirt 4.3.6 finally has 4k domain support.
- Would that mean that VDO enabled Gluster domains will be created without the --emulate512 workaround?
- If the wizard to create the Gluster volumes has not yet removed that parameter, is it safe to edit & remove it manually before creation?
- Should we expect performance increase by using the native 4k block size of VDO?

Thanks

Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group

On Fri, Sep 27, 2019 at 12:00 AM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
The oVirt Project is pleased to announce the general availability of oVirt 4.3.6 as of September 26th, 2019.
This update is the sixth in a series of stabilization updates to the 4.3 series.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)
* oVirt Node 4.3 (available for x86_64 only)
Due to Fedora 28 being now at end of life this release is missing experimental tech preview for x86_64 and s390x architectures for Fedora 28.
We are working on Fedora 29 and 30 support and we may re-introduce experimental support for Fedora in next release.
See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
oVirt Node and Appliance have been updated including:
- oVirt 4.3.6: http://www.ovirt.org/release/4.3.6/
- Wildfly 17.0.1: https://wildfly.org/news/2019/07/07/WildFly-1701-Released/
- Latest CentOS 7.7 updates including:
-
Release for CentOS Linux 7 (1908) on the x86_64 Architecture <https://lists.centos.org/pipermail/centos-announce/2019-September/023405.html> -
CEBA-2019:2601 CentOS 7 NetworkManager BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023423.html>
-
CEBA-2019:2023 CentOS 7 efivar BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023445.html> -
CEBA-2019:2614 CentOS 7 firewalld BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023412.html> -
CEBA-2019:2227 CentOS 7 grubby BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023441.html> -
CESA-2019:2258 Moderate CentOS 7 http-parser Security Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023439.html> -
CESA-2019:2600 Important CentOS 7 kernel Security Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023444.html> -
CEBA-2019:2599 CentOS 7 krb5 BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023420.html> -
CEBA-2019:2358 CentOS 7 libguestfs BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023421.html> -
CEBA-2019:2679 CentOS 7 libvirt BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023422.html> -
CEBA-2019:2501 CentOS 7 rsyslog BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023431.html> -
CEBA-2019:2355 CentOS 7 selinux-policy BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023432.html> -
CEBA-2019:2612 CentOS 7 sg3_utils BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023433.html> -
CEBA-2019:2602 CentOS 7 sos BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023434.html>
-
CEBA-2019:2564 CentOS 7 subscription-manager BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023435.html> -
CEBA-2019:2356 CentOS 7 systemd BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023436.html> -
CEBA-2019:2605 CentOS 7 tuned BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023437.html> -
CEBA-2019:2871 CentOS 7 tzdata BugFix Update <https://lists.centos.org/pipermail/centos-announce/2019-September/023450.html>
- latest CentOS Virt and Storage SIG updates:
-
Ansible 2.8.5: https://github.com/ansible/ansible/blob/stable-2.8/changelogs/CHANGELOG-v2.8... -
Glusterfs 6.5: https://docs.gluster.org/en/latest/release-notes/6.5/ -
QEMU KVM EV 2.12.0-33.1 : https://cbs.centos.org/koji/buildinfo?buildID=26484
Given the amount of security fixes provided by this release, upgrade is recommended as soon as practical.
Additional Resources:
* Read more about the oVirt 4.3.6 release highlights: http://www.ovirt.org/release/4.3.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.6/ [2] http://resources.ovirt.org/pub/ovirt-4.3/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com <https://www.redhat.com/>*Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours. <https://mojo.redhat.com/docs/DOC-1199578>*

On Fri, Sep 27, 2019 at 5:21 AM Guillaume Pavese <guillaume.pavese@interactiv-group.com> wrote:
I see that oVirt 4.3.6 finally has 4k domain support.
- Would that mean that VDO enabled Gluster domains will be created without the --emulate512 workaround? - If the wizard to create the Gluster volumes has not yet removed that parameter, is it safe to edit & remove it manually before creation? - Should we expect performance increase by using the native 4k block size of VDO?
+Nir Soffer <nsoffer@redhat.com> +Sundaramoorthi, Satheesaran <sasundar@redhat.com> +Gobinda Das <godas@redhat.com> can you answer here?
Thanks
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group

Hi all,
Sorry for asking again :/
Is there any consensus on not using --emulate512 anymore while creating VDO volumes on Gluster? This parameter cannot be changed once the volume is created, and we are nearing our production setup, so I would really like to have official advice on this.
Best,
Guillaume Pavese Ingénieur Système et Réseau Interactiv-Group

Hello Guillaume Pavese, If you are not using --emulate512 for the VDO volume, then the VDO volume will be created as a 4K native volume (with a 4K block size).
There are a couple of things to consider here:
1. 4K native device support requires fixes in QEMU that will be part of CentOS 7.7.2 (not yet available).
2. 4K native support with VDO volumes on Gluster is not yet validated thoroughly.
Based on the above, it would be better to keep emulate512=on, or to delay your production setup (if possible, until both items are addressed) in order to make use of a 4K VDO volume.
@Sahina Bose <sabose@redhat.com> Do you have any other suggestions?
-- Satheesaran S (sas)
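One way to confirm which mode an existing VDO volume actually ended up in is to look at the logical sector size it reports. A sketch, with the device path assumed:

  # 512 means 512-byte emulation is enabled; 4096 means the volume is 4K native
  blockdev --getss /dev/mapper/vdo_sdb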

Hi Guillaume,
As Satheesaran suggested, those are the possible options, and they are completely valid points. Thanks, sas, for that.
If you want 4K support, I would suggest waiting until the 4Kn validation with Gluster + VDO (all use cases) is complete. Otherwise you can go ahead with emulate512=on.
-- Thanks, Gobinda

Hi,
After upgrading to 4.3.6, my storage domain can no longer be activated, rendering my data center useless.
My storage domain is local storage on a filesystem backed by VDO/LVM. It seems 4.3.6 has added support for 4k storage. My VDO does not have the 'emulate512' flag set.
I've tried downgrading all packages on the host to the previous versions (with ioprocess 1.2), but this does not seem to make any difference. Should I also downgrade the engine to 4.3.5 to get this to work again? I expected the downgrade of the host to be sufficient.
As an alternative I guess I could enable the emulate512 flag on VDO, but I cannot find how to do this on an existing VDO volume. Is this possible?
Regards,
Rik
-- Rik Theys System Engineer KU Leuven - Dept. Elektrotechniek (ESAT) Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee +32(0)16/32.11.07 ---------------------------------------------------------------- <<Any errors in spelling, tact or fact are transmission errors>>
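For what it's worth, a rough sketch of checking and rolling back the packages involved in this kind of downgrade (package name globs assumed; the versions offered depend on the repositories enabled on the host):

  # check what is currently installed
  rpm -qa 'vdsm*' '*ioprocess*'

  # step the packages back one version; repeat if you need to go further back
  yum downgrade 'vdsm*' '*ioprocess*'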

On Fri, Sep 27, 2019 at 11:31, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
After upgrading to 4.3.6, my storage domain can no longer be activated, rendering my data center useless.
My storage domain is local storage on a filesystem backed by VDO/LVM. It seems 4.3.6 has added support for 4k storage. My VDO does not have the 'emulate512' flag set.
I've tried downgrading all packages on the host to the previous versions (with ioprocess 1.2), but this does not seem to make any difference. Should I also downgrade the engine to 4.3.5 to get this to work again? I expected the downgrade of the host to be sufficient.
As an alternative I guess I could enable the emulate512 flag on VDO but I can not find how to do this on an existing VDO volume. Is this possible?
+Sahina Bose <sabose@redhat.com> +Gobinda Das <godas@redhat.com> +Sundaramoorthi, Satheesaran <sasundar@redhat.com> please follow up here.
Regards, Rik

On Fri, Sep 27, 2019 at 4:13 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On Fri, Sep 27, 2019 at 11:31, Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
After upgrading to 4.3.6, my storage domain can no longer be activated, rendering my data center useless.
Hello Rik,
Do you have the exact engine log showing why the storage domain can't be activated? Could you send the engine.log?
My storage domain is local storage on a filesystem backed by VDO/LVM. It seems 4.3.6 has added support for 4k storage.
Do you have Gluster in this combination of VDO/LVM?
There are some QEMU fixes that are required for 4K storage to work with oVirt. These fixes will be part of CentOS 7.7 batch update 2.
My VDO does not have the 'emulate512' flag set.
This means your VDO volume would have been configured as a 4K native device by default. Was it this way before updating to oVirt 4.3.6? I believe so, because you can't change the block size of VDO on the fly. So my guess is that you have been fortunate so far running with a 4K VDO volume.
I've tried downgrading all packages on the host to the previous versions (with ioprocess 1.2), but this does not seem to make any difference. Should I also downgrade the engine to 4.3.5 to get this to work again. I expected the downgrade of the host to be sufficient.
As an alternative I guess I could enable the emulate512 flag on VDO but I can not find how to do this on an existing VDO volume. Is this possible?
As stated above, this is not possible.
Once a VDO volume is created, its block size can't be changed dynamically.
-- Satheesaran S (sas)

Hi, On Fri, Sep 27, 2019 at 3:07 PM Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
After upgrading to 4.3.6, my storage domain can no longer be activated, rendering my data center useless.
My storage domain is local storage on a filesystem backed by VDO/LVM. It seems 4.3.6 has added support for 4k storage. My VDO does not have the 'emulate512' flag set.
I've tried downgrading all packages on the host to the previous versions (with ioprocess 1.2), but this does not seem to make any difference. Should I also downgrade the engine to 4.3.5 to get this to work again. I expected the downgrade of the host to be sufficient.
What does the engine log say? Is it sending block_size=0 or 512? If the engine is sending 0 after you downgrade the hosts, then you need to downgrade the engine too.
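A quick way to see what block size values the engine is logging, assuming the default log location on the engine host:

  grep -i 'block_size' /var/log/ovirt-engine/engine.log | tail -n 20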
As an alternative I guess I could enable the emulate512 flag on VDO but I can not find how to do this on an existing VDO volume. Is this possible?
I don't think you can change the emulate512 value once the VDO volume has been created.
Regards, Rik
-- Thanks, Gobinda

On Fri, Sep 27, 2019, 12:37 Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
After upgrading to 4.3.6, my storage domain can no longer be activated, rendering my data center useless.
My storage domain is local storage on a filesystem backed by VDO/LVM. It seems 4.3.6 has added support for 4k storage. My VDO does not have the 'emulate512' flag set.
This configuration is not supported before 4.3.6. Various operations may fail when reading or writing to storage. 4.3.6 detects the storage block size, creates compatible storage domain metadata, and considers the block size when accessing storage.
I've tried downgrading all packages on the host to the previous versions (with ioprocess 1.2), but this does not seem to make any difference.
Downgrading should solve your issue, but without any logs we only guess.
Should I also downgrade the engine to 4.3.5 to get this to work again. I expected the downgrade of the host to be sufficient.
As an alternative I guess I could enable the emulate512 flag on VDO but I can not find how to do this on an existing VDO volume. Is this possible?
Please share more data so we can understand the failure:
- complete vdsm log showing the failure to activate the domain
  - with 4.3.6
  - with 4.3.5 (after you downgraded)
- contents of /rhev/data-center/mnt/_<domaindir>/domain-uuid/dom_md/metadata (assuming your local domain mount is /domaindir)
- engine db dump
Nir
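A rough sketch of collecting the requested data; the paths are the defaults and the placeholders need to be replaced with the actual domain directory and UUID:

  # on the host: vdsm log covering the failed activation
  cp /var/log/vdsm/vdsm.log vdsm-activation-failure.log

  # storage domain metadata
  cat /rhev/data-center/mnt/_<domaindir>/domain-uuid/dom_md/metadata

  # on the engine host: database-only backup
  engine-backup --mode=backup --scope=db --file=engine-db.backup --log=engine-backup.log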

Hi Nir,

Thank you for your time.

On 9/27/19 4:27 PM, Nir Soffer wrote:
On Fri, Sep 27, 2019, 12:37 Rik Theys <Rik.Theys@esat.kuleuven.be <mailto:Rik.Theys@esat.kuleuven.be>> wrote:
Hi,
After upgrading to 4.3.6, my storage domain can no longer be activated, rendering my data center useless.
My storage domain is local storage on a filesystem backed by VDO/LVM. It seems 4.3.6 has added support for 4k storage. My VDO does not have the 'emulate512' flag set.
This configuration is not supported before 4.3.6. Various operations may fail when reading or writing to storage.
I was not aware of this when I set it up as I did not expect this to influence a setup where oVirt uses local storage (a file system location).
4.3.6 detects the storage block size, creates compatible storage domain metadata, and considers the block size when accessing storage.
I've tried downgrading all packages on the host to the previous versions (with ioprocess 1.2), but this does not seem to make any difference.
Downgrading should solve your issue, but without any logs we only guess.
I was able to work around my issue by downgrading to ioprocess 1.1 (and vdsm-4.30.24). Downgrading to only 1.2 did not solve my issue. With ioprocess downgraded to 1.1, I did not have to downgrade the engine (still on 4.3.6).

I think I now have a better understanding of what happened that triggered this. During a nightly yum-cron, the ioprocess and vdsm packages on the host were upgraded to ioprocess 1.3 and vdsm 4.30.33. At this point, the engine log started to log:

2019-09-27 03:40:27,472+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Executing with domain map: {6bdf1a0d-274b-4195-8ff5-a5c002ea1a77=active}
2019-09-27 03:40:27,646+02 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Unexpected return value: Status [code=348, message=Block size does not match storage block size: 'block_size=512, storage_block_size=4096']
2019-09-27 03:40:27,646+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] FINISH, ConnectStoragePoolVDSCommand, return: , log id: 483c7a17

I did not notice at first that this was a storage related issue and assumed it might get resolved by also upgrading the engine. So in the morning I upgraded the engine to 4.3.6, but this did not resolve my issue. I then found the above error in the engine log. In the release notes of 4.3.6 I read about the 4k support.

I then downgraded ioprocess (and vdsm) to ioprocess 1.2, but that did also not solve my issue. This is when I contacted the list with my question. Afterwards I found in the ioprocess rpm changelog that (partial?) 4k support was also in 1.2. I kept on downgrading until I got to ioprocess 1.1 (without 4k support), and at this point I could re-attach my storage domain.

You mention above that 4.3.6 will detect the block size and configure the metadata on the storage domain? I've checked the dom_md/metadata file and it shows:

ALIGNMENT=1048576
BLOCK_SIZE=512
CLASS=Data
DESCRIPTION=studvirt1-Local
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=1
POOL_DESCRIPTION=studvirt1-Local
POOL_DOMAINS=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77:Active
POOL_SPM_ID=-1
POOL_SPM_LVER=-1
POOL_UUID=085f02e8-c3b4-4cef-a35c-e357a86eec0c
REMOTE_PATH=/data/images
ROLE=Master
SDUUID=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77
TYPE=LOCALFS
VERSION=5
_SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66

I assume that at this point it works because ioprocess 1.1 does not report the block size to the engine (as it doesn't support this option)?

Can I update the storage domain metadata manually to report 4096 instead? I also noticed that the storage_domain_static table has the block_size stored. Should I update this field at the same time as I update the metadata file?

If the engine log and database dump are still needed to better understand the issue, I will send them on Monday.

Regards,

Rik
Should I also downgrade the engine to 4.3.5 to get this to work again? I expected the downgrade of the host to be sufficient.
As an alternative, I guess I could enable the emulate512 flag on VDO, but I cannot find how to do this on an existing VDO volume. Is this possible?
Please share more data so we can understand the failure:
- complete vdsm log showing the failure to activate the domain
  - with 4.3.6
  - with 4.3.5 (after you downgraded)
- contents of /rhev/data-center/mnt/_<domaindir>/domain-uuid/dom_md/metadata (assuming your local domain mount is /domaindir)
- engine db dump
Nir
Regards, Rik

On Sat, Sep 28, 2019 at 11:04 PM Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi Nir,
Thank you for your time. On 9/27/19 4:27 PM, Nir Soffer wrote:
On Fri, Sep 27, 2019, 12:37 Rik Theys <Rik.Theys@esat.kuleuven.be> wrote:
Hi,
After upgrading to 4.3.6, my storage domain can no longer be activated, rendering my data center useless.
My storage domain is local storage on a filesystem backed by VDO/LVM. It seems 4.3.6 has added support for 4k storage. My VDO does not have the 'emulate512' flag set.
This configuration is not supported before 4.3.6. Various operations may fail when reading or writing to storage.
I was not aware of this when I set it up as I did not expect this to influence a setup where oVirt uses local storage (a file system location).
4.3.6 detects the storage block size, creates compatible storage domain metadata, and considers the block size when accessing storage.
I've tried downgrading all packages on the host to the previous versions (with ioprocess 1.2), but this does not seem to make any difference.
Downgrading should solve your issue, but without any logs we only guess.
I was able to work around my issue by downgrading to ioprocess 1.1 (and vdsm-4.30.24). Downgrading to only 1.2 did not solve my issue. With ioprocess downgraded to 1.1, I did not have to downgrade the engine (still on 4.3.6).
ioprocess 1.1 is not recommended; you really want to use 1.3.0.
I think I now have a better understanding what happened that triggered this.
During a nightly yum-cron, the ioprocess and vdsm packages on the host were upgraded to 1.3 and vdsm 4.30.33. At this point, the engine log started to log:
2019-09-27 03:40:27,472+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Executing with domain map: {6bdf1a0d-274b-4195-8ff5-a5c002ea1a77=active}
2019-09-27 03:40:27,646+02 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Unexpected return value: Status [code=348, message=Block size does not match storage block size: 'block_size=512, storage_block_size=4096']
This means that when activating the storage domain, vdsm detected that the storage block size is 4k, but the domain metadata reports a block size of 512. This combination may partly work for a localfs domain since we don't use sanlock with local storage, vdsm does not use direct I/O when writing to storage, and it always uses a 4k block size when reading metadata from storage. Note that with older ovirt-imageio < 1.5.2, image uploads and downloads may fail when using 4k storage. In recent ovirt-imageio we detect and use the correct block size.
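For illustration only, here is a rough Python sketch of the kind of probe that can tell a 4k-native file system apart from a 512-byte one: try a 512-byte direct I/O write and fall back to 4096 when the storage rejects it. This is not vdsm's or ioprocess's actual code; the function name, probe file name, and error handling are invented for the example.

# Rough sketch (not vdsm/ioprocess code): find the smallest block size that
# direct I/O accepts in a directory. On 4k-native storage (e.g. VDO without
# emulate512), a 512-byte O_DIRECT write fails and 4096 succeeds.
import os
import mmap

def probe_block_size(directory):
    path = os.path.join(directory, ".block_size_probe")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_DIRECT, 0o600)
    try:
        for size in (512, 4096):
            # Anonymous mmap gives a page-aligned buffer, as O_DIRECT requires.
            buf = mmap.mmap(-1, size)
            try:
                os.write(fd, buf)
                return size
            except OSError:
                continue  # storage rejected this write size, try the next one
            finally:
                buf.close()
        raise RuntimeError("could not detect block size")
    finally:
        os.close(fd)
        os.unlink(path)

In a case like the one above, such a probe returns 4096 while the domain metadata still says 512, which is exactly the mismatch behind the code=348 error.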
2019-09-27 03:40:27,646+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] FINISH, ConnectStoragePoolVDSCommand, return: , log id: 483c7a17
I did not notice at first that this was a storage related issue and assumed it may get resolved by also upgrading the engine. So in the morning I upgraded the engine to 4.3.6 but this did not resolve my issue.
I then found the above error in the engine log. In the release notes of 4.3.6 I read about the 4k support.
I then downgraded ioprocess (and vdsm) to ioprocess 1.2 but that did also not solve my issue. This is when I contacted the list with my question.
Afterwards I found in the ioprocess rpm changelog that (partial?) 4k support was also in 1.2. I kept on downgrading until I got ioprocess 1.1 (without 4k support) and at this point I could re-attach my storage domain.
You mention above that 4.3.6 will detect the block size and configure the metadata on the storage domain? I've checked the dom_md/metadata file and it shows:
ALIGNMENT=1048576
BLOCK_SIZE=512
CLASS=Data
DESCRIPTION=studvirt1-Local
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=1
POOL_DESCRIPTION=studvirt1-Local
POOL_DOMAINS=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77:Active
POOL_SPM_ID=-1
POOL_SPM_LVER=-1
POOL_UUID=085f02e8-c3b4-4cef-a35c-e357a86eec0c
REMOTE_PATH=/data/images
ROLE=Master
SDUUID=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77
TYPE=LOCALFS
VERSION=5
_SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66
So you have a v5 localfs storage domain. Because we don't use leases, this domain should work with 4.3.6 if you modify this line in the domain metadata:

BLOCK_SIZE=4096

To modify the line, you have to delete the checksum:

_SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66
I assume that at this point it works because ioprocess 1.1 does not report the block size to the engine (as it doesn't support this option?)?
I think it works because ioprocess 1.1 has a bug where it does not use direct I/O when writing files. This fooled vdsm into believing you have a block size of 512 bytes.

Can I update the storage domain metadata manually to report 4096 instead?
I also noticed that the storage_domain_static table has the block_size stored. Should I update this field at the same time as I update the metadata file?
Yes, I think it should work.
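If you do end up adjusting the database as well, a minimal sketch could look like the following. Only the storage_domain_static table and its block_size column are mentioned above; the database name, user, and the id column are assumptions on my part, so check them against your schema before running anything.

# Purely illustrative: update block_size for one storage domain in the engine DB.
# dbname/user "engine" and the "id" column are assumed, not confirmed in this thread.
import psycopg2

conn = psycopg2.connect(dbname="engine", user="engine")
with conn, conn.cursor() as cur:
    cur.execute(
        "UPDATE storage_domain_static SET block_size = %s WHERE id = %s",
        (4096, "6bdf1a0d-274b-4195-8ff5-a5c002ea1a77"),
    )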
If the engine log and database dump are still needed to better understand the issue, I will send them on Monday.
Engine reports the block size reported by vdsm. Once we get the system up with your 4k storage domain, we can check that engine reports the right value and update it if needed.

I think what you should do is:

1. Backup storage domain metadata /path/to/domain/domain-uuid/dom_md
2. Deactivate the storage domain (from engine)
3. Edit the metadata file:
   - change BLOCK_SIZE to 4096
   - delete the checksum line (_SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66)
4. Activate the domain
   With vdsm < 4.3.6, the domain should be active, ignoring the block size.
5. Upgrade back to 4.3.6
   The system should detect the block size and work normally.
6. File an oVirt bug for this issue
   We need at least to document the way to fix the storage domain manually. We should also consider checking storage domain metadata during upgrades. I think it will be a better experience if the upgrade fails and you still have a working system with the older version.
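As a rough illustration of steps 1 and 3 above (not an official oVirt tool): the metadata path below is guessed from the REMOTE_PATH and SDUUID shown earlier, so adjust it to your layout before trying anything like this.

# Hedged sketch of steps 1 and 3: back up the metadata file, set BLOCK_SIZE
# to 4096, and drop the stale checksum line. The path is an assumption based
# on REMOTE_PATH=/data/images and the SDUUID shown in the metadata above.
import shutil

metadata = "/data/images/6bdf1a0d-274b-4195-8ff5-a5c002ea1a77/dom_md/metadata"

shutil.copy2(metadata, metadata + ".bak")  # step 1: keep a backup

with open(metadata) as f:
    lines = f.readlines()

fixed = []
for line in lines:
    if line.startswith("BLOCK_SIZE="):
        fixed.append("BLOCK_SIZE=4096\n")  # step 3: report the 4k block size
    elif line.startswith("_SHA_CKSUM="):
        continue                           # step 3: drop the checksum line
    else:
        fixed.append(line)

with open(metadata, "w") as f:
    f.writelines(fixed)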
Should I also downgrade the engine to 4.3.5 to get this to work again? I expected the downgrade of the host to be sufficient.
As an alternative, I guess I could enable the emulate512 flag on VDO, but I cannot find how to do this on an existing VDO volume. Is this possible?
Please share more data so we can understand the failure:
- complete vdsm log showing the failure to activate the domain
  - with 4.3.6
  - with 4.3.5 (after you downgraded)
- contents of /rhev/data-center/mnt/_<domaindir>/domain-uuid/dom_md/metadata (assuming your local domain mount is /domaindir)
- engine db dump
Nir
Regards, Rik

Hi,

On 9/29/19 12:54 AM, Nir Soffer wrote:
Engine reports the block size reported by vdsm. Once we get the system up with your 4k storage domain, we can check that engine reports the right value and update it if needed. I think what you should do is:
1. Backup storage domain metadata /path/to/domain/domain-uuid/dom_md
2. Deactivate the storage domain (from engine)
3. Edit the metadata file:
   - change BLOCK_SIZE to 4096
   - delete the checksum line (_SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66)
4. Activate the domain
With vdsm < 4.3.6, the domain should be active, ignoring the block size.
5. Upgrade back to 4.3.6
The system should detect the block size and work normally.
6. File ovirt bug for this issue
We need at least to document the way to fix the storage domain manually.
We should also consider checking storage domain metadata during upgrades. I think it will be a better experience if the upgrade fails and you still have a working system with the older version.
I've tried this procedure and it has worked! Thanks!

If you would like me to file a bug, which component should I log it against?

Regards,

Rik

Hi,
I've tried this procedure and it has worked! Thanks!
If you would like me to file a bug, which component should I log it against?
For vdsm. Please include all the logs and the db dump as Nir mentioned in the previous email (see below). Thanks
Please share more data so we can understand the failure:
- complete vdsm log showing the failure to activate the domain
  - with 4.3.6
  - with 4.3.5 (after you downgraded)
- contents of /rhev/data-center/mnt/_<domaindir>/domain-uuid/dom_md/metadata (assuming your local domain mount is /domaindir)
- engine db dump
participants (13)
- Derek Atkins
- Gianluca Cecchi
- Gobinda Das
- Guillaume Pavese
- jvdwege@xs4all.nl
- Laura Wright
- Nir Soffer
- Rik Theys
- Sandro Bonazzola
- Satheesaran Sundaramoorthi
- thomas@hoberg.net
- Vojtech Juranek
- Yedidyah Bar David