[ANN] oVirt 4.0.6 Release is now available

The oVirt Project is pleased to announce the general availability of oVirt 4.0.6, as of January 10th, 2017.

This release is available now for:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 23 (tech preview)

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 23 (tech preview)
* oVirt Next Generation Node 4.0

This update is the sixth in a series of stabilization updates to the 4.0 series. 4.0.6 brings 4 enhancements and 59 bugfixes, including 24 high or urgent severity fixes, on top of the oVirt 4.0 series. See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO is already available. [2]
* A new oVirt Next Generation Node will be available soon. [2]
* A new oVirt Engine Appliance is already available for Red Hat Enterprise Linux and CentOS Linux (or similar).
* Mirrors [3] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 4.0.6 release highlights: http://www.ovirt.org/release/4.0.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.0.6/
[2] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors

--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

Hi! I have a testing oVirt installation with only one host, running a self hosted engine. If I put the host in global maintenance mode:

hosted-engine --set-maintenance --mode=global

then ssh into the self hosted engine VM and shut it down using 'init 0', when I run on the host:

hosted-engine --set-maintenance --mode=none

I have:

Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 73, in <module>
    if not maintenance.set_mode(sys.argv[1]):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 61, in set_mode
    value=m_global,
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 259, in set_maintenance_mode
    str(value))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 202, in set_global_md_flag
    self._configure_broker_conn(broker)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 180, in _configure_broker_conn
    dom_type=dom_type)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 176, in set_storage_domain
    .format(sd_type, options, e))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: Failed to set storage domain FilesystemBackend, options {'dom_type': 'nfs3', 'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}: Connection timed out

This is because the host has the ISO domain NFS export mounted but the self hosted engine VM is not running, I think... So I have to reboot the host before changing the maintenance mode.
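For reference, the whole sequence boils down to a few commands (a minimal sketch of the steps described above; 'root@engine' is a placeholder for the engine VM address, and 'hosted-engine --vm-status' is added here only as a convenient way to inspect the HA state, it is not part of the original report):

# on the host: enter global maintenance so the HA agents do not restart the engine VM
hosted-engine --set-maintenance --mode=global

# inside the engine VM (via ssh): shut it down
ssh root@engine 'init 0'

# back on the host: check the HA state, then try to leave maintenance
hosted-engine --vm-status
hosted-engine --set-maintenance --mode=none    # this is the call that fails with the RequestError shown above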

Hi, the appliance we publish actually does not enable the ISO domain on the VM anymore for exactly this reason. I believe we mention this in the documentation somewhere too. Please do not host domains inside the engine VM, or disable any domains that are served from inside it. Best regards -- Martin Sivak SLA / oVirt On Tue, Jan 10, 2017 at 3:41 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi!
I have a testing oVirt installation with only one host and running self hosted engine.
If I put host in global maintenance mode:
hosted-engine --set-maintenance --mode=global
then ssh into the self hosted engine VM and shut it down using 'init 0', when I run on the host:
hosted-engine --set-maintenance --mode=none
I have:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 73, in <module>
    if not maintenance.set_mode(sys.argv[1]):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 61, in set_mode
    value=m_global,
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 259, in set_maintenance_mode
    str(value))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 202, in set_global_md_flag
    self._configure_broker_conn(broker)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 180, in _configure_broker_conn
    dom_type=dom_type)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 176, in set_storage_domain
    .format(sd_type, options, e))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: Failed to set storage domain FilesystemBackend, options {'dom_type': 'nfs3', 'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}: Connection timed out
This is because the host has the ISO domain NFS export mounted but the self hosted engine VM is not running, I think... So I have to reboot the host before changing the maintenance mode.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi! You are right! But I think it isn't correct that the command fails if the ISO domain isn't available. It's OK if the command fails when the self hosted engine storage is not available, but not for the ISO domain. On 10/01/2017 at 17:47, Martin Sivak wrote:
Hi,
the appliance we publish actually does not enable the ISO domain on the VM anymore for exactly this reason. I believe we mention this in the documentation somewhere too.
Please do not host domains inside the engine VM, or disable any domains that are served from inside it.
Best regards
-- Martin Sivak SLA / oVirt
On Tue, Jan 10, 2017 at 3:41 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi!
I have a testing oVirt installation with only one host and running self hosted engine.
If I put host in global maintenance mode:
hosted-engine --set-maintenance --mode=global
then ssh into the self hosted engine VM and shut it down using 'init 0', when I run on the host:
hosted-engine --set-maintenance --mode=none
I have:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 73, in <module>
    if not maintenance.set_mode(sys.argv[1]):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 61, in set_mode
    value=m_global,
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 259, in set_maintenance_mode
    str(value))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 202, in set_global_md_flag
    self._configure_broker_conn(broker)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 180, in _configure_broker_conn
    dom_type=dom_type)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 176, in set_storage_domain
    .format(sd_type, options, e))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: Failed to set storage domain FilesystemBackend, options {'dom_type': 'nfs3', 'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}: Connection timed out
This is because the host has the ISO domain NFS export mounted but the self hosted engine VM is not running, I think... So I have to reboot the host before changing the maintenance mode.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi, we will focus on this exact issue (and a couple of similar ones) in the next release, but the reason is actually pretty simple: the ISO domain is mounted into the /rhev space, and when a simple traversal hits the unavailable NFS ISO domain, it gets stuck... and the timeout is somewhat long. Martin On Tue, Jan 10, 2017 at 6:37 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi!
You are right! But I think it isn't correct that the command fails if the ISO domain isn't available. It's OK if the command fails when the self hosted engine storage is not available, but not for the ISO domain.
On 10/01/2017 at 17:47, Martin Sivak wrote:
Hi,
the appliance we publish actually does not enable the ISO domain on the VM anymore for exactly this reason. I believe we mention this in the documentation somewhere too.
Please do not host domains inside the engine VM, or disable any domains that are served from inside it.
Best regards
-- Martin Sivak SLA / oVirt
On Tue, Jan 10, 2017 at 3:41 PM, Stefano Danzi <s.danzi@hawai.it> wrote:
Hi!
I have a testing oVirt installation with only one host and running self hosted engine.
If I put host in global maintenance mode:
hosted-engine --set-maintenance --mode=global
then ssh into the self hosted engine VM and shut it down using 'init 0', when I run on the host:
hosted-engine --set-maintenance --mode=none
I have:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 73, in <module>
    if not maintenance.set_mode(sys.argv[1]):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/set_maintenance.py", line 61, in set_mode
    value=m_global,
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 259, in set_maintenance_mode
    str(value))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 202, in set_global_md_flag
    self._configure_broker_conn(broker)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 180, in _configure_broker_conn
    dom_type=dom_type)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 176, in set_storage_domain
    .format(sd_type, options, e))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: Failed to set storage domain FilesystemBackend, options {'dom_type': 'nfs3', 'sd_uuid': '46f55a31-f35f-465c-b3e2-df45c05e06a7'}: Connection timed out
This is because the host has the ISO domain NFS export mounted but the self hosted engine VM is not running, I think... So I have to reboot the host before changing the maintenance mode.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
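Based on Martin's explanation, a possible workaround instead of rebooting the whole host is to release the stale ISO-domain mount so the traversal under /rhev no longer blocks. This is only a sketch; the exact mount point and NFS export name are assumptions and will differ per setup:

# list NFS mounts under the vdsm storage tree to spot the unreachable ISO export
mount | grep /rhev/data-center/mnt

# lazily detach the unreachable ISO-domain export (example path is hypothetical)
umount -l /rhev/data-center/mnt/engine.example.com:_iso

# then retry leaving global maintenance
hosted-engine --set-maintenance --mode=none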


On Wed, Jan 11, 2017 at 9:30 AM, Grundmann, Christian <Christian.Grundmann@fabasoft.com> wrote:
Hi,
Release Notes showing “BZ 1374988 <https://bugzilla.redhat.com/1374988> MOM causes Vdsm to slow down, high number of 'vmGetIoTunePolicy' API calls – vdsm”
Bugzilla shows “Target Release: 4.18.16”
yum update ovirt-node-ng-image gave me ovirt-node-ng-image-4.0.6-1.el7
which installs vdsm-4.18.15.3-1
node-ng should have vdsm-4.18.21-1.el7.centos.noarch; I fear that the build job took outdated mirrors to make the compose. I'm going to re-trigger the build and re-publish it today. In the meantime, a yum update should let you update vdsm within the node host. I'm sorry for the issue.
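As a quick check on the node itself (a small sketch; the package name is simply the one mentioned in this thread):

# see which vdsm build the node is currently running
rpm -q vdsm

# pull the fixed vdsm directly, as suggested above, until the respun node image is published
yum update vdsm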
Is it fixed or not?
Thx Christian
*From:* users-bounces@ovirt.org [mailto:users-bounces@ovirt.org] *On behalf of* Sandro Bonazzola *Sent:* Tuesday, 10 January 2017 13:35 *To:* announce@ovirt.org; users <users@ovirt.org> *Subject:* [ovirt-users] [ANN] oVirt 4.0.6 Release is now available
The oVirt Project is pleased to announce the general availability of oVirt 4.0.6, as of January 10th, 2017.
This release is available now for:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 23 (tech preview)
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 23 (tech preview)
* oVirt Next Generation Node 4.0
This update is the sixth in a series of stabilization updates to the 4.0 series.
4.0.6 brings 4 enhancements and 59 bugfixes, including 24 high or urgent
severity fixes, on top of oVirt 4.0 series
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
* A new oVirt Live ISO is already available. [2]
* A new oVirt Next Generation Node will be available soon [2]
* A new oVirt Engine Appliance is already available for Red Hat Enterprise Linux and CentOS Linux (or similar)
* Mirrors[3] might need up to one day to synchronize.
Additional Resources:
* Read more about the oVirt 4.0.6 release highlights: http://www.ovirt.org/release/4.0.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.0.6/
[2] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

I'm getting yum errors when I try to update the ovirt-release40 package on our engine.
ovirt-release40-4.0.6-1.noarch FAILED
http://mirror.isoc.org.il/pub/ovirt/ovirt-4.0/rpm/el7/noarch/ovirt-release40...: [Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=ovirt-4.0 clean metadata
Running yum clean does not have any effect. Is there an issue with the mirrors? On Tue, 2017-01-10 at 13:34 +0100, Sandro Bonazzola wrote:
The oVirt Project is pleased to announce the general availability of oVirt 4.0.6, as of January 10th, 2017. This release is available now for: * Red Hat Enterprise Linux 7.3 or later * CentOS Linux (or similar) 7.3 or later * Fedora 23 (tech preview) This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.3 or later * CentOS Linux (or similar) 7.3 or later * Fedora 23 (tech preview) * oVirt Next Generation Node 4.0
This update is the sixth in a series of stabilization updates to the 4.0 series. 4.0.6 brings 4 enhancements and 59 bugfixes, including 24 high or urgent severity fixes, on top of oVirt 4.0 series See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed. Notes: * A new oVirt Live ISO is already available. [2] * A new oVirt Next Generation Node will be available soon [2] * A new oVirt Engine Appliance is already available for Red Hat Enterprise Linux and CentOS Linux (or similar) * Mirrors[3] might need up to one day to synchronize.
Additional Resources: * Read more about the oVirt 4.0.6 release highlights: http://www.ovirt.org/release/4.0.6/ * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/ [1] http://www.ovirt.org/release/4.0.6/ [2] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/ [3] http://www.ovirt.org/Repository_mirrors#Current_mirrors -- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

On Wed, Jan 11, 2017 at 3:26 PM, Michael Watters <Michael.Watters@dart.biz> wrote:
I'm getting yum errors when I try to update the ovirt-release40 package on our engine.
ovirt-release40-4.0.6-1.noarch FAILED http://mirror.isoc.org.il/pub/ovirt/ovirt-4.0/rpm/el7/noarch/ovirt-release40-4.0.6-1.noarch.rpm: [Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=ovirt-4.0 clean metadata
Running yum clean does not have any effect. Is there an issue with the mirrors?
Mirrors are currently outdated; I guess they need more time to sync. In the meantime you can uncomment the baseurl line in the .repo file and comment out the mirrorlist line, to be sure you get the latest repo content.
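For example, something like this in the repo file (a sketch only; the file name and the mirrorlist URL shown here follow the usual ovirt-release40 defaults and may differ on your system, so check your own .repo file before editing):

# /etc/yum.repos.d/ovirt-4.0.repo
[ovirt-4.0]
name=Latest oVirt 4.0 Release
# mirrorlist commented out so yum stops using the stale mirrors:
#mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-4.0-el7
# baseurl uncommented so packages come straight from resources.ovirt.org:
baseurl=http://resources.ovirt.org/pub/ovirt-4.0/rpm/el7/
enabled=1
gpgcheck=1

# then refresh the metadata and retry the update
yum --enablerepo=ovirt-4.0 clean metadata
yum update ovirt-release40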
On Tue, 2017-01-10 at 13:34 +0100, Sandro Bonazzola wrote:
The oVirt Project is pleased to announce the general availability of oVirt 4.0.6, as of January 10th, 2017.
This release is available now for: * Red Hat Enterprise Linux 7.3 or later * CentOS Linux (or similar) 7.3 or later * Fedora 23 (tech preview)
This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.3 or later * CentOS Linux (or similar) 7.3 or later * Fedora 23 (tech preview) * oVirt Next Generation Node 4.0
This update is the sixth in a series of stabilization updates to the 4.0 series. 4.0.6 brings 4 enhancements and 59 bugfixes, including 24 high or urgent severity fixes, on top of oVirt 4.0 series See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.
Notes: * A new oVirt Live ISO is already available. [2] * A new oVirt Next Generation Node will be available soon [2] * A new oVirt Engine Appliance is already available for Red Hat Enterprise Linux and CentOS Linux (or similar) * Mirrors[3] might need up to one day to synchronize.
Additional Resources: * Read more about the oVirt 4.0.6 release highlights: http://www.ovirt.org/release/4.0.6/ * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.0.6/ [2] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/ [3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

On Tue, Jan 10, 2017 at 1:34 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
The oVirt Project is pleased to announce the general availability of oVirt 4.0.6, as of January 10th, 2017.
This release is available now for: * Red Hat Enterprise Linux 7.3 or later * CentOS Linux (or similar) 7.3 or later * Fedora 23 (tech preview)
This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.3 or later * CentOS Linux (or similar) 7.3 or later * Fedora 23 (tech preview) * oVirt Next Generation Node 4.0
This update is the sixth in a series of stabilization updates to the 4.0 series. 4.0.6 brings 4 enhancements and 59 bugfixes, including 24 high or urgent severity fixes, on top of oVirt 4.0 series See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed.
Notes: * A new oVirt Live ISO is already available. [2] * A new oVirt Next Generation Node will be available soon [2]
A new oVirt Next Generation Node is now available [2]. If you update oVirt Next Generation Node you'll receive the 4.0.6.1 image build, since we had to respin it due to a problem in the build we delivered yesterday (the wrong vdsm version was included in the image).
* A new oVirt Engine Appliance is already available for Red Hat Enterprise Linux and CentOS Linux (or similar) * Mirrors[3] might need up to one day to synchronize.
Additional Resources: * Read more about the oVirt 4.0.6 release highlights: http://www.ovirt.org/release/4.0.6/ * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.0.6/ [2] http://resources.ovirt.org/pub/ovirt-4.0-pre/iso/ [3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

Hi, Just FYI, I upgraded from EL7.2/oVirt 4.0.5 to EL7.3/oVirt 4.0.6 and when I ran engine-setup I ran into: https://bugzilla.redhat.com/show_bug.cgi?id=1293844

Specifically, engine-setup complained that dwhd was still running, but systemctl status showed it was not running. I was able to finally get around this by manually stopping/starting ovirt-engine-dwhd until it actually came up again, and then engine-setup ran fine. But what would cause it to get into this behavior in the first place? I SUSPECT part of the issue was that I upgraded EL7.2 to EL7.3 at the same time, which probably updated (and restarted) PG, and I didn't wait the requisite hour until cron restarted it on its own?

FWIW, I'm running on a single-host system with hosted-engine.

Thanks,
-derek

Sandro Bonazzola <sbonazzo@redhat.com> writes:
The oVirt Project is pleased to announce the general availability of oVirt 4.0.6, as of January 10th, 2017. This release is available now for: * Red Hat Enterprise Linux 7.3 or later * CentOS Linux (or similar) 7.3 or later * Fedora 23 (tech preview) This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.3 or later * CentOS Linux (or similar) 7.3 or later * Fedora 23 (tech preview) * oVirt Next Generation Node 4.0
-- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant
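For anyone hitting the same thing, the stop/start loop described above boils down to something like this (a sketch; ovirt-engine-dwhd is the DWH service name, and the check/restart cycle is just what the message describes, not an official procedure):

# check what systemd thinks about the DWH daemon
systemctl status ovirt-engine-dwhd

# restart it until it actually reports active, then re-run engine-setup
systemctl restart ovirt-engine-dwhd
systemctl is-active ovirt-engine-dwhd && engine-setup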

On Sun, Jan 15, 2017 at 3:39 PM, Derek Atkins <derek@ihtfp.com> wrote:
FWIW, I'm running on a single-host system with hosted-engine.
I made the same update on two single host environments with self hosted engine without any problem. My approach was:
- shut down all VMs except the self hosted engine
- put the environment in global maintenance
- update the self hosted engine environment (with the commands: yum update "ovirt-*-setup*" and engine-setup)
- verify that the connection to the engine web admin GUI is still OK and at 4.0.6; the engine OS at this time is still 7.2
- shut down the engine VM
- put the hypervisor host in local maintenance
- stop some services (ovirt-ha-agent, ovirt-ha-broker, vdsmd)
- run yum update, which brings the hypervisor to 7.3 and also the new vdsm and related packages to the 4.0.6 level, plus qemu-kvm-ev at 2.6.0-27.1.el7
- adjust/merge some rpmnew files (both OS in general and oVirt related)
- stop vdsmd again (agent and broker remained down)
- stop sanlock (here it sometimes times out, so I "kill -9" the remaining process, otherwise the system is unable to shut down because it cannot unmount the NFS filesystems; in fact in my environment the host itself provides the NFS mounts for the data storage domain and the ISO one, and the umount problem is only with the data one)
- shut down the host and reboot it
- exit maintenance
- the engine VM starts after a while
- enter global maintenance again
- yum update on the engine VM and adjust rpmnew files
- shut down the engine VM
- exit global maintenance
- after a while the engine VM starts
- power on the VMs required.

Gianluca
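Condensed into commands, the key parts of that procedure look roughly like this (a sketch following the list above, not a full upgrade guide):

# on the host: enter global maintenance
hosted-engine --set-maintenance --mode=global

# inside the engine VM: update only the setup packages, then run the upgrade tool
yum update "ovirt-*-setup*"
engine-setup

# later, on the host (engine VM shut down, host in local maintenance, ovirt-ha-agent/ovirt-ha-broker/vdsmd stopped):
yum update    # brings the host to 7.3, vdsm and related packages to 4.0.6, qemu-kvm-ev to 2.6.0-27.1.el7

# after the host reboot, exit maintenance so the engine VM starts again
hosted-engine --set-maintenance --mode=none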

Hi, There's one BIG difference between what you did and what I did... On Sun, January 15, 2017 10:08 am, Gianluca Cecchi wrote:
On Sun, Jan 15, 2017 at 3:39 PM, Derek Atkins <derek@ihtfp.com> wrote:
FWIW, I'm running on a single-host system with hosted-engine.
I made the same update on two single host environments with self hosted engine without any problem. My approach was;
- shutdown all VMs except self hosted engine - put environment in global maintenance - update the self hosted engine environment (with commands: yum update "ovirt-*-setup*" engine-setup )
I did "yum update" and not "yum update "ovirt-*-setup*".. and...
- verify connection to engine web admin gui is still ok and 4.0.6. Engine os at this time is still 7.2
.... I updated the OS to 7.3 in the engine VM. I think that's the root of this bug, having PG restarted from under dwhd. The fact that your engine is still at 7.2 implies you didn't also perform the OS update on the engine. I wanted to do that. (Not sure why you didn't). -derek
- shutdown engine VM
- put hypervisor host in local maintenance
- stop some services (ovirt-ha-agent, ovirt-ha-broker, vdsmd)
- run yum update that brings hypervisor at 7.3 and also new vdsm and related packages to 4.0.6 level and also qemu-kvm-ev at 2.6.0-27.1.el7
- adjust/merge some rpmnew files (both os in general and ovirt related)
- stop again vdsmd (agent and broker remained down)
- stop sanlock (here sometimes it goes timeout so I "kill -9" the remaining process otherwise the system is unable to shutdown due to impossibility to umount nfs filesystems. In fact in my environment the host itself provides nfs mounts for data storage domain and iso one; the umount problem is only with the data one)
- shutdown host and reboot it
- exit maintenance
- engine vm starts after a while
- enter global maintenance again
- yum update on engine vm and adjust rpmnew files
- shutdown engine vm
- exit global maintenance
- after a while engine vm starts
- power on VMs required.
Gianluca
-- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant

On Sun, Jan 15, 2017 at 4:54 PM, Derek Atkins <derek@ihtfp.com> wrote:
- update the self hosted engine environment (with commands: yum update "ovirt-*-setup*" engine-setup )
I did "yum update" and not "yum update "ovirt-*-setup*".. and...
- verify connection to engine web admin gui is still ok and 4.0.6. Engine os at this time is still 7.2
.... I updated the OS to 7.3 in the engine VM.
I think that's the root of this bug, having PG restarted from under dwhd. The fact that your engine is still at 7.2 implies you didn't also perform the OS update on the engine. I wanted to do that. (Not sure why you didn't).
-derek
See below, I did it at the end, after update of the host
- shutdown engine VM
- put hypervisor host in local maintenance
- stop some services (ovirt-ha-agent, ovirt-ha-broker, vdsmd)
- run yum update that brings hypervisor at 7.3 and also new vdsm and related packages to 4.0.6 level and also qemu-kvm-ev at 2.6.0-27.1.el7
here for the host I used the approach of a double update: OS packages and oVirt packages
- adjust/merge some rpmnew files (both os in general and ovirt related)
- stop again vdsmd (agent and broker remained down)
- stop sanlock (here sometimes it goes timeout so I "kill -9" the remaining process otherwise the system is unable to shutdown due to impossibility to umount nfs filesystems. In fact in my environment the host itself provides nfs mounts for data storage domain and iso one; the umount problem is only with the data one)
- shutdown host and reboot it
- exit maintenance
- engine vm starts after a while
- enter global maintenance again
- yum update on engine vm and adjust rpmnew files
here is the step where I update the engine VM's general OS packages from 7.2 to 7.3...
- shutdown engine vm
- exit global maintenance
- after a while engine vm starts
- power on VMs required.
Gianluca
-- Derek Atkins 617-623-3745 derek@ihtfp.com www.ihtfp.com Computer and Internet Security Consultant
participants (7)
- Derek Atkins
- Gianluca Cecchi
- Grundmann, Christian
- Martin Sivak
- Michael Watters
- Sandro Bonazzola
- Stefano Danzi