ovirt-ha-agent cpu usage

Hello,
I have a test machine that is a NUC6 with an i5, 32 GB of RAM and SSD disks. It is configured as a single-host environment with a Self Hosted Engine VM. Both host and SHE are CentOS 7.2 and the oVirt version is 3.6.6.2-1.el7.
I notice that, with 3 VMs powered on and doing nothing special (the engine VM, a CentOS 7 VM and a Fedora 24 VM), the ovirt-ha-agent process on the host often spikes its CPU usage. See for example this quick video with the top command running on the host, which reflects what happens continuously.
https://drive.google.com/file/d/0BwoPbcrMv8mvYUVRMFlLVmxRdXM/view?usp=sharing
Is it normal that ovirt-ha-agent consumes this much CPU? Looking into /var/log/ovirt-hosted-engine-ha/agent.log I see nothing special, only messages of type "INFO". The same for broker.log.
Thanks,
Gianluca

Do the spikes correlate with INFO messages about extracting the OVF?

On Mon, Aug 8, 2016 at 1:03 PM, Roy Golan <rgolan@redhat.com> wrote:
Do the spikes correlate with INFO messages about extracting the OVF?
yes, it seems so and it happens every 14-15 seconds....
These are the lines I see scrolling in agent.log when I notice cpu spikes in ovirt-ha-agent:

MainThread::INFO::2016-08-08 15:03:07,815::storage_server::212::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
MainThread::INFO::2016-08-08 15:03:08,144::storage_server::220::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Refreshing the storage domain
MainThread::INFO::2016-08-08 15:03:08,705::hosted_engine::685::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Preparing images
MainThread::INFO::2016-08-08 15:03:08,705::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images) Preparing images
MainThread::INFO::2016-08-08 15:03:09,653::hosted_engine::688::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Reloading vm.conf from the shared storage domain
MainThread::INFO::2016-08-08 15:03:09,653::config::205::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2016-08-08 15:03:09,843::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:223d26c2-1668-493c-a322-8054923d135f, volUUID:108a362c-f5a9-440e-8817-1ed8a129afe8
MainThread::INFO::2016-08-08 15:03:10,309::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:12ca2fc6-01f7-41ab-ab22-e75c822ac9b6, volUUID:1a18851e-6858-401c-be6e-af14415034b5
MainThread::INFO::2016-08-08 15:03:10,652::ovf_store::109::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-08-08 15:03:10,974::ovf_store::116::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /rhev/data-center/mnt/ovirt01.lutwyn.org:_SHE__DOMAIN/31a9e9fd-8dcb-4475-aac4-09f897ee1b45/images/12ca2fc6-01f7-41ab-ab22-e75c822ac9b6/1a18851e-6858-401c-be6e-af14415034b5
MainThread::INFO::2016-08-08 15:03:11,494::config::225::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Found an OVF for HE VM, trying to convert
MainThread::INFO::2016-08-08 15:03:11,497::config::230::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Got vm.conf from OVF_STORE
MainThread::INFO::2016-08-08 15:03:11,675::hosted_engine::462::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 3400)

Hi,
did you find a solution or the cause of this high CPU usage? I have installed the self hosted engine on another server and there is no VM running, but ovirt-ha-agent uses the CPU heavily.
cheers
gregor

On Wed, Oct 5, 2016 at 9:17 AM, gregor <gregor_forum@catrix.at> wrote:
Hi,
did you find a solution or the cause of this high CPU usage? I have installed the self hosted engine on another server and there is no VM running, but ovirt-ha-agent uses the CPU heavily.
Yes, it's due to the fact that ovirt-ha-agent periodically reconnects over JSON-RPC, and this is CPU intensive since the client has to parse the YAML API specification each time it connects.
The issue is tracked here:
https://bugzilla.redhat.com/show_bug.cgi?id=1349829 - ovirt-ha-agent should reuse json-rpc connections
but it depends on:
https://bugzilla.redhat.com/show_bug.cgi?id=1376843 - [RFE] Implement a keep-alive with reconnect if needed logic for the python jsonrpc client
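Just to give a feel for the cost (this is not oVirt code): the sketch below times a few consecutive parses of a large YAML file with PyYAML's pure-Python loader, which is roughly the work the agent repeats on every reconnect. The schema path is only a guess for illustration; point it at the vdsm API schema actually installed on your host.

    import time
    import yaml

    SCHEMA = "/usr/lib/python2.7/site-packages/api/vdsm-api.yml"  # hypothetical path

    for i in range(3):  # the agent effectively redoes this work on every reconnect
        start = time.time()
        with open(SCHEMA) as f:
            yaml.load(f)  # default pure-Python loader, the slow path discussed here
        print("parse %d took %.2f s" % (i, time.time() - start))

Even a fraction of a second per parse adds up when it is repeated every monitoring cycle, which is what shows up as the periodic spikes described above.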

On Wed, Oct 5, 2016 at 10:24 AM, Simone Tiraboschi <stirabos@redhat.com> wrote:
Yes, it's due to the fact that ovirt-ha-agent periodically reconnects over JSON-RPC, and this is CPU intensive since the client has to parse the YAML API specification each time it connects.
Simone, reusing the connection is a good idea anyway, but what you describe is a bug in the client library. The library does *not* need to load and parse the schema at all for sending requests to vdsm.
The schema is only needed if you want to verify request parameters, or provide online help; these are not needed in a client library.
Please file an infra bug about it.
Nir
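To illustrate that point with a toy example (this is not vdsm's client code, and the verb name is only an example): a JSON-RPC request is just a small JSON document, so a client can build and serialize one without ever loading the schema.

    import json
    import uuid

    def build_request(method, params):
        # Composing a JSON-RPC 2.0 call needs no API specification at all;
        # the schema would only matter for validating params or for online help.
        return json.dumps({
            "jsonrpc": "2.0",
            "id": str(uuid.uuid4()),
            "method": method,
            "params": params,
        })

    print(build_request("Host.getStats", {}))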

On Wed, Oct 5, 2016 at 10:34 AM, Nir Soffer <nsoffer@redhat.com> wrote:
Simone, reusing the connection is a good idea anyway, but what you describe is a bug in the client library. The library does *not* need to load and parse the schema at all for sending requests to vdsm.
The schema is only needed if you want to verify request parameters, or provide online help; these are not needed in a client library.
Please file an infra bug about it.
Done, https://bugzilla.redhat.com/show_bug.cgi?id=1381899 Thanks.

On Wed, Oct 5, 2016 at 1:33 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
Yes, it's due to the fact that ovirt-ha-agent periodically reconnects over JSON-RPC, and this is CPU intensive since the client has to parse the YAML API specification each time it connects.
Here is a patch that should eliminate most of the problem:
https://gerrit.ovirt.org/65230
Would be nice if it can be tested on the system showing this problem.
Cheers,
Nir

On 7 Oct 2016, at 14:42, Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Oct 5, 2016 at 10:24 AM, Simone Tiraboschi <stirabos@redhat.com> wrote:
Yes, it's due to the fact that ovirt-ha-agent periodically reconnects over JSON-RPC, and this is CPU intensive since the client has to parse the YAML API specification each time it connects.

wasn't it supposed to be fixed to reuse the connection? Like all the other clients (vdsm migration code :-)
Does schema validation matter then if there would be only one connection at the start up?

On Fri, Oct 7, 2016 at 3:52 PM, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
wasn't it supposed to be fixed to reuse the connection? Like all the other clients (vdsm migration code :-)
This is an orthogonal issue.
Does schema validation matter then if there would be only one connection at the start up?
Loading once does not help command line tools like vdsClient, hosted-engine and vdsm-tool.
Nir

On Fri, Oct 7, 2016 at 2:59 PM, Nir Soffer <nsoffer@redhat.com> wrote:
Here is a patch that should eliminate most of the problem: https://gerrit.ovirt.org/65230
Would be nice if it can be tested on the system showing this problem.
this is a video of 1 minute with the same system as the first post, but on 4.0.3 now, and the same 3 VMs powered on without any particular load. It seems very similar to the previous 3.6.6 in CPU used by ovirt-ha-agent.
https://drive.google.com/file/d/0BwoPbcrMv8mvSjFDUERzV1owTG8/view?usp=sharing
Enjoy Nir ;-)
If I can apply the patch also to 4.0.3, I'm going to see if there is then a different behavior.
Let me know,
Gianluca

On Fri, Oct 7, 2016 at 3:22 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
If I can apply the patch also to 4.0.3 I'm going to see if there is then a different behavior. Let me know,
I'm trying it right now. Any other tests will be really appreciated.
The patch is pretty simple, you can apply it on the fly. You have to shut down ovirt-ha-broker and ovirt-ha-agent; then you can directly edit /usr/lib/python2.7/site-packages/api/vdsmapi.py around line 97, changing
loaded_schema = yaml.load(f)
to
loaded_schema = yaml.load(f, Loader=yaml.CLoader)
Please pay attention to keep exactly the same amount of initial spaces.
Then you can simply restart the HA agent and check.
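One thing worth checking before editing the file (a generic PyYAML detail, not something from the patch itself): yaml.CLoader only exists if the PyYAML on the host was built against libyaml, otherwise the edited line fails with an AttributeError instead of falling back. A quick check:

    import yaml

    # __with_libyaml__ is set by PyYAML itself when the libyaml C extension is usable
    print("libyaml available: %s" % getattr(yaml, "__with_libyaml__", False))
    print("CLoader present: %s" % hasattr(yaml, "CLoader"))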

On Fri, Oct 7, 2016 at 3:35 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
Then you can simply restart the HA agent and check.
What I've done (I didn't read your answer in between, and this is a test system, not so important...):

set to global maintenance
patch vdsmapi.py
restart vdsmd
restart ovirt-ha-agent
set maintenance to none

And a bright new 3-minute video here:
https://drive.google.com/file/d/0BwoPbcrMv8mvVzBPUVRQa1pwVnc/view?usp=sharing

It seems that now ovirt-ha-agent either does not show up among the top CPU processes at all, or at least ranges between 5% and 12% and not more....

BTW: I see this in vdsm status now

[root@ovirt01 api]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2016-10-07 15:30:57 CEST; 32min ago
  Process: 20883 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
  Process: 20886 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
 Main PID: 21023 (vdsm)
   CGroup: /system.slice/vdsmd.service
           ├─21023 /usr/bin/python /usr/share/vdsm/vdsm
           ├─21117 /usr/libexec/ioprocess --read-pipe-fd 41 --write-pipe-fd 40 --max-threads 10 --max-queue...
           ├─21123 /usr/libexec/ioprocess --read-pipe-fd 48 --write-pipe-fd 46 --max-threads 10 --max-queue...
           ├─21134 /usr/libexec/ioprocess --read-pipe-fd 57 --write-pipe-fd 56 --max-threads 10 --max-queue...
           ├─21143 /usr/libexec/ioprocess --read-pipe-fd 65 --write-pipe-fd 64 --max-threads 10 --max-queue...
           ├─21149 /usr/libexec/ioprocess --read-pipe-fd 73 --write-pipe-fd 72 --max-threads 10 --max-queue...
           ├─21156 /usr/libexec/ioprocess --read-pipe-fd 80 --write-pipe-fd 78 --max-threads 10 --max-queue...
           ├─21177 /usr/libexec/ioprocess --read-pipe-fd 88 --write-pipe-fd 87 --max-threads 10 --max-queue...
           ├─21204 /usr/libexec/ioprocess --read-pipe-fd 99 --write-pipe-fd 98 --max-threads 10 --max-queue...
           └─21239 /usr/libexec/ioprocess --read-pipe-fd 111 --write-pipe-fd 110 --max-threads 10 --max-que...

Oct 07 16:02:52 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Oct 07 16:02:54 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Oct 07 16:02:56 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Oct 07 16:02:58 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Oct 07 16:03:11 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Oct 07 16:03:15 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Oct 07 16:03:15 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Oct 07 16:03:18 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Oct 07 16:03:20 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Oct 07 16:03:22 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
Hint: Some lines were ellipsized, use -l to show in full.
[root@ovirt01 api]#

Now I also restarted ovirt-ha-broker just in case

And a bright new 3-minute video here:
https://drive.google.com/file/d/0BwoPbcrMv8mvVzBPUVRQa1pwVnc/view?usp=sharing
It seems that now ovirt-ha-agent either does not show up among the top CPU processes at all, or at least ranges between 5% and 12% and not more....
5-12% is probably 10 times more than needed for the agent; profiling should tell us where the time is spent. Since the agent depends on vdsm, we can reuse the vdsm cpu profiler, see lib/vdsm/profiling/cpu.py. It is not ready yet to be used by other applications, but making it more general should be easy.
Nir
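Until that profiler is generalized, a rough stand-in is the standard library's cProfile; here is a minimal sketch, where the idea of wrapping a single agent monitoring iteration (and the function name in the usage comment) is only an assumption for illustration:

    import cProfile
    import pstats

    def profile_call(func, *args, **kwargs):
        # Run one call under cProfile and print the 20 most expensive entries
        # by cumulative time; repeated YAML parsing would show up clearly here.
        prof = cProfile.Profile()
        result = prof.runcall(func, *args, **kwargs)
        pstats.Stats(prof).sort_stats("cumulative").print_stats(20)
        return result

    # Hypothetical usage: profile_call(agent.start_monitoring)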

it worked for me as well - load avg. < 1 now; ovirt-ha-agent pops up in top periodically, but not on top using 100% CPU all the time anymore.
thanks!
sam

On Fri, Oct 7, 2016 at 3:35 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
If I can apply the patch also to 4.0.3 I'm going to see if there is then a different behavior. Let me know,
I'm trying it right now. Any other tests will be really appreciated.
The patch is pretty simple, you can apply it on the fly. You have to shut down ovirt-ha-broker and ovirt-ha-agent; then you can directly edit /usr/lib/python2.7/site-packages/api/vdsmapi.py around line 97, changing loaded_schema = yaml.load(f) to loaded_schema = yaml.load(f, Loader=yaml.CLoader). Please pay attention to keep exactly the same amount of initial spaces.
Then you can simply restart the HA agent and check.
Hello, I'm again registering high spikes of ovirt-ha-agent with only 2-3 VMs up and with almost no activity.
The package of the involved file /usr/lib/python2.7/site-packages/api/vdsmapi.py is now at version vdsm-api-4.19.4-1.el7.centos.noarch and I see that the file contains this kind of lines:

129         try:
130             for path in paths:
131                 with open(path) as f:
132                     if hasattr(yaml, 'CLoader'):
133                         loader = yaml.CLoader
134                     else:
135                         loader = yaml.Loader
136                     loaded_schema = yaml.load(f, Loader=loader)
137
138                 types = loaded_schema.pop('types')
139                 self._types.update(types)
140                 self._methods.update(loaded_schema)
141         except EnvironmentError:
142             raise SchemaNotFound("Unable to find API schema file")

So there is a conditional statement... How can I be sure that "loader" is set to "yaml.CLoader", which is what lowered the cpu usage of ovirt-ha-agent in 4.0?

Thanks,
Gianluca

On Tue, Apr 25, 2017 at 12:59 AM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Fri, Oct 7, 2016 at 3:35 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
If I can apply the patch also to 4.0.3 I'm going to see if there is then a different behavior. Let me know,
I'm trying it right now. Any other tests will be really appreciated.
The patch is pretty simple, you can apply it on the fly. You have to shut down ovirt-ha-broker and ovirt-ha-agent; then you can directly edit /usr/lib/python2.7/site-packages/api/vdsmapi.py around line 97, changing loaded_schema = yaml.load(f) to loaded_schema = yaml.load(f, Loader=yaml.CLoader). Please pay attention to keep exactly the same amount of initial spaces.
Then you can simply restart the HA agent and check.
Hello, I'm again registering high spikes of ovirt-ha-agent with only 2-3 VMs up and with almost no activity. The package of the involved file /usr/lib/python2.7/site-packages/api/vdsmapi.py is now at version vdsm-api-4.19.4-1.el7.centos.noarch and I see that the file contains this kind of lines:

129         try:
130             for path in paths:
131                 with open(path) as f:
132                     if hasattr(yaml, 'CLoader'):
133                         loader = yaml.CLoader
134                     else:
135                         loader = yaml.Loader
136                     loaded_schema = yaml.load(f, Loader=loader)
137
138                 types = loaded_schema.pop('types')
139                 self._types.update(types)
140                 self._methods.update(loaded_schema)
141         except EnvironmentError:
142             raise SchemaNotFound("Unable to find API schema file")

So there is a conditional statement... How can I be sure that "loader" is set to "yaml.CLoader", which is what lowered the cpu usage of ovirt-ha-agent in 4.0?
Hi Gianluca,

You can run this on the host:

$ python -c "import yaml; print 'CLoader:', hasattr(yaml, 'CLoader')"
CLoader: True

If you get "CLoader: False", you have some packaging issue, CLoader is available on all supported platforms.

Nir
Thanks, Gianluca
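To see how much the CLoader branch actually matters on a given host, a minimal timing sketch like the one below can help. This is not vdsm code; the schema path is an assumption and any sufficiently large YAML file will do:

import time
import yaml

SCHEMA = "/usr/lib/python2.7/site-packages/vdsm/api/vdsm-api.yml"  # assumed path

def time_load(loader_cls):
    start = time.time()
    with open(SCHEMA) as f:
        yaml.load(f, Loader=loader_cls)
    return time.time() - start

print("yaml.Loader : %.2fs" % time_load(yaml.Loader))
if hasattr(yaml, "CLoader"):
    print("yaml.CLoader: %.2fs" % time_load(yaml.CLoader))
else:
    print("yaml.CLoader not available (libyaml bindings missing)")

If the second number is an order of magnitude smaller than the first, the installed PyYAML can take the fast path; whether the agent actually takes it depends on the hasattr() branch shown in the snippet above.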

On Tue, Apr 25, 2017 at 11:28 PM, Nir Soffer <nsoffer@redhat.com> wrote:
Hi Gianluca,
You can run this on the host:
$ python -c "import yaml; print 'CLoader:', hasattr(yaml, 'CLoader')" CLoader: True
If you get "CLoader: False", you have some packaging issue, CLoader is available on all supported platforms.
Nir
Thanks, Gianluca
It seems ok.

[root@ovirt01 ovirt-hosted-engine-ha]# python -c "import yaml; print 'CLoader:', hasattr(yaml, 'CLoader')"
CLoader: True
[root@ovirt01 ovirt-hosted-engine-ha]#

Anyway, see here a sample of the spikes that it continues to have, from 15% to 55%, many times:
https://drive.google.com/file/d/0BwoPbcrMv8mvMy1xVUE3YzI2YVE/view?usp=sharin...

The host is an Intel NUC6i5 with 32 GB of ram. The engine, an F25 guest and a C7 desktop VM are running, doing almost nothing.

Gianluca

On Wed, Apr 26, 2017 at 11:36 AM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Tue, Apr 25, 2017 at 11:28 PM, Nir Soffer <nsoffer@redhat.com> wrote:
Hi Gianluca,
You can run this on the host:
$ python -c "import yaml; print 'CLoader:', hasattr(yaml, 'CLoader')" CLoader: True
If you get "CLoader: False", you have some packaging issue, CLoader is available on all supported platforms.
Nir
Thanks, Gianluca
It seems ok.
[root@ovirt01 ovirt-hosted-engine-ha]# python -c "import yaml; print 'CLoader:', hasattr(yaml, 'CLoader')" CLoader: True [root@ovirt01 ovirt-hosted-engine-ha]#
Anyway, see here a sample of the spikes that it continues to have, from 15% to 55%, many times:
https://drive.google.com/file/d/0BwoPbcrMv8mvMy1xVUE3YzI2YVE/view?usp=sharin...
There are two issues in this video:
- Memory leak: ovirt-ha-agent is using 1 GB of memory. It is very unlikely that it needs so much memory.
- Unusual cpu usage - but not the kind of usage related to yaml parsing.

I would open two bugs for this. We have seen the first issue a few months ago, and we did nothing about it, so the memory leak was not fixed.

To understand the unusual cpu usage, we need to integrate yappi into ovirt-ha-agent, and do some profiling to understand where cpu time is spent.

Simone, can you do something based on these patches?
https://gerrit.ovirt.org/#/q/topic:generic-profiler

I hope to get these patches merged soon.

Nir
The host is an Intel NUC6i5 with 32Gb of ram. There are the engine, an F25 guest and a C7 desktop VMs running, without doing almost anything.
Gianluca
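For reference, the kind of yappi-based profiling being proposed can be as small as the sketch below. It is illustrative only: the profiled function is a made-up stand-in rather than ovirt-ha-agent code, and it assumes the yappi package is installed:

import yappi

def monitoring_cycle():
    # stand-in for one iteration of the agent's monitoring loop
    return sum(i * i for i in range(100000))

yappi.set_clock_type("cpu")        # account CPU time rather than wall-clock time
yappi.start()
for _ in range(10):
    monitoring_cycle()
yappi.stop()
yappi.get_func_stats().print_all() # per-function CPU totals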

On Wed, Apr 26, 2017 at 12:52 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Apr 26, 2017 at 11:36 AM Gianluca Cecchi < gianluca.cecchi@gmail.com> wrote:
On Tue, Apr 25, 2017 at 11:28 PM, Nir Soffer <nsoffer@redhat.com> wrote:
Hi Gianluca,
You can run this on the host:
$ python -c "import yaml; print 'CLoader:', hasattr(yaml, 'CLoader')" CLoader: True
If you get "CLoader: False", you have some packaging issue, CLoader is available on all supported platforms.
Nir
Thanks, Gianluca
It seems ok.
[root@ovirt01 ovirt-hosted-engine-ha]# python -c "import yaml; print 'CLoader:', hasattr(yaml, 'CLoader')" CLoader: True [root@ovirt01 ovirt-hosted-engine-ha]#
Anyway, see here a sample of the spikes that it continues to have, from 15% to 55%, many times: https://drive.google.com/file/d/0BwoPbcrMv8mvMy1xVUE3YzI2YVE/view?usp=sharing
There are two issues in this video:
- Memory leak: ovirt-ha-agent is using 1 GB of memory. It is very unlikely that it needs so much memory.
- Unusual cpu usage - but not the kind of usage related to yaml parsing.

I would open two bugs for this. We have seen the first issue a few months ago, and we did nothing about it, so the memory leak was not fixed.
To understand the unusual cpu usage, we need to integrate yappi into ovirt-ha-agent, and do some profiling to understand where cpu time is spent.
Simone, can you do something based on these patches? https://gerrit.ovirt.org/#/q/topic:generic-profiler
I hope to get these patches merged soon.
Absolutely at this point.
Nir
The host is an Intel NUC6i5 with 32Gb of ram. There are the engine, an F25 guest and a C7 desktop VMs running, without doing almost anything.
Gianluca

On Wed, Apr 26, 2017 at 1:28 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Apr 26, 2017 at 12:52 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Apr 26, 2017 at 11:36 AM Gianluca Cecchi < gianluca.cecchi@gmail.com> wrote:
On Tue, Apr 25, 2017 at 11:28 PM, Nir Soffer <nsoffer@redhat.com> wrote:
Hi Gianluca,
You can run this on the host:
$ python -c "import yaml; print 'CLoader:', hasattr(yaml, 'CLoader')" CLoader: True
If you get "CLoader: False", you have some packaging issue, CLoader is available on all supported platforms.
Nir
Thanks, Gianluca
It seems ok.
[root@ovirt01 ovirt-hosted-engine-ha]# python -c "import yaml; print 'CLoader:', hasattr(yaml, 'CLoader')" CLoader: True [root@ovirt01 ovirt-hosted-engine-ha]#
Anyway, see here a sample of the spikes that it continues to have, from 15% to 55%, many times: https://drive.google.com/file/d/0BwoPbcrMv8mvMy1xVUE3YzI2YVE/view?usp=sharing
There are two issues in this video:
- Memory leak: ovirt-ha-agent is using 1 GB of memory. It is very unlikely that it needs so much memory.
- Unusual cpu usage - but not the kind of usage related to yaml parsing.

I would open two bugs for this. We have seen the first issue a few months ago, and we did nothing about it, so the memory leak was not fixed.
To understand the unusual cpu usage, we need to integrate yappi into ovirt-ha-agent, and do some profiling to understand where cpu time is spent.
Simone, can you do something based on these patches? https://gerrit.ovirt.org/#/q/topic:generic-profiler
I hope to get these patches merged soon.
Absolutely at this point.
On 4.1.1, 96% of the cpu time of ovirt-ha-agent is still spent in connect() in /usr/lib/python2.7/site-packages/vdsm/jsonrpcvdscli.py, and 95.98% of it is in Schema.__init__ in /usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py. So it's still the parsing of the api yaml schema.

On master we already merged a patch to reuse an existing connection if available, and this should mitigate/resolve the issue: https://gerrit.ovirt.org/73757/

It's still not that clear why we are facing this kind of performance regression.
Nir
The host is an Intel NUC6i5 with 32Gb of ram. There are the engine, an F25 guest and a C7 desktop VMs running, without doing almost anything.
Gianluca
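The idea behind the connection-reuse patch can be sketched with a tiny cache. This is not the code from https://gerrit.ovirt.org/73757/, only an illustration of the pattern; DummyClient and its connected flag are invented for the example:

class DummyClient(object):
    """Stand-in for a jsonrpc client whose construction is expensive."""
    def __init__(self):
        print("expensive connect + schema parsing happens here")
        self.connected = True

_cached_client = None

def get_client(factory=DummyClient):
    global _cached_client
    if _cached_client is None or not _cached_client.connected:
        _cached_client = factory()   # pay the connection cost only when needed
    return _cached_client

a = get_client()   # connects (and would parse the schema) once
b = get_client()   # reuses the cached connection
assert a is b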

On Wed, Apr 26, 2017 at 4:28 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On 4.1.1, the 96% of the cpu time of ovirt-ha-agent is still spent in connect() in /usr/lib/python2.7/site-packages/vdsm/jsonrpcvdscli.py and the 95.98% is in Schema.__init__ in /usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py
So it's still the parsing of the api yaml schema. On master we already merged a patch to reuse an existing connection if available and this should mitigate/resolve the issue: https://gerrit.ovirt.org/73757/
It's still not that clear why we are facing this kind of performance regression.
Does this mean that I could try to patch the ovirt_hosted_engine_ha/lib/util.py file also in my 4.1.1 install and verify if it solves...? Or do I have to wait for an official patched rpm? Gianluca

On Wed, Apr 26, 2017 at 4:52 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Wed, Apr 26, 2017 at 4:28 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On 4.1.1, the 96% of the cpu time of ovirt-ha-agent is still spent in connect() in /usr/lib/python2.7/site-packages/vdsm/jsonrpcvdscli.py and the 95.98% is in Schema.__init__ in /usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py
So it's still the parsing of the api yaml schema. On master we already merged a patch to reuse an existing connection if available and this should mitigate/resolve the issue: https://gerrit.ovirt.org/73757/
It's still not that clear why we are facing this kind of performance regression.
Does this mean that I could try to patch the ovirt_hosted_engine_ha/lib/util.py file also in my 4.1.1 install and verify if it solves...?
Unfortunately it's not enough by itself since it also requires https://gerrit.ovirt.org/#/c/58029/
Or do I have to wait an official patched rpm?
Gianluca

On Wed, Apr 26, 2017 at 4:54 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Apr 26, 2017 at 4:52 PM, Gianluca Cecchi < gianluca.cecchi@gmail.com> wrote:
On Wed, Apr 26, 2017 at 4:28 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On 4.1.1, the 96% of the cpu time of ovirt-ha-agent is still spent in connect() in /usr/lib/python2.7/site-packages/vdsm/jsonrpcvdscli.py and the 95.98% is in Schema.__init__ in /usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py
So it's still the parsing of the api yaml schema. On master we already merged a patch to reuse an existing connection if available and this should mitigate/resolve the issue: https://gerrit.ovirt.org/73757/
It's still not that clear why we are facing this kind of performance regression.
Does this mean that I could try to patch the ovirt_hosted_engine_ha/lib/util.py file also in my 4.1.1 install and verify if it solves...?
Unfortunately it's not enough by itself since it also requires https://gerrit.ovirt.org/#/c/58029/
Or do I have to wait an official patched rpm?
Gianluca
And what if I change all the 3 files involved in the 2 gerrit entries:
ovirt_hosted_engine_ha/lib/util.py
lib/yajsonrpc/stomp.py
lib/yajsonrpc/stompreactor.py
?

On Wed, Apr 26, 2017 at 6:14 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On Wed, Apr 26, 2017 at 4:54 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Apr 26, 2017 at 4:52 PM, Gianluca Cecchi < gianluca.cecchi@gmail.com> wrote:
On Wed, Apr 26, 2017 at 4:28 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On 4.1.1, the 96% of the cpu time of ovirt-ha-agent is still spent in connect() in /usr/lib/python2.7/site-packages/vdsm/jsonrpcvdscli.py and the 95.98% is in Schema.__init__ in /usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py
So it's still the parsing of the api yaml schema. On master we already merged a patch to reuse an existing connection if available and this should mitigate/resolve the issue: https://gerrit.ovirt.org/73757/
It's still not that clear why we are facing this kind of performance regression.
Does this mean that I could try to patch the ovirt_hosted_engine_ha/lib/util.py file also in my 4.1.1 install and verify if it solves...?
Unfortunately it's not enough by itself since it also requires https://gerrit.ovirt.org/#/c/58029/
Or do I have to wait an official patched rpm?
Gianluca
And what if I change all the 3 files involved in the 2 gerrit entries: ovirt_hosted_engine_ha/lib/util.py lib/yajsonrpc/stomp.py lib/yajsonrpc/stompreactor.py
?
Yes, it should work although we never officially back-ported and tested it for 4.1.z.


On Fri, Oct 7, 2016 at 3:25 PM, Michal Skrivanek < michal.skrivanek@redhat.com> wrote:
On 7 Oct 2016, at 14:59, Nir Soffer <nsoffer@redhat.com> wrote:
On Fri, Oct 7, 2016 at 3:52 PM, Michal Skrivanek < michal.skrivanek@redhat.com> wrote:
On 7 Oct 2016, at 14:42, Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Oct 5, 2016 at 1:33 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Oct 5, 2016 at 10:34 AM, Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Oct 5, 2016 at 10:24 AM, Simone Tiraboschi <stirabos@redhat.com
wrote:
On Wed, Oct 5, 2016 at 9:17 AM, gregor <gregor_forum@catrix.at> wrote:
Hi,
did you find a solution or cause for this high CPU usage? I have installed the self hosted engine on another server and there is no VM running but ovirt-ha-agent heavily uses the CPU.
Yes, it's due to the fact that ovirt-ha-agent periodically reconnects over json rpc and this is CPU intensive since the client has to parse the yaml API specification each time it connects.
wasn’t it suppose to be fixed to reuse the connection? Like all the other clients (vdsm migration code:-)
This is orthogonal issue.
Yes it is. And that’s the issue;-) Both are wrong, but by “fixing” the schema validation only you lose the motivation to fix the meaningless wasteful reconnect
Yes, we are going to fix that too ( https://bugzilla.redhat.com/show_bug.cgi?id=1349829 ) but it would require also https://bugzilla.redhat.com/show_bug.cgi?id=1376843 to be fixed.
Does schema validation matter then if there would be only one connection at the start up?
Loading once does not help command line tools like vdsClient, hosted-engine and vdsm-tool.
none of the other tools is using json-rpc.
hosted-engine-setup is, and sooner or later we'll have to migrate also the remaining tools since xmlrpc has been deprecated with 4.0
Nir
Simone, reusing the connection is a good idea anyway, but what you describe is a bug in the client library. The library does *not* need to load and parse the schema at all for sending requests to vdsm.
The schema is only needed if you want to verify request parameters, or provide online help, these are not needed in a client library.
Please file an infra bug about it.
Here is a patch that should eliminate most of the problem: https://gerrit.ovirt.org/65230
Would be nice if it can be tested on the system showing this problem.
Cheers,
Nir

_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
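Nir's point that the schema is only needed for parameter verification or online help maps naturally to lazy loading. A rough sketch, with made-up names that do not reflect the real vdsmapi classes:

import yaml

class LazySchema(object):
    """Parse the schema only when somebody really needs it."""

    def __init__(self, path):
        self._path = path
        self._schema = None              # nothing parsed at construction time

    def _load(self):
        if self._schema is None:         # first real use pays the cost, once
            loader = getattr(yaml, "CLoader", yaml.Loader)
            with open(self._path) as f:
                self._schema = yaml.load(f, Loader=loader)
        return self._schema

    def verify_request(self, method_name):
        # only callers that ask for verification (or help) trigger the parse
        return method_name in self._load()

A client that only sends requests never calls verify_request(), so it never pays the parsing cost at all.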


On Fri, Oct 7, 2016 at 4:02 PM, Michal Skrivanek < michal.skrivanek@redhat.com> wrote:
On 7 Oct 2016, at 15:28, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Fri, Oct 7, 2016 at 3:25 PM, Michal Skrivanek < michal.skrivanek@redhat.com> wrote:
On 7 Oct 2016, at 14:59, Nir Soffer <nsoffer@redhat.com> wrote:
On Fri, Oct 7, 2016 at 3:52 PM, Michal Skrivanek < michal.skrivanek@redhat.com> wrote:
On 7 Oct 2016, at 14:42, Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Oct 5, 2016 at 1:33 PM, Simone Tiraboschi <stirabos@redhat.com> wrote:
On Wed, Oct 5, 2016 at 10:34 AM, Nir Soffer <nsoffer@redhat.com> wrote:
On Wed, Oct 5, 2016 at 10:24 AM, Simone Tiraboschi < stirabos@redhat.com> wrote:
On Wed, Oct 5, 2016 at 9:17 AM, gregor <gregor_forum@catrix.at> wrote:
> Hi,
>
> did you find a solution or cause for this high CPU usage?
> I have installed the self hosted engine on another server and there is
> no VM running but ovirt-ha-agent heavily uses the CPU.
Yes, it's due to the fact that ovirt-ha-agent periodically reconnects over json rpc and this is CPU intensive since the client has to parse the yaml API specification each time it connects.
wasn’t it suppose to be fixed to reuse the connection? Like all the other clients (vdsm migration code:-)
This is orthogonal issue.
Yes it is. And that’s the issue;-) Both are wrong, but by “fixing” the schema validation only you lose the motivation to fix the meaningless wasteful reconnect
Yes, we are going to fix that too (https://bugzilla.redhat.com/show_bug.cgi?id=1349829)
that’s great! Also all the other vdsClient uses?:-)
https://gerrit.ovirt.org/#/c/62729/
What is that periodic one call anyway? Is there only one? Maybe we don’t need it so much.
Currently ovirt-ha-agent is periodically reconnecting the hosted-engine storage domain and checking its status. This is already on jsonrpc. In 4.1 all the monitoring will be moved to jsonrpc.
but it would require also https://bugzilla.redhat.com/show_bug.cgi?id=1376843 to be fixed.
This is less good. Well, worst case you can reconnect yourself, all you need is a notification when the existing connection breaks
Does schema validation matter then if there would be only one connection at the start up?
Loading once does not help command line tools like vdsClient, hosted-engine and vdsm-tool.
none of the other tools is using json-rpc.
hosted-engine-setup is, and sooner or later we'll have to migrate also the remaining tools since xmlrpc has been deprecated with 4.0
ok. though setup is a one-time action so it’s not an issue there
Nir
Simone, reusing the connection is a good idea anyway, but what you describe is a bug in the client library. The library does *not* need to load and parse the schema at all for sending requests to vdsm.
The schema is only needed if you want to verify request parameters, or provide online help, these are not needed in a client library.
Please file an infra bug about it.
Here is a patch that should eliminate most of the problem: https://gerrit.ovirt.org/65230
Would be nice if it can be tested on the system showing this problem.
Cheers,
Nir

_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
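Michal's "worst case you can reconnect yourself, all you need is a notification when the existing connection breaks" can also be sketched in a few lines. Again this is only an illustration with toy classes, not ovirt-ha-agent or vdsm code:

class FakeConnection(object):
    """Toy connection with just what the example needs: a flag and a call."""
    def __init__(self):
        self.connected = True
    def ping(self):
        return "pong"

class ReconnectingClient(object):
    """Keep one connection; rebuild it only when it is reported as broken."""
    def __init__(self, factory, on_disconnect=None):
        self._factory = factory
        self._on_disconnect = on_disconnect
        self._client = factory()            # connect once at startup

    def call(self, name, *args):
        if not self._client.connected:      # the "notification" point
            if self._on_disconnect:
                self._on_disconnect()
            self._client = self._factory()  # reconnect only now
        return getattr(self._client, name)(*args)

client = ReconnectingClient(FakeConnection)
print(client.call("ping"))          # uses the existing connection
client._client.connected = False
print(client.call("ping"))          # detects the break and reconnects once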

--Apple-Mail-E4495591-332A-4D66-8A14-9A9B083028DA Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: base64 DQoNCj4gT24gMDcgT2N0IDIwMTYsIGF0IDE2OjEwLCBTaW1vbmUgVGlyYWJvc2NoaSA8c3RpcmFi b3NAcmVkaGF0LmNvbT4gd3JvdGU6DQo+IA0KPiANCj4gDQo+PiBPbiBGcmksIE9jdCA3LCAyMDE2 IGF0IDQ6MDIgUE0sIE1pY2hhbCBTa3JpdmFuZWsgPG1pY2hhbC5za3JpdmFuZWtAcmVkaGF0LmNv bT4gd3JvdGU6DQo+PiANCj4+PiBPbiA3IE9jdCAyMDE2LCBhdCAxNToyOCwgU2ltb25lIFRpcmFi b3NjaGkgPHN0aXJhYm9zQHJlZGhhdC5jb20+IHdyb3RlOg0KPj4+IA0KPj4+IA0KPj4+IA0KPj4+ IE9uIEZyaSwgT2N0IDcsIDIwMTYgYXQgMzoyNSBQTSwgTWljaGFsIFNrcml2YW5layA8bWljaGFs LnNrcml2YW5la0ByZWRoYXQuY29tPiB3cm90ZToNCj4+Pj4gDQo+Pj4+PiBPbiA3IE9jdCAyMDE2 LCBhdCAxNDo1OSwgTmlyIFNvZmZlciA8bnNvZmZlckByZWRoYXQuY29tPiB3cm90ZToNCj4+Pj4+ IA0KPj4+Pj4+IE9uIEZyaSwgT2N0IDcsIDIwMTYgYXQgMzo1MiBQTSwgTWljaGFsIFNrcml2YW5l ayA8bWljaGFsLnNrcml2YW5la0ByZWRoYXQuY29tPiB3cm90ZToNCj4+Pj4+PiANCj4+Pj4+Pj4g T24gNyBPY3QgMjAxNiwgYXQgMTQ6NDIsIE5pciBTb2ZmZXIgPG5zb2ZmZXJAcmVkaGF0LmNvbT4g d3JvdGU6DQo+Pj4+Pj4+IA0KPj4+Pj4+Pj4gT24gV2VkLCBPY3QgNSwgMjAxNiBhdCAxOjMzIFBN LCBTaW1vbmUgVGlyYWJvc2NoaSA8c3RpcmFib3NAcmVkaGF0LmNvbT4gd3JvdGU6DQo+Pj4+Pj4+ PiANCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4+IE9uIFdlZCwgT2N0IDUsIDIwMTYgYXQgMTA6MzQgQU0s IE5pciBTb2ZmZXIgPG5zb2ZmZXJAcmVkaGF0LmNvbT4gd3JvdGU6DQo+Pj4+Pj4+Pj4+IE9uIFdl ZCwgT2N0IDUsIDIwMTYgYXQgMTA6MjQgQU0sIFNpbW9uZSBUaXJhYm9zY2hpIDxzdGlyYWJvc0By ZWRoYXQuY29tPiB3cm90ZToNCj4+Pj4+Pj4+Pj4gDQo+Pj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4+Pj4g T24gV2VkLCBPY3QgNSwgMjAxNiBhdCA5OjE3IEFNLCBncmVnb3IgPGdyZWdvcl9mb3J1bUBjYXRy aXguYXQ+IHdyb3RlOg0KPj4+Pj4+Pj4+Pj4gSGksDQo+Pj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+Pj4+ IGRpZCB5b3UgZm91bmQgYSBzb2x1dGlvbiBvciBjYXVzZSBmb3IgdGhpcyBoaWdoIENQVSB1c2Fn ZT8NCj4+Pj4+Pj4+Pj4+IEkgaGF2ZSBpbnN0YWxsZWQgdGhlIHNlbGYgaG9zdGVkIGVuZ2luZSBv biBhbm90aGVyIHNlcnZlciBhbmQgdGhlcmUgaXMNCj4+Pj4+Pj4+Pj4+IG5vIFZNIHJ1bm5pbmcg YnV0IG92aXJ0LWhhLWFnZW50IHVzZXMgaGVhdmlseSB0aGUgQ1BVLg0KPj4+Pj4+Pj4+PiANCj4+ Pj4+Pj4+Pj4gWWVzLCBpdCdzIGR1ZSB0byB0aGUgZmFjdCB0aGF0IG92aXJ0LWhhLWFnZW50IHBl cmlvZGljYWxseSByZWNvbm5lY3RzIG92ZXIganNvbiBycGMgYW5kIHRoaXMgaXMgQ1BVIGludGVu c2l2ZSBzaW5jZSB0aGUgY2xpZW50IGhhcyB0byBwYXJzZSB0aGUgeWFtbCBBUEkgc3BlY2lmaWNh dGlvbiBlYWNoIHRpbWUgaXQgY29ubmVjdHMuDQo+Pj4+Pj4gDQo+Pj4+Pj4gd2FzbuKAmXQgaXQg c3VwcG9zZSB0byBiZSBmaXhlZCB0byByZXVzZSB0aGUgY29ubmVjdGlvbj8gTGlrZSBhbGwgdGhl IG90aGVyIGNsaWVudHMgKHZkc20gbWlncmF0aW9uIGNvZGU6LSkgDQo+Pj4+PiANCj4+Pj4+IFRo aXMgaXMgb3J0aG9nb25hbCBpc3N1ZS4NCj4+Pj4gDQo+Pj4+IFllcyBpdCBpcy4gQW5kIHRoYXTi gJlzIHRoZSBpc3N1ZTstKQ0KPj4+PiBCb3RoIGFyZSB3cm9uZywgYnV0IGJ5IOKAnGZpeGluZ+KA nSB0aGUgc2NoZW1hIHZhbGlkYXRpb24gb25seSB5b3UgbG9zZSB0aGUgbW90aXZhdGlvbiB0byBm aXggdGhlIG1lYW5pbmdsZXNzIHdhc3RlZnVsIHJlY29ubmVjdA0KPj4+IA0KPj4+IFllcywgd2Ug YXJlIGdvaW5nIHRvIGZpeCB0aGF0IHRvbyAoIGh0dHBzOi8vYnVnemlsbGEucmVkaGF0LmNvbS9z aG93X2J1Zy5jZ2k/aWQ9MTM0OTgyOSApDQo+PiANCj4+IHRoYXTigJlzIGdyZWF0ISBBbHNvIGFs IHRoZSBvdGhlciB2ZHNDbGllbnQgdXNlcz86LSkNCj4gDQo+IGh0dHBzOi8vZ2Vycml0Lm92aXJ0 Lm9yZy8jL2MvNjI3MjkvDQoNCkNvb2wuIEl0J3Mgbm90IHNvIGltcG9ydGFudCBmb3Igb25lIHRp bWUgYWN0aW9ucyBidXQgd2UgbmVlZCBpdCB0byBiZSBhYmxlIHRvIGRyb3AgeG1scnBjIGZpbmFs bHksIHNvIGl0IGlzIGltcG9ydGFudDopDQoNCj4gIA0KPj4gV2hhdCBpcyB0aGF0IHBlcmlvZGlj IG9uZSBjYWxsIGFueXdheT8gSXMgdGhlcmUgb25seSBvbmU/IE1heWJlIHdlIGRvbuKAmXQgbmVl ZCBpdCBzbyBtdWNoLg0KPiANCj4gQ3VycmVudGx5IG92aXJ0LWhhLWFnZW50IGlzIHBlcmlvZGlj YWxseSByZWNvbm5lY3RpbmcgdGhlIGhvc3RlZC1lbmdpbmUgc3RvcmFnZSBkb21haW4gYW5kIGNo ZWNraW5nIGl0cyBzdGF0dXMuIFRoaXMgaXMgYWxyZWFkeSBvbiBqc29ucnBjLg0KDQpPay4gQXMg bG9uZyBhcyBpdCBpcyBsb3cgZnJlcXVlbmN5IGl0J3Mgb2suIEp1c3QgYmVhciBpbiBtaW5kIGZv 
ciBmdXR1cmUgdGhhdCBzb21lIG9mIHRoZSB2ZHNtIGNhbGxzIG1heSBiZSBoZWF2eSBhbmQgaW1w YWN0IHBlcmZvcm1hbmNlLCByZWdhcmRsZXNzIHRoZSBjb21tdW5pY2F0aW9uIGxheWVyLiANCg0K VGhhbmtzIGFuZCBoYXZlIGEgbmljZSB3ZWVrZW5kLA0KbWljaGFsDQoNCj4gSW4gNC4xIGFsbCB0 aGUgbW9uaXRvcmluZyB3aWxsIGJlIG1vdmVkIHRvIGpzb25ycGMuDQo+ICANCj4+IA0KPj4+IGJ1 dCBpdCB3b3VsZCByZXF1aXJlIGFsc28gaHR0cHM6Ly9idWd6aWxsYS5yZWRoYXQuY29tL3Nob3df YnVnLmNnaT9pZD0xMzc2ODQzIHRvIGJlIGZpeGVkLg0KPj4gDQo+PiBUaGlzIGlzIGxlc3MgZ29v ZC4gV2VsbCwgd29yc3QgY2FzZSB5b3UgY2FuIHJlY29ubmVjdCB5b3Vyc2VsZiwgYWxsIHlvdSBu ZWVkIGlzIGEgbm90aWZpY2F0aW9uIHdoZW4gdGhlIGV4aXN0aW5nIGNvbm5lY3Rpb24gYnJlYWtz DQo+PiANCj4+PiAgDQo+Pj4+IA0KPj4+Pj4gIA0KPj4+Pj4+IERvZXMgc2NoZW1hIHZhbGlkYXRp b24gbWF0dGVyIHRoZW4gaWYgdGhlcmUgd291bGQgYmUgb25seSBvbmUgY29ubmVjdGlvbiBhdCB0 aGUgc3RhcnQgdXA/DQo+Pj4+PiANCj4+Pj4+IExvYWRpbmcgb25jZSBkb2VzIG5vdCBoZWxwIGNv bW1hbmQgbGluZSB0b29scyBsaWtlIHZkc0NsaWVudCwgaG9zdGVkLWVuZ2luZSBhbmQNCj4+Pj4+ IHZkc20tdG9vbC4gDQo+Pj4+IA0KPj4+PiBub25lIG9mIHRoZSBvdGhlciB0b29scyBpcyB1c2lu ZyBqc29uLXJwYy4NCj4+PiANCj4+PiBob3N0ZWQtZW5naW5lLXNldHVwIGlzLCBhbmQgc29vbmVy IG9yIGxhdGVyIHdlJ2xsIGhhdmUgdG8gbWlncmF0ZSBhbHNvIHRoZSByZW1haW5pbmcgdG9vbHMg c2luY2UgeG1scnBjIGhhcyBiZWVuIGRlcHJlY2F0ZWQgd2l0aCA0LjANCj4+IA0KPj4gb2suIHRo b3VnaCBzZXR1cCBpcyBhIG9uZS10aW1lIGFjdGlvbiBzbyBpdOKAmXMgbm90IGFuIGlzc3VlIHRo ZXJlDQo+PiANCj4+PiAgDQo+Pj4+IA0KPj4+Pj4gDQo+Pj4+PiBOaXINCj4+Pj4+ICANCj4+Pj4+ PiANCj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+PiBTaW1vbmUsIHJldXNpbmcgdGhlIGNvbm5lY3Rpb24g aXMgZ29vZCBpZGVhIGFueXdheSwgYnV0IHdoYXQgeW91IGRlc2NyaWJlIGlzIA0KPj4+Pj4+Pj4+ IGEgYnVnIGluIHRoZSBjbGllbnQgbGlicmFyeS4gVGhlIGxpYnJhcnkgZG9lcyAqbm90KiBuZWVk IHRvIGxvYWQgYW5kIHBhcnNlIHRoZQ0KPj4+Pj4+Pj4+IHNjaGVtYSBhdCBhbGwgZm9yIHNlbmRp bmcgcmVxdWVzdHMgdG8gdmRzbS4NCj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+PiBUaGUgc2NoZW1hIGlz IG9ubHkgbmVlZGVkIGlmIHlvdSB3YW50IHRvIHZlcmlmeSByZXF1ZXN0IHBhcmFtZXRlcnMsDQo+ Pj4+Pj4+Pj4gb3IgcHJvdmlkZSBvbmxpbmUgaGVscCwgdGhlc2UgYXJlIG5vdCBuZWVkZWQgaW4g YSBjbGllbnQgbGlicmFyeS4NCj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+PiBQbGVhc2UgZmlsZSBhbiBp bmZyYSBidWcgYWJvdXQgaXQuDQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IERvbmUsIGh0dHBzOi8vYnVn emlsbGEucmVkaGF0LmNvbS9zaG93X2J1Zy5jZ2k/aWQ9MTM4MTg5OQ0KPj4+Pj4+PiANCj4+Pj4+ Pj4gSGVyZSBpcyBhIHBhdGNoIHRoYXQgc2hvdWxkIGVsaW1pbmF0ZSBtb3N0IG1vc3Qgb2YgdGhl IHByb2JsZW06DQo+Pj4+Pj4+IGh0dHBzOi8vZ2Vycml0Lm92aXJ0Lm9yZy82NTIzMA0KPj4+Pj4+ PiANCj4+Pj4+Pj4gV291bGQgYmUgbmljZSBpZiBpdCBjYW4gYmUgdGVzdGVkIG9uIHRoZSBzeXN0 ZW0gc2hvd2luZyB0aGlzIHByb2JsZW0uDQo+Pj4+Pj4+IA0KPj4+Pj4+PiBDaGVlcnMsDQo+Pj4+ Pj4+IE5pcg0KPj4+Pj4+PiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fXw0KPj4+Pj4+PiBVc2VycyBtYWlsaW5nIGxpc3QNCj4+Pj4+Pj4gVXNlcnNAb3ZpcnQu b3JnDQo+Pj4+Pj4+IGh0dHA6Ly9saXN0cy5vdmlydC5vcmcvbWFpbG1hbi9saXN0aW5mby91c2Vy cw0KPj4+Pj4gDQo+Pj4+PiBfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fXw0KPj4+Pj4gVXNlcnMgbWFpbGluZyBsaXN0DQo+Pj4+PiBVc2Vyc0BvdmlydC5vcmcN Cj4+Pj4+IGh0dHA6Ly9saXN0cy5vdmlydC5vcmcvbWFpbG1hbi9saXN0aW5mby91c2Vycw0KPj4+ PiANCj4+Pj4gDQo+Pj4+IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f X19fX19fDQo+Pj4+IFVzZXJzIG1haWxpbmcgbGlzdA0KPj4+PiBVc2Vyc0BvdmlydC5vcmcNCj4+ Pj4gaHR0cDovL2xpc3RzLm92aXJ0Lm9yZy9tYWlsbWFuL2xpc3RpbmZvL3VzZXJzDQo+Pj4gDQo+ Pj4gX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18NCj4+PiBV c2VycyBtYWlsaW5nIGxpc3QNCj4+PiBVc2Vyc0BvdmlydC5vcmcNCj4+PiBodHRwOi8vbGlzdHMu b3ZpcnQub3JnL21haWxtYW4vbGlzdGluZm8vdXNlcnMNCj4gDQo= --Apple-Mail-E4495591-332A-4D66-8A14-9A9B083028DA Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: base64 
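[Editor's note] Nir's other point, that the client library should not parse the YAML API specification just to send requests, can be sketched the same way. Again this is an assumed illustration, not the vdsm code or the patch at https://gerrit.ovirt.org/65230; the class and method names are hypothetical.

# Minimal sketch of lazy schema loading (assumed names): the YAML spec is
# parsed only when something actually needs it (validation, online help),
# so plain request sending stays cheap.
import yaml  # PyYAML


class LazySchema(object):
    def __init__(self, path):
        self._path = path
        self._data = None  # not parsed yet

    @property
    def data(self):
        # Parse the specification on first use only.
        if self._data is None:
            with open(self._path) as f:
                self._data = yaml.safe_load(f)
        return self._data


class Client(object):
    def __init__(self, send, schema):
        self._send = send      # transport callable: (method, params) -> result
        self._schema = schema  # LazySchema; untouched unless validating

    def call(self, method, **params):
        # Sending a request never touches the schema.
        return self._send(method, params)

    def validate(self, method, **params):
        # Only an explicit validation request pays the parsing cost.
        spec = self._schema.data
        return method in spec  # placeholder check for illustration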
participants (8)
- Gianluca Cecchi
- gregor
- Michal Skrivanek
- Michal Skrivanek
- Nir Soffer
- Roy Golan
- Sam Cappello
- Simone Tiraboschi