[ovirt-users] ovirt-ha-agent cpu usage

Sam Cappello samc at oracool.net
Fri Oct 7 17:05:59 UTC 2016


It worked for me as well - load average is < 1 now, and ovirt-ha-agent pops up 
in top periodically, but it is no longer at the top using 100% CPU all the time.
Thanks!
Sam

Gianluca Cecchi wrote on 10/7/2016 10:13 AM:
>
>
> On Fri, Oct 7, 2016 at 3:35 PM, Simone Tiraboschi <stirabos at redhat.com> wrote:
>
>
>
>     On Fri, Oct 7, 2016 at 3:22 PM, Gianluca Cecchi
>     <gianluca.cecchi at gmail.com> wrote:
>
>         On Fri, Oct 7, 2016 at 2:59 PM, Nir Soffer <nsoffer at redhat.com> wrote:
>
>
>                 Wasn't it supposed to be fixed to reuse the connection?
>                 Like all the other clients (vdsm migration code :-)
>
>
>             This is an orthogonal issue.
>
>                 Does schema validation matter then if there were only
>                 one connection at startup?
>
>
>             Loading once does not help command line tools like
>             vdsClient, hosted-engine and
>             vdsm-tool.
>
>             Nir
>
>
>>
>>                         Simone, reusing the connection is a good idea
>>                         anyway, but what you describe is
>>                         a bug in the client library. The library does
>>                         *not* need to load and parse the
>>                         schema at all for sending requests to vdsm.
>>
>>                         The schema is only needed if you want to
>>                         verify request parameters or provide online
>>                         help; these are not needed in a client
>>                         library.
>>
>>                         Please file an infra bug about it.
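>>
>>                         As a rough illustration of that idea (a sketch
>>                         only, with a hypothetical Schema class, not the
>>                         actual vdsm client code), the schema could be
>>                         parsed lazily, so that merely sending requests
>>                         never pays the parsing cost:
>>
>>                         import yaml
>>
>>                         class Schema(object):
>>                             def __init__(self, path):
>>                                 self._path = path
>>                                 self._schema = None  # not parsed yet
>>
>>                             @property
>>                             def schema(self):
>>                                 # parse only when parameter verification
>>                                 # or online help actually needs it
>>                                 if self._schema is None:
>>                                     with open(self._path) as f:
>>                                         self._schema = yaml.load(
>>                                             f, Loader=yaml.SafeLoader)
>>                                 return self._schema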
>>
>>
>>                     Done,
>>                     https://bugzilla.redhat.com/show_bug.cgi?id=1381899
>>
>>
>>                 Here is a patch that should eliminate most of the
>>                 problem:
>>                 https://gerrit.ovirt.org/65230
>>
>>                 It would be nice if it could be tested on the system
>>                 showing this problem.
>>
>>                 Cheers,
>>                 Nir
>
>
>
>         Here is a one-minute video of the same system as in the first
>         post, now on 4.0.3, with the same 3 VMs powered on and no
>         particular load.
>         The CPU used by ovirt-ha-agent seems very similar to what it
>         was on 3.6.6.
>
>         https://drive.google.com/file/d/0BwoPbcrMv8mvSjFDUERzV1owTG8/view?usp=sharing
>
>         Enjoy Nir ;-)
>
>         If I can also apply the patch to 4.0.3, I'll check whether the
>         behavior changes.
>         Let me know,
>
>
>     I'm trying it right now.
>     Any other tests will be really appreciated.
>
>     The patch is pretty simple; you can apply it on the fly.
>     You have to shut down ovirt-ha-broker and ovirt-ha-agent; then you
>     can directly edit
>     /usr/lib/python2.7/site-packages/api/vdsmapi.py
>     around line 97, changing
>     loaded_schema = yaml.load(f)
>     to
>     loaded_schema = yaml.load(f, Loader=yaml.CLoader)
>     Please pay attention to keep exactly the same number of leading
>     spaces.
>
>     Then you can simply restart the HA agent and check.
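>
>     For reference, a minimal sketch of the idea behind the change (the
>     surrounding code and the schema_path name are illustrative, not the
>     exact vdsmapi.py source): yaml.CLoader is the libyaml-based C
>     parser and is much faster than the pure-Python loader, but it only
>     exists when PyYAML is built with libyaml, so a defensive variant
>     could fall back to the default loader:
>
>     import yaml
>
>     # prefer the fast libyaml C parser when available, otherwise keep
>     # the pure-Python loader
>     _loader = getattr(yaml, 'CLoader', yaml.Loader)
>
>     schema_path = "vdsm-api.yml"  # illustrative name, not the real path
>
>     with open(schema_path) as f:
>         loaded_schema = yaml.load(f, Loader=_loader)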
>
>         Gianluca
>
>
>
>
>
> What I've done (I didn't read your answer in the meantime, and this is
> a test system, so not that important...)
>
> set to global maintenance
> patch vdsmapi.py
> restart vdsmd
> restart ovirt-ha-agent
> set maintenance to none
>
> And a brand new 3-minute video here:
> https://drive.google.com/file/d/0BwoPbcrMv8mvVzBPUVRQa1pwVnc/view?usp=sharing
>
> It seems that now ovirt-ha-agent is either not present among the top
> CPU processes, or at least ranges between 5% and 12%, not more....
>
> BTW: I see this in vdsm status now
>
> [root at ovirt01 api]# systemctl status vdsmd
> ● vdsmd.service - Virtual Desktop Server Manager
>    Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
>    Active: active (running) since Fri 2016-10-07 15:30:57 CEST; 32min ago
>   Process: 20883 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
>   Process: 20886 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
>  Main PID: 21023 (vdsm)
>    CGroup: /system.slice/vdsmd.service
>            ├─21023 /usr/bin/python /usr/share/vdsm/vdsm
>            ├─21117 /usr/libexec/ioprocess --read-pipe-fd 41 --write-pipe-fd 40 --max-threads 10 --max-queue...
>            ├─21123 /usr/libexec/ioprocess --read-pipe-fd 48 --write-pipe-fd 46 --max-threads 10 --max-queue...
>            ├─21134 /usr/libexec/ioprocess --read-pipe-fd 57 --write-pipe-fd 56 --max-threads 10 --max-queue...
>            ├─21143 /usr/libexec/ioprocess --read-pipe-fd 65 --write-pipe-fd 64 --max-threads 10 --max-queue...
>            ├─21149 /usr/libexec/ioprocess --read-pipe-fd 73 --write-pipe-fd 72 --max-threads 10 --max-queue...
>            ├─21156 /usr/libexec/ioprocess --read-pipe-fd 80 --write-pipe-fd 78 --max-threads 10 --max-queue...
>            ├─21177 /usr/libexec/ioprocess --read-pipe-fd 88 --write-pipe-fd 87 --max-threads 10 --max-queue...
>            ├─21204 /usr/libexec/ioprocess --read-pipe-fd 99 --write-pipe-fd 98 --max-threads 10 --max-queue...
>            └─21239 /usr/libexec/ioprocess --read-pipe-fd 111 --write-pipe-fd 110 --max-threads 10 --max-que...
>
> Oct 07 16:02:52 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Oct 07 16:02:54 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Oct 07 16:02:56 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Oct 07 16:02:58 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Oct 07 16:03:11 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Oct 07 16:03:15 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Oct 07 16:03:15 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Oct 07 16:03:18 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Oct 07 16:03:20 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Oct 07 16:03:22 ovirt01.lutwyn.org vdsm[21023]: vdsm vds.dispatcher ERROR SSL error during reading data... eof
> Hint: Some lines were ellipsized, use -l to show in full.
> [root at ovirt01 api]#
>
> Now I also restarted ovirt-ha-broker just in case
>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
