<div dir="ltr">Do you know when .34 will be released?<div><br></div><div><a href="http://mirror.centos.org/centos/7/virt/x86_64/ovirt-3.6/">http://mirror.centos.org/centos/7/virt/x86_64/ovirt-3.6/</a><br></div><div>Latest version is:</div><div><table style="font-family:"dejavu sans","liberation sans",sans-serif"><tbody><tr><td>vdsm-cli-4.17.32-1.el7.noarch.rpm</td><td align="right">08-Aug-2016 17:36</td></tr></tbody></table></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Oct 14, 2016 at 1:11 AM, Francesco Romani <span dir="ltr"><<a href="mailto:fromani@redhat.com" target="_blank">fromani@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5"><br>
----- Original Message -----
> From: "Simone Tiraboschi" <stirabos@redhat.com>
> To: "Steve Dainard" <sdainard@spd1.com>, "Francesco Romani" <fromani@redhat.com>
> Cc: "users" <users@ovirt.org>
> Sent: Friday, October 14, 2016 9:59:49 AM
> Subject: Re: [ovirt-users] Ovirt Hypervisor vdsm.Scheduler logs fill partition
>
> On Fri, Oct 14, 2016 at 1:12 AM, Steve Dainard <sdainard@spd1.com> wrote:
>
> > Hello,
> >
> > I had a hypervisor semi-crash this week: 4 of ~10 VMs continued to run,
> > but the others were killed off somehow, and all VMs running on this host
> > had '?' status in the oVirt UI.
> >
> > This appears to have been caused by vdsm logs filling up the disk space
> > on the logging partition.
> >
> > I've attached the log file vdsm.log.27.xz which shows this error:
> >
> > vdsm.Scheduler::DEBUG::2016-10-11
> > 16:42:09,318::executor::216::Executor::(_discard)
> > Worker discarded: <Worker name=periodic/3017 running <Operation
> > action=<VmDispatcher operation=<class
> > 'virt.periodic.DriveWatermarkMonitor'>
> > at 0x7f8e90021210> at 0x7f8e90021250> discarded at 0x7f8dd123e850>
> >
> > which happens more and more frequently throughout the log.
> >
> > It was a bit difficult to understand what caused the failure: the logs
> > were getting really large, then being xz'd, which compressed 11G+ into a
> > few MB. Once that happened the disk space would be freed again, so nagios
> > wouldn't hit its 3rd check and throw a warning until pretty much right at
> > the crash.
> >
> > I was able to restart vdsmd to resolve the issue, but I still need to know
> > why these logs started to stack up so I can avoid this in the future.
> >
>
> We had this one: https://bugzilla.redhat.com/show_bug.cgi?id=1383259
> but in your case the logs are rotating.
> Francesco?

Hi,

Yes, it is a different issue. Here the log messages are caused by the Worker threads
of the periodic subsystem, which are leaking [1].
This was a bug in Vdsm (insufficient protection against rogue domains), but the
real problem is that some of your domains are unresponsive at the hypervisor level.
The most likely cause is, in turn, unresponsive storage.

Fixes have been committed and shipped with Vdsm 4.17.34.

See: https://bugzilla.redhat.com/1364925

HTH,

+++

[1] Actually, they are replaced too quickly, leading to unbounded growth.
So the workers aren't really "leaking"; Vdsm is just overzealous in handling one
error condition, making things worse than before.
It is still a serious issue, no doubt, but with quite a different cause.
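
For illustration only (a hand-written sketch, not vdsm's actual executor code; the
names Executor, slow_storage_call and TIMEOUT below are made up), this is the failure
mode in miniature: a scheduler that gives up on a stalled worker and immediately
starts a replacement piles up blocked threads, and "Worker discarded" messages,
without bound whenever the underlying call never returns.

# Illustrative sketch only -- not vdsm's code. Each "periodic tick" discards
# the worker stuck on an unresponsive call and starts a fresh one, so blocked
# threads (and "Worker discarded" log lines) accumulate without bound.
import threading
import time

TIMEOUT = 0.5   # how long the scheduler waits before discarding a worker


def slow_storage_call():
    # Stand-in for a drive watermark check against an unresponsive storage domain.
    time.sleep(3600)


class Executor:
    def __init__(self):
        self._workers = []

    def dispatch(self, operation):
        worker = threading.Thread(target=operation, daemon=True)
        worker.start()
        self._workers.append(worker)
        worker.join(TIMEOUT)
        if worker.is_alive():
            # "Discarded" only means the scheduler stops waiting for it; the
            # thread itself is still blocked and still consumes resources.
            print("Worker discarded:", worker.name)

    def blocked_workers(self):
        return sum(1 for w in self._workers if w.is_alive())


if __name__ == "__main__":
    executor = Executor()
    for _ in range(5):          # five periodic ticks
        executor.dispatch(slow_storage_call)
    print("blocked workers still alive:", executor.blocked_workers())

Running it prints five "Worker discarded" lines and reports five threads still alive,
which mirrors the ever-growing stream of discard messages in the attached vdsm.log.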
<span class="HOEnZb"><font color="#888888"><br>
--<br>
Francesco Romani<br>
Red Hat Engineering Virtualization R & D<br>
Phone: 8261328<br>
IRC: fromani<br>
</font></span></blockquote></div><br></div>