<div dir="ltr">Maybe found a workaround on the NFS server side: an option for the mountd service.<div><br></div><div><pre>     -S      Tell mountd to suspend/resume execution of the nfsd threads when-
             ever the exports list is being reloaded.  This avoids intermit-
             tent access errors for clients that do NFS RPCs while the exports
             are being reloaded, but introduces a delay in RPC response while
             the reload is in progress.  If mountd crashes while an exports
             load is in progress, mountd must be restarted to get the nfsd
             threads running again, if this option is used.</pre></div><div><br></div><div>So far I have been able to reload the exports list twice without any VM being randomly suspended; let's see whether this is a real solution or whether I just got lucky twice.</div><div><br></div><div>But I am still interested in parameters that make VDSM more tolerant of short interruptions. Instantly suspending a VM after such a short "outage" is not very nice.</div>
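For the record, making the flag stick on the FreeBSD side should just be an rc.conf entry (a sketch; I am assuming the stock rc framework and that mountd_flags currently carries only the default "-r" — check your local setup):

```shell
# /etc/rc.conf -- add -S so mountd suspends the nfsd threads
# for the duration of an exports reload
mountd_flags="-r -S"
```

followed by a `service mountd restart` (or a reboot) to pick the new flags up.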
<div><br></div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jul 24, 2013 at 11:04 PM, squadra <span dir="ltr"><<a href="mailto:squadra@gmail.com" target="_blank">squadra@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi folks,<div><br></div><div>I have a setup running with the following specs:</div><div><br></div><div>4 VM hosts - CentOS 6.4 - latest oVirt 3.2 from dreyou</div>
<div><br></div><div><div>vdsm-xmlrpc-4.10.3-0.36.23.el6.noarch</div>
<div>vdsm-cli-4.10.3-0.36.23.el6.noarch</div><div>vdsm-python-4.10.3-0.36.23.el6.x86_64</div><div>vdsm-4.10.3-0.36.23.el6.x86_64</div></div><div><div>qemu-kvm-rhev-tools-0.12.1.2-2.355.el6.5.x86_64</div><div>qemu-kvm-rhev-0.12.1.2-2.355.el6.5.x86_64</div>
<div>qemu-img-rhev-0.12.1.2-2.355.el6.5.x86_64</div><div>gpxe-roms-qemu-0.9.7-6.9.el6.noarch</div></div><div><br></div><div>The management node is also running the latest 3.2 from dreyou:</div><div><br></div><div><div>ovirt-engine-cli-3.2.0.10-1.el6.noarch</div>
<div>ovirt-engine-jbossas711-1-0.x86_64</div><div>ovirt-engine-tools-3.2.1-1.41.el6.noarch</div><div>ovirt-engine-backend-3.2.1-1.41.el6.noarch</div><div>ovirt-engine-sdk-3.2.0.9-1.el6.noarch</div><div>ovirt-engine-userportal-3.2.1-1.41.el6.noarch</div>
<div>ovirt-engine-setup-3.2.1-1.41.el6.noarch</div><div>ovirt-engine-webadmin-portal-3.2.1-1.41.el6.noarch</div><div>ovirt-engine-dbscripts-3.2.1-1.41.el6.noarch</div><div>ovirt-engine-3.2.1-1.41.el6.noarch</div><div>ovirt-engine-genericapi-3.2.1-1.41.el6.noarch</div>
<div>ovirt-engine-restapi-3.2.1-1.41.el6.noarch</div></div><div><br></div><div><br></div><div>The VMs run from a FreeBSD 9.1 NFS server, which works absolutely flawlessly until I need to reload the /etc/exports file on the NFS server. For this, the NFS server itself does not need to be restarted; only the mountd daemon is HUP'ed.</div>
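Reloading the exports is nothing exotic, just the usual signal (assuming the stock /var/run/mountd.pid location from mountd(8)):

```shell
# make mountd re-read /etc/exports; the rc script's "reload"
# target sends the same SIGHUP
kill -HUP "$(cat /var/run/mountd.pid)"
```

(`/etc/rc.d/mountd reload` should be equivalent.)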
<div><br></div><div>But after sending a HUP to mountd, oVirt immediately thinks there was a problem with the storage backend and suspends some VMs at random. Luckily, these VMs can be resumed instantly without further issues.</div>
<div><br></div><div>The VM hosts don't show any NFS-related errors, so I suspect that vdsm or the engine checks the NFS server continuously.</div><div><div><br></div><div>The only thing I can find in the vdsm.log of an affected host is</div>
<div><br></div><div>-- snip --</div><div><br></div><div><pre>Thread-539::DEBUG::2013-07-24 22:29:46,935::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-539::DEBUG::2013-07-24 22:29:46,935::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-539::DEBUG::2013-07-24 22:29:46,935::task::957::TaskManager.Task::(_decref) Task=`9332cd24-d899-4226-b0a2-93544ee737b4`::ref 0 aborting False
libvirtEventLoop::INFO::2013-07-24 22:29:55,142::libvirtvm::2509::vm.Vm::(_onAbnormalStop) vmId=`244f6c8d-bc2b-4669-8f6d-bd957222b946`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2013-07-24 22:29:55,143::libvirtvm::3079::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`244f6c8d-bc2b-4669-8f6d-bd957222b946`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2013-07-24 22:29:55,143::libvirtvm::2509::vm.Vm::(_onAbnormalStop) vmId=`244f6c8d-bc2b-4669-8f6d-bd957222b946`::abnormal vm stop device virtio-disk0 error eother</pre></div><div><br></div>
<div><br></div><div>-- snip --</div><div><br></div><div>I am a little bit at a dead end currently, since reloading an NFS server's export table isn't an unusual task, and everything else works as expected; oVirt just seems way too picky.</div>
<div><br></div><div>Is there any possibility to make this check a little more tolerant?</div><div><br></div><div>I tried setting "sd_health_check_delay = 30" in vdsm.conf, but this didn't change anything.</div>
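For reference, this is roughly what I put in place (assuming, per vdsm's config defaults, that sd_health_check_delay belongs in the [irs] section and is only read when vdsmd starts; both details are worth verifying against your vdsm version):

```ini
# /etc/vdsm/vdsm.conf
[irs]
# seconds between storage domain health checks
sd_health_check_delay = 30
```

followed by a "service vdsmd restart" on the host, since vdsm does not re-read the file at runtime.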
<div><br></div><div>Does anyone have an idea how I can get rid of this annoying problem?</div><div><br></div><div>Cheers,</div><div><br></div><div>Juergen</div><span class="HOEnZb"><font color="#888888"><div><br></div><div><br>
</div>-- <br><pre>Sent from the Delta quadrant using Borg technology!</pre>
</font></span></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><pre>Sent from the Delta quadrant using Borg technology!</pre>
</div>