<div dir="ltr"><br><div class="gmail_extra"><div class="gmail_quote">On Tue, Sep 19, 2017 at 12:44 PM, Alex K <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>A second test did not yield the same result. <br></div>This time the VMs were restarted to another host and when the lost host recovered no VMs were running on it. <br></div>Seems that there is a racing issue somewhere. <br></div></div></blockquote><div><br></div><div>Did you test with the same VM? were the disks + lease located on the same storage domains in both tests? did the VM run on the same host (and if not, is the libvirt + qemu versions different between the two?).</div><div>It may be a racing issue but not necessarily. There is an observation in the bug I mentioned before that it happens only (/more) with certain storage types...</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div></div><div><br></div><div>Thanx, <br></div><div>Alex<br></div><br></div><div class="m_-1812038979085810701HOEnZb"><div class="m_-1812038979085810701h5"><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 19, 2017 at 11:52 AM, Arik Hadas <span dir="ltr"><<a href="mailto:ahadas@redhat.com" target="_blank">ahadas@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span>On Tue, Sep 19, 2017 at 11:41 AM, Alex K <span dir="ltr"><<a href="mailto:rightkicktech@gmail.com" target="_blank">rightkicktech@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div>Hi again, <br><br></div>I performed a different test by isolating one host (say host A) through removing all its network interfaces (thus power management through IPMI was also not avaialble). <br></div>The VMs (with VM lease enabled) were successfully restarted to another host. <br></div>When connecting back the host A, the cluster performed a power management and the host became a member of the cluster. <br></div>The VMs that were running on the host A were found "paused", which is normal. <br></div>After 15 minutes I see that the VMs at host A are still in "paused" state and I would expect that the cluster should decide at some point to shutdown the paused VMs and continue with the VMs that are already running at other hosts. <br><br></div>Is this behavior normal?<br></div></div></div></blockquote><div><br></div></span><div>I believe it is not the expected behavior - the VM should not stay in paused state when its lease expires. 
> Thanx,
> Alex
>
> On Tue, Sep 19, 2017 at 11:52 AM, Arik Hadas <ahadas@redhat.com> wrote:
>> On Tue, Sep 19, 2017 at 11:41 AM, Alex K <rightkicktech@gmail.com> wrote:
>>> Hi again,
>>>
>>> I performed a different test by isolating one host (say host A) by
>>> removing all its network interfaces (so power management through IPMI
>>> was also not available).
>>> The VMs (with VM leases enabled) were successfully restarted on
>>> another host.
>>> When host A was connected back, the cluster performed a power
>>> management action and the host became a member of the cluster again.
>>> The VMs that had been running on host A were found "paused", which is
>>> normal.
>>> After 15 minutes the VMs on host A were still in "paused" state; I
>>> would expect the cluster to decide at some point to shut down the
>>> paused VMs and continue with the VMs that are already running on
>>> other hosts.
>>>
>>> Is this behavior normal?
>>
>> I believe it is not the expected behavior - the VM should not stay in
>> paused state when its lease expires. But we know about this; see
>> comment 9 in [1].
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1459865
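Until that is resolved, cleaning up such leftover paused VMs can be
scripted as a stopgap. A hedged sketch with ovirtsdk4, assuming the engine
still reports the affected VMs as paused, as in the test above (connection
details are placeholders; this is illustrative, not official oVirt
recovery logic):

# Find VMs reported as paused and power them off, so that the copies
# already restarted on other hosts are the only ones left running.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    vms_service = connection.system_service().vms_service()
    for vm in vms_service.list(search='status=paused'):
        print('powering off paused VM:', vm.name)
        # force=True corresponds to "Power Off" (hard stop) in the UI.
        vms_service.vm_service(vm.id).stop(force=True)
finally:
    connection.close()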
>>> Thanx,
>>> Alex
>>>
>>> On Tue, Sep 19, 2017 at 10:18 AM, Alex K <rightkicktech@gmail.com> wrote:
>>>> Hi All,
>>>>
>>>> Just completed the tests and it works great.
>>>> VM leases are just what I needed.
>>>>
>>>> Thanx,
>>>> Alex
>>>>
>>>> On Tue, Sep 19, 2017 at 10:16 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
>>>>> On Tue, Sep 19, 2017 at 1:00 AM, Alex K <rightkicktech@gmail.com> wrote:
>>>>>> Enabling VM leases could be an answer to this. Will test tomorrow.
>>>>>
>>>>> Indeed. Let us know how it worked for you.
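For anyone landing on this thread: a VM lease is enabled per VM (in the
VM's High Availability settings in the UI, or via the API). A minimal
sketch with ovirtsdk4; the VM and storage domain names are placeholders,
and the engine may require the VM to be down for this change:

# Enable high availability with a VM lease on a chosen storage domain.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    system = connection.system_service()
    sd = system.storage_domains_service().list(search='name=data')[0]
    vms_service = system.vms_service()
    vm = vms_service.list(search='name=myvm')[0]

    vms_service.vm_service(vm.id).update(
        types.Vm(
            high_availability=types.HighAvailability(enabled=True),
            lease=types.StorageDomainLease(
                storage_domain=types.StorageDomain(id=sd.id),
            ),
        )
    )
finally:
    connection.close()

The lease is what lets another host take over the VM even when the
original host cannot be fenced, which is exactly the scenario described
below.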
>>>>>>
>>>>>> Thanx,
>>>>>> Alex
>>>>>>
>>>>>> On Sep 18, 2017 7:50 PM, "Alex K" <rightkicktech@gmail.com> wrote:
>>>>>>> Hi All,
>>>>>>>
>>>>>>> I have the following issue with the HA behavior of oVirt 4.1 and
>>>>>>> need to check with you if there is any workaround from your
>>>>>>> experience.
>>>>>>>
>>>>>>> I have 3 servers (A, B, C) with the hosted engine in a self-hosted
>>>>>>> setup on top of gluster with replica 3 + 1 arbiter. All good
>>>>>>> except one point:
>>>>>>>
>>>>>>> The hosts have been configured with power management using IPMI
>>>>>>> (server iLO).
>>>>>>> If I disconnect power from one host (say C), or disconnect all
>>>>>>> network cables of the host, the two other hosts go into a loop
>>>>>>> where they try to verify the status of host C by issuing power
>>>>>>> management commands to it. Since the power of the host is off,
>>>>>>> the server iLO does not respond on the network, the power
>>>>>>> management of host C fails, and the VMs that were running on
>>>>>>> host C are left in an unknown state and never restarted on the
>>>>>>> other hosts.
>>>>>>>
>>>>>>> Is there any fencing option to change this behavior, so that if
>>>>>>> both available hosts fail to perform power management of the
>>>>>>> unresponsive host, they decide that the host is down and restart
>>>>>>> its VMs on the other available hosts?
>>>>>
>>>>> No, this is a bad assumption. Perhaps they are the ones isolated
>>>>> from it?
>>>>> Y.
>>>>>
>>>>>>> I could also add additional power management through UPS to avoid
>>>>>>> this issue, but this is not currently an option and I am
>>>>>>> interested to see if this behavior can be tweaked.
>>>>>>>
>>>>>>> Thanx,
>>>>>>> Alex
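When debugging such a fencing loop, it can help to query the host's power
management status from a script, roughly what the engine does in its own
fencing flow. A sketch with ovirtsdk4; connection details and the host
name 'hostC' are placeholders:

# Ask the engine to run a fence "status" check against a host's power
# management agent (e.g. iLO). If the iLO itself lost power, this check
# fails instead of returning 'off', which is the loop described above.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
try:
    hosts_service = connection.system_service().hosts_service()
    host = hosts_service.list(search='name=hostC')[0]
    pm = hosts_service.host_service(host.id).fence(fence_type='status')
    print('power management status:', pm.status)
finally:
    connection.close()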

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users