<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 14, 2016 at 10:35 AM, Nir Soffer <span dir="ltr"><<a href="mailto:nsoffer@redhat.com" target="_blank">nsoffer@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="gmail-">On Wed, Dec 14, 2016 at 9:02 AM, TranceWorldLogic .<br>
<<a href="mailto:tranceworldlogic@gmail.com">tranceworldlogic@gmail.com</a>> wrote:<br>
> Hi,<br>
><br>
> I was trying to learn more about the fencing options supported in oVirt,<br>
> but I am getting lost in the documentation.<br>
><br>
> My requirement is to fence at the VM level rather than the host level.<br>
> e.g. let's assume VM1.1, VM1.2, and VM1.3 are running on host1, and VM2.1, VM2.2,<br>
> and VM2.3 are running on host2. Suppose that due to some error only VM1.1 goes<br>
> down [note: VM1.2 and VM1.3 are still running]; then VM2.1 must come up.<br>
><br>
> Can you tell me whether such functionality is supported by oVirt?<br>
> If yes, would you please explain how it works,<br>
> or share a reference I can read to understand it?<br>
> If yes, is it configurable via the Python SDK?<br>
<br>
</span>Fencing at the VM level is not available yet, but will be possible<br></blockquote><div><br></div><div>If the intention is to fence a VM because an external HA cluster controls it, this is possible using fence_rhevm[1], which is part of the fence-agents package.</div><div>Note that it currently has a bug[2], which means it does not work with 4.0 (it's an easy fix, though).</div><div>Y.</div><div><br></div><div>[1] <a href="https://github.com/ClusterLabs/fence-agents/blob/master/fence/agents/rhevm/fence_rhevm.py">https://github.com/ClusterLabs/fence-agents/blob/master/fence/agents/rhevm/fence_rhevm.py</a></div><div>[2] <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1402860">https://bugzilla.redhat.com/show_bug.cgi?id=1402860</a></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
once we have vm-leases:<br>
<a href="https://www.ovirt.org/develop/release-management/features/storage/vm-leases/" rel="noreferrer" target="_blank">https://www.ovirt.org/develop/<wbr>release-management/features/<wbr>storage/vm-leases/</a><br>
<br>
On top of this, we will support automatic failover of VMs:<br>
<a href="https://github.com/oVirt/ovirt-site/pull/586/commits/77669161397ebf4cc15c66e0e6876bc033384cfc" rel="noreferrer" target="_blank">https://github.com/oVirt/<wbr>ovirt-site/pull/586/commits/<wbr>77669161397ebf4cc15c66e0e6876b<wbr>c033384cfc</a><br>
<br>
This should be available in 4.1; it is currently blocked by a libvirt bug:<br>
<a href="https://bugzilla.redhat.com/1403691" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/<wbr>1403691</a><br>
<br>
Once a VM has a lease, you can revoke the lease from another host, causing<br>
sanlock on the original host to terminate the VM via the sanlock_request() API.<br>
<br>
Once this is implemented, we can expose it via the REST API and the SDK.<br>
<br>
Nir<br>
<br>
><br>
> Thanks,<br>
> ~Rohit<br>
><br>
> ______________________________<wbr>_________________<br>
> Users mailing list<br>
> <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
> <a href="http://lists.phx.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.phx.ovirt.org/<wbr>mailman/listinfo/users</a><br>
><br>
</blockquote></div><br></div></div>
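As a concrete illustration of the external-HA approach mentioned in the thread: fence_rhevm is invoked like any other fence agent, targeting a VM ("plug") through the engine's API. This is a hedged sketch, not from the original thread — the engine address, credentials, and VM name below are placeholders, and option names follow the standard fence-agents command line.

```shell
# Sketch: query a VM's power status through the oVirt engine using
# fence_rhevm. All values are placeholders; an external HA cluster
# (e.g. Pacemaker) would run the same agent with --action=off/on/reboot
# to fence the VM instead of the host.
fence_rhevm --ip=engine.example.com --ssl \
    --username=admin@internal --password=changeme \
    --plug=VM2.1 --action=status
```

In a Pacemaker setup the same parameters would typically be set on a STONITH resource rather than passed by hand, so the cluster can fence VM1.1 automatically before starting VM2.1.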