<div dir="ltr">hi Gilad/Martin,<div><br></div><div>Please let me know if you need any further clarifications from my side.</div><div><br></div><div>Thanks</div><div>Kausik</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Fri, Jun 27, 2014 at 3:18 PM, kausik pal <kausikpal.1@gmail.com> wrote:
<div dir="ltr"><div>Hi Gilad/Martin,</div><div><br></div><div>Thanks for the reply.</div><div><br></div><div>Let me explain the scenario in details:-</div><div><br></div><div>Each Host would have local filesystems and can be integrated with oVirt as POSIX filesystems storage.</div>
Assume each server can run at most 6 VMs simultaneously (due to memory/CPU constraints) and we have configured 5 hosts (Node1, Node2, ... Node5).

In the beginning the 5 hosts are configured in the oVirt engine and there are no VMs yet.
Now I create the first VM, named VM1, and place it on Node1. From this point the oVirt scheduler decides which other host to put the replica on. This replica of VM1 will be in a passive state on another host (Node2 as per the diagram). The same procedure applies to the other created VMs as well.
The role of the oVirt scheduler would be to optimize the placement of VMs so that the number of VMs that remain available in case of host failure(s) is maximized. It should also replicate the delta updates to the corresponding passive VMs.
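To make the idea concrete, here is a minimal standalone sketch of the placement heuristic in Python. This is only an illustration of the anti-affinity/balancing logic I have in mind, not the actual oVirt scheduler API; the host names and the 6-VM limit are assumptions taken from the scenario above.

    MAX_VMS_PER_NODE = 6  # assumed running-VM limit per host (scenario above)

    def place_vm(vm, hosts, running, disks):
        """Pick (active_host, replica_host) for a new VM.

        running -- dict host -> count of VMs actively running there
        disks   -- dict host -> count of VM disk copies stored there
                   (active disks + passive replica disks)
        """
        # Prefer hosts with the fewest stored disks to keep storage balanced.
        candidates = sorted(hosts, key=lambda h: disks[h])
        active = next(h for h in candidates if running[h] < MAX_VMS_PER_NODE)
        # Anti-affinity: the passive replica must live on a different host,
        # otherwise a single host failure would take out both copies.
        replica = next(h for h in candidates if h != active)
        running[active] += 1
        disks[active] += 1
        disks[replica] += 1
        return active, replica

    hosts = ["Node1", "Node2", "Node3", "Node4", "Node5"]
    running = {h: 0 for h in hosts}
    disks = {h: 0 for h in hosts}
    for i in range(1, 21):  # 20 VMs, as in the attached diagram
        vm = "VM%d" % i
        print(vm, place_vm(vm, hosts, running, disks))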
As per the scenario, under normal conditions 4 VMs run on each host, leaving headroom to run 2 extra VMs (maximum 6) in case other hosts fail. The underlying local storage on each host holds 4 extra VM images (QCOW2 disks and config files) replicated from different hosts, in addition to the 4 disk files for its own running VMs.
Now suppose that for some reason Node2 and Node4 fail. The replicated passive VMs of the failed nodes are then started on the remaining nodes. In this case most of the VMs are able to start on a different node, except VM7 and VM14, because those VMs have no replicas available on the surviving nodes.
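The matching failover step could look like the following sketch (again hypothetical, under the same assumptions as above): every VM whose active host failed is started from its passive copy, provided the replica host survived and still has a free run slot.

    def fail_over(placements, failed, running, max_per_node=6):
        """placements -- dict vm -> (active_host, replica_host);
        failed -- set of failed host names."""
        lost = []
        for vm, (active, replica) in placements.items():
            if active not in failed:
                continue                    # VM unaffected by the failure
            if replica in failed or running[replica] >= max_per_node:
                lost.append(vm)             # no usable replica, VM stays down
            else:
                running[replica] += 1       # start the passive copy
        return lost  # e.g. ["VM7", "VM14"] when Node2 and Node4 fail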
The maximum availability of the VMs can be calculated using the following formula:

Total No. of VMs running after node failure = ((Total No. of Nodes) - (No. of failed Nodes)) * (Max. No. of VMs that can run per Node)
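For the scenario above this gives (5 - 2) * 6 = 18 VMs, so up to 18 of the 20 VMs can keep running on the 3 surviving nodes, which matches the diagram where only VM7 and VM14 stay down.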
The HA VM reservation is an excellent feature, no doubt, but I think it is only applicable if you have shared storage underneath (NFS, SAN, Gluster, etc.). (Correct me if I'm wrong.)
And yes, GlusterFS does have a distributed-replicated volume type that can replicate VM data across multiple bricks. But in GlusterFS you have to map the replicated bricks at volume-creation time, and the data can only be replicated between those fixed bricks (two of them, or more depending on your replica count).
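For example, a distributed-replicated volume pins its replication pairs when the volume is created; with replica 2, consecutive bricks form the fixed mirror pairs (host names and brick paths below are made up for illustration):

    gluster volume create vmstore replica 2 \
        node1:/export/brick1 node2:/export/brick1 \
        node3:/export/brick2 node4:/export/brick2
    gluster volume start vmstore

Here node1/node2 and node3/node4 mirror each other permanently, which is exactly the rigidity I would like the scheduler-driven approach to avoid.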
The advantage of this feature would be that you can add standalone nodes (odd or even numbers) to the oVirt ecosystem, and the scheduler can place VMs in such a balanced and optimized way that the maximum number of VMs remains available after N node failures out of M nodes (where M > N).
Please share your valuable thoughts regarding the same.

Let me know if you need any further clarifications from my side.

Thanks
Kausik

On Thu, Jun 26, 2014 at 6:23 PM, Gilad Chaplik <gchaplik@redhat.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Kausik,<br>
<br>
If I understand correctly your question, we do support this flow.<br>
In case of failure we migrate highly available VMs to other hosts, but note that there could be a connectivity problem between engine an node, and node can still communicate with the storage,<br>
so to avoid that (split brain) you need to have PM configured for that node, or manually confirm that the host has rebooted.<br>
<br>
We even added a feature lately (3.4's HA reservation [1]) that indicates whether your HA VMs have enough resources in cluster in case of a node failure.<br>
<br>
Thanks,<br>
Gilad.<br>
<br>
[1] <a href="http://www.ovirt.org/Features/HA_VM_reservation" target="_blank">http://www.ovirt.org/Features/HA_VM_reservation</a><br>
<div><div><br>
----- Original Message -----
> From: "kausik pal" <kausikpal.1@gmail.com>
> To: users@ovirt.org
> Sent: Tuesday, June 24, 2014 8:06:10 PM
> Subject: [ovirt-users] Is this kind of VM replication possible using oVirt Scheduler
>
> Hi All,
>
> Please find attached two diagrams describing the placement of the VMs under
> normal conditions and after a two-node failure in a 5-node setup (each node
> can host up to 6 VMs due to memory constraints).
>
> My question is whether this kind of VM replication, as shown in the attached
> diagrams, is possible utilizing the oVirt scheduler.
>
> The benefits of this kind of replication would be the following:
>
> 1. Any number of hosts/nodes (odd or even) can participate in the
> infrastructure.
>
> 2. We can add any number of nodes (odd or even) to re-balance the VM
> placement.
>
> 3. After node failures, the maximum number of VMs that can run on the
> remaining nodes can be calculated with the following formula:
>
> Total No. of VMs running after node failure = ((Total No. of Nodes) - (No. of
> failed Nodes)) * (Max. No. of VMs that can run per Node)
>
> The attached table gives a rough calculation of the maximum number of VMs
> that can run after node failures.
>
> Please let me know if you need any further information from my side.
>
> Thanks
>
> Kausik