<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jan 1, 2018 at 12:50 AM, Andrei V <span dir="ltr"><<a href="mailto:andreil1@starlett.lv" target="_blank">andreil1@starlett.lv</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi !<br>
<br>
I'm installing a 2-node failover cluster (2 x Xeon servers with local RAID<br>
5 / ext4 for oVirt storage domains).<br>
Now I have a dilemma - should I use GlusterFS replica 2 or stick with NFS?<br></blockquote><div><br></div><div>Replica 2 is not good enough, as it can leave you with split-brain; replica 3 (or replica 2 plus an arbiter) is the usual recommendation. This has been discussed on the mailing list several times.</div><div>How do you plan to achieve HA with NFS? With DRBD?</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
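</blockquote><div><br></div><div>If you do go with Gluster, a replica 3 arbiter volume avoids the split-brain risk at roughly replica 2 storage cost, since the arbiter brick stores metadata only. A rough CLI sketch - the hostnames, brick paths, and volume name are all hypothetical:</div>

```shell
# Hypothetical layout: node1/node2 hold the data bricks; a third,
# small box (arbiter1) only breaks ties, so it needs little disk.
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/gluster/vmstore/brick \
    node2:/gluster/vmstore/brick \
    arbiter1:/gluster/vmstore/brick

# Apply the virt option group (the tuning profile shipped for
# VM-image workloads), then start the volume.
gluster volume set vmstore group virt
gluster volume start vmstore
```

<div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">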
<br>
4.2 Engine is running on separate hardware.<br></blockquote><div><br></div><div>Is the Engine also highly available?</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Each node has its own storage domain (on internal RAID).<br></blockquote><div><br></div><div>So some sort of replica 1 with geo-replication between them?</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
All VMs must be highly available.<br></blockquote><div><br></div><div>Without shared storage, it may be tricky.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
One of the VMs, an accounting/stock control system with FireBird SQL<br>
server on CentOS, is speed-critical.<br></blockquote><div><br></div><div>But is IO the bottleneck? Are you using SSDs / NVMe drives? </div><div>I'm not familiar enough with FireBird SQL server - does it have application-layer replication you might opt to use?</div><div>In that case, you could pass through an NVMe disk and have the application layer perform the replication between the nodes.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
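</blockquote><div><br></div><div>On the "is IO the bottleneck" question: one quick, illustrative way to check is to time small synchronous writes on each candidate storage path, since a write-plus-fsync cycle is roughly what a SQL server does per commit. This is a generic sketch, not anything Firebird-specific; the block size and iteration count are arbitrary:</div>

```python
import os
import tempfile
import time

def fsync_latency_ms(path, iterations=50, block=8192):
    """Average latency of a small write+fsync cycle, in milliseconds.

    Roughly mimics a database commit pattern; useful for comparing
    local RAID vs. a Gluster/NFS mount for a commit-heavy workload.
    """
    buf = os.urandom(block)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, buf)
            os.fsync(fd)  # force the write out to stable storage
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return elapsed / iterations * 1000.0

if __name__ == "__main__":
    # Point this at a file on the storage you want to evaluate.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        probe = tmp.name
    try:
        print("avg write+fsync: %.2f ms" % fsync_latency_ms(probe))
    finally:
        os.unlink(probe)
```

<div><br></div><div>Running the same probe on the local RAID and on a Gluster/NFS mount should show whether storage latency, rather than CPU, is what the database would feel.</div><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">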
No load balancing between nodes is necessary. The 2nd is just a backup if the 1st<br>
for whatever reason goes up in smoke. All VM disks must be replicated to the<br>
backup node in near real-time, or in the worst case every 1-2 hours.<br>
GlusterFS solves this issue, yet at a high performance penalty.<br></blockquote><div><br></div><div>The problem with a passive backup is that you never know it'll really work when needed. This is why active-active is often preferred.</div><div>It's also usually more cost-effective - rather than having hardware lying around idle.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
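</blockquote><div><br></div><div>For the "worst case every 1-2 hours" tier, a plain cron + rsync job is one low-tech sketch - the paths, hostname, and schedule are assumptions. Note that copying the disk image of a running VM this way gives a crash-consistent copy at best; snapshotting or pausing the VM first would be safer:</div>

```shell
# Hypothetical crontab entry on node1: push VM images to node2 hourly.
# --sparse keeps holes in the images sparse on the target,
# --partial lets an interrupted transfer resume on the next run.
0 * * * * rsync -a --sparse --partial /data/images/ node2:/data/images/
```

<div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">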
<br>
From what I read here<br>
<a href="http://lists.ovirt.org/pipermail/users/2017-July/083144.html" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>pipermail/users/2017-July/<wbr>083144.html</a><br>
GlusterFS performance with oVirt is not very good right now because QEMU<br>
uses FUSE instead of libgfapi.<br>
<br>
What is the optimal way to go?<br></blockquote><div><br></div><div>It's hard to answer without additional details.</div><div>Y.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
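</blockquote><div><br></div><div>Regarding the libgfapi point: recent oVirt versions have an engine-side switch to use native gfapi access instead of the FUSE mount. Whether it applies cleanly to your exact version is something to verify on a test setup first - treat this as a sketch:</div>

```shell
# On the engine host (oVirt 4.2 era); takes effect after an engine restart.
engine-config -s LibgfApiSupported=true --cver=4.2
systemctl restart ovirt-engine
```

<div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">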
Thanks in advance.<br>
Andrei<br>
<br>
______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/users</a><br>
</blockquote></div><br></div></div>