<html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /></head><body style='font-size: 10pt; font-family: Verdana,Geneva,sans-serif'>
<p>On 2015-10-12 14:04, Nir Soffer wrote:</p>
<blockquote type="cite" style="padding: 0 0.4em; border-left: #1010ff 2px solid; margin: 0"><!-- html ignored --><!-- head ignored --><!-- meta ignored -->
<div class="pre" style="margin: 0; padding: 0; font-family: monospace">
<blockquote type="cite" style="padding: 0 0.4em; border-left: #1010ff 2px solid; margin: 0">On Mon, Oct 12, 2015 at 11:14 AM, Nico <<a href="mailto:gluster@distran.org">gluster@distran.org</a>> wrote:</blockquote>
<br /> Yes, engine will let you use such a volume in 3.5 - this is a bug. In 3.6 you will<br /> not be able to use such a setup.<br /> <br /> replica 2 fails in a very bad way when one brick is down; the application may get<br /> stale data, and this breaks sanlock. You will get stuck with an SPM that cannot be<br /> stopped, and other fun stuff.<br /> <br /> You don't want to go in this direction, and we will not be able to support that.<br /> </div>
</blockquote>
<p> </p>
<p>For the record, I already rebooted node1, and node2 took over the VMs that were running on node1, and vice versa.</p>
<p>GlusterFS worked fine and the oVirt application kept working; I guess this is because it was a soft reboot, which shuts the services down gracefully.</p>
<p>I had another case where I broke the network on both nodes simultaneously after a bad manipulation in the oVirt GUI, and I ended up with a split-brain.</p>
<p>I kept the output from that moment:</p>
<pre style="margin: 0; padding: 0; font-family: monospace">[root@devnix-virt-master02 nets]# gluster volume heal ovirt info split-brain
Brick devnix-virt-master01:/gluster/ovirt/
/d44ee4b0-8d36-467a-9610-c682a618b698/dom_md/ids
Number of entries in split-brain: 1

Brick devnix-virt-master02:/gluster/ovirt/
/d44ee4b0-8d36-467a-9610-c682a618b698/dom_md/ids
Number of entries in split-brain: 1</pre>
<p> </p>
<p>The file had the same size on both nodes, so it was hard to pick one. In the end I chose the more recent copy, and everything came back online after the heal.</p>
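<p>In case it helps anyone hitting the same thing: my understanding is that recent GlusterFS releases (3.7 and later) can resolve a split-brain entry from the CLI by policy, which is roughly what I did by hand. A sketch, using the volume name and file path from the output above (check your version supports these subcommands before relying on them):</p>
<pre style="margin: 0; padding: 0; font-family: monospace"># Keep the copy with the newest modification time
# (equivalent to picking the "younger" file by hand):
gluster volume heal ovirt split-brain latest-mtime \
    /d44ee4b0-8d36-467a-9610-c682a618b698/dom_md/ids

# Or pick the copy from a specific brick explicitly:
gluster volume heal ovirt split-brain source-brick \
    devnix-virt-master02:/gluster/ovirt \
    /d44ee4b0-8d36-467a-9610-c682a618b698/dom_md/ids</pre>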
<p>Is this the kind of problem you are talking about with 2 nodes?</p>
<p> </p>
<p>For now, I don't have the budget for a third node, so I'm a bit stuck and disappointed.</p>
<p>I do have a third machine, but it is for backups: lots of storage but a weak CPU (no VT-x), so I can't use it as a hypervisor.</p>
<p>Maybe I could use it as a third brick, but is that kind of configuration possible? Two active nodes as hypervisors, and a third one serving only as a Gluster brick, to get replica 3?</p>
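<p>To make the idea concrete: as far as I understand, the third box would only need to run glusterd, not a hypervisor. A sketch of what I mean, run from one of the existing nodes (the hostname devnix-backup01 and brick path /gluster/ovirt on the backup machine are made up for the example):</p>
<pre style="margin: 0; padding: 0; font-family: monospace"># Add the backup machine to the trusted pool, then grow replica 2 -> replica 3:
gluster peer probe devnix-backup01
gluster volume add-brick ovirt replica 3 devnix-backup01:/gluster/ovirt

# If disk on the third box were scarce, an arbiter brick (metadata only,
# GlusterFS >= 3.7, if the installed version supports converting an
# existing volume) would also break the tie without a full data copy:
# gluster volume add-brick ovirt replica 3 arbiter 1 devnix-backup01:/gluster/ovirt</pre>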
<p>Cheers</p>
<p>Nico</p>
<p> </p>
</body></html>