For it to work you need to have the bricks in replicate mode, with one brick on each server.

If you only have two nodes, the quorum will be too low, so Gluster will put the volume into a failsafe (read-only) mode until the other brick comes back online.

For it to work properly you need three nodes with one brick each, or two nodes plus a third node acting as an arbiter (a rough command sketch is below the quoted message).

/Johan

On Thu, 2017-11-09 at 11:35 +0100, Jon bae wrote:
> Hello,
> I'm very new to oVirt and GlusterFS, so maybe I got something wrong...
>
> I have the oVirt engine installed on a separate server, and I also have two physical nodes. On each node I configured GlusterFS; each volume is in distribute mode and has only one brick, coming from that one node. I also added each volume to its own storage domain.
>
> The idea was that both storage domains would be independent of each other, so that I can turn off one node and only turn it on when I need it.
>
> But now I have the problem that when I turn off one node, both storage domains go down, and the volume shows that the brick is not available.
>
> Is there a way to fix this?
>
> Regards
> Jonathan
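Here is a minimal sketch of the two-data-nodes-plus-arbiter layout described above. The hostnames (node1, node2, arbiter1), the volume name "data" and the brick path /gluster/bricks/data are placeholder assumptions, not values from the original setup:

  # Run from node1: join the other hosts to the trusted pool
  gluster peer probe node2
  gluster peer probe arbiter1

  # Create a volume with two full data bricks and one arbiter brick
  # (the arbiter stores only metadata, so a small machine is enough)
  gluster volume create data replica 3 arbiter 1 \
      node1:/gluster/bricks/data \
      node2:/gluster/bricks/data \
      arbiter1:/gluster/bricks/data

  # Enable client- and server-side quorum so a single surviving node
  # cannot keep writing on its own (split-brain protection)
  gluster volume set data cluster.quorum-type auto
  gluster volume set data cluster.server-quorum-type server

  gluster volume start data

With this layout the volume stays writable if any one of the three nodes goes down; only when two bricks are offline does quorum drop and the volume go read-only, which is the failsafe behaviour mentioned above.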