<div dir="ltr">I can confirm that I did set it up manually, and I did specify backupvol, and in the "manage domain" storage settings, under mount options I do have backup-volfile-servers=192.168.8.12:192.168.8.13 (and this was done at initial install time).<div><br></div><div>The "use managed gluster" checkbox is NOT checked, and if I check it and save settings, the next time I go in it is not checked.</div><div><br></div><div>--Jim</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 2:08 PM, Charles Kozler <span dir="ltr"><<a href="mailto:ckozleriii@gmail.com" target="_blank">ckozleriii@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">@ Jim - here is my setup, which I will test in a few (brand new cluster) and report back what I find in my tests<div><br></div><div>- 3x servers direct connected via 10Gb</div><div>- 2 of those 3 set up in ovirt as hosts</div><div>- Hosted engine</div><div>- Gluster replica 3 (no arbiter) for all volumes</div><div>- 1x engine volume gluster replica 3 manually configured (not using ovirt managed gluster)</div><div>- 1x datatest volume (20GB) replica 3 manually configured (not using ovirt managed gluster)</div><div>- 1x nfstest domain served from some other server in my infrastructure which, at the time of my original testing, was the master domain</div><div><br></div><div>I tested this earlier and all VMs stayed online. However, the ovirt cluster reported DC/cluster down; all VMs stayed up</div><div><br></div><div>As I am now typing this, can you confirm you set up your gluster storage domain with backupvol? 
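(A quick way to sanity-check that: when the option is active it should be visible in the mount table. The snippet below only pattern-matches a canned example line - the mount point and exact option formatting are placeholders, not taken from your system:)

```shell
# Simulated line of 'mount' output from a host where the backup-volfile-servers
# option is in effect; /mnt/engine is a placeholder mount point.
mount_line='192.168.8.11:/engine on /mnt/engine type fuse.glusterfs (rw,backup-volfile-servers=192.168.8.12:192.168.8.13)'
case "$mount_line" in
  *backup-volfile-servers=*) status="option present" ;;
  *)                         status="option missing" ;;
esac
echo "$status"
```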
Also, confirm you updated hosted-engine.conf with the backupvol mount option as well?</div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 4:22 PM, Jim Kusznir <span dir="ltr"><<a href="mailto:jim@palousetech.com" target="_blank">jim@palousetech.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">So, after reading the first document twice and the second link thoroughly once, I believe that the arbiter volume should be sufficient and count for replica / split brain. E.g., if any one full replica is down, and the arbiter and the other replica are up, then it should have quorum and all should be good.<div><br></div><div>I think my underlying problem has more to do with config than the replica state. That said, I did size the drive on my 3rd node planning to have an identical copy of all data on it, so I'm still not opposed to making it a full replica.</div><div><br></div><div>Did I miss something here?</div><div><br></div><div>Thanks!</div></div><div class="m_-5430270818986214204HOEnZb"><div class="m_-5430270818986214204h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 11:59 AM, Charles Kozler <span dir="ltr"><<a href="mailto:ckozleriii@gmail.com" target="_blank">ckozleriii@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>These can get a little confusing, but this explains it best: <a href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#replica-2-and-replica-3-volumes" target="_blank">https://gluster.readthed<wbr>ocs.io/en/latest/Administrator<wbr>%20Guide/arbiter-volumes-and-q<wbr>uorum/#replica-2-and-replica-3<wbr>-volumes</a></div><div><br></div><div>Basically, in the first paragraph they explain why you can't have HA with quorum 
for 2 nodes. Here is another overview doc that explains some more:</div><div><br></div><div><a href="http://openmymind.net/Does-My-Replica-Set-Need-An-Arbiter/" target="_blank">http://openmymind.net/Does-My-<wbr>Replica-Set-Need-An-Arbiter/</a></div><div><br></div>From my understanding, the arbiter is good for resolving split brain. Quorum and arbiter are two different things, though: quorum is a mechanism to help you **avoid** split brain, and the arbiter is there to help gluster resolve split brain by voting and other internal mechanics (as outlined in link 1). How did you create the volume exactly - what command? It looks to me like you created it with 'gluster volume create replica 2 arbiter 1 {....}' per your earlier mention of "replica 2 arbiter 1". That being said, if you did that and then set up quorum in the volume configuration, this would cause your gluster to halt since quorum was lost (as you saw, until you recovered node 1)<br><div><br></div><div>As you can see from the docs, there is still a corner case for getting into split brain with replica 3, which, again, is where the arbiter would help gluster resolve it</div><div><br></div><div>I need to amend my previous statement: I was told that the arbiter volume does not store data, only metadata. I cannot find anything in the docs backing this up; however, it would make sense for it to be so. That being said, in my setup, I would not include my arbiter or my third node in my ovirt VM cluster component. 
I would keep it completely separate</div><div><br></div></div><div class="m_-5430270818986214204m_-8688080050889145984HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 2:46 PM, Jim Kusznir <span dir="ltr"><<a href="mailto:jim@palousetech.com" target="_blank">jim@palousetech.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I'm now also confused as to what the point of an arbiter is / what it does / why one would use it.</div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 11:44 AM, Jim Kusznir <span dir="ltr"><<a href="mailto:jim@palousetech.com" target="_blank">jim@palousetech.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks for the help!<div><br></div><div>Here's my gluster volume info for the data export/brick (I have 3: data, engine, and iso, but they're all configured the same):</div><div><br></div><div><div>Volume Name: data</div><div>Type: Replicate</div><div>Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e<wbr>42cc59</div><div>Status: Started</div><div>Snapshot Count: 0</div><div>Number of Bricks: 1 x (2 + 1) = 3</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: ovirt1.nwfiber.com:/gluster/br<wbr>ick2/data</div><div>Brick2: ovirt2.nwfiber.com:/gluster/br<wbr>ick2/data</div><div>Brick3: ovirt3.nwfiber.com:/gluster/br<wbr>ick2/data (arbiter)</div><div>Options Reconfigured:</div><div>performance.strict-o-direct: on</div><div>nfs.disable: on</div><div>user.cifs: off</div><div>network.ping-timeout: 30</div><div>cluster.shd-max-threads: 8</div><div>cluster.shd-wait-qlength: 
10000</div><div>cluster.locking-scheme: granular</div><div>cluster.data-self-heal-algorit<wbr>hm: full</div><div>performance.low-prio-threads: 32</div><div>features.shard-block-size: 512MB</div><div>features.shard: on</div><div>storage.owner-gid: 36</div><div>storage.owner-uid: 36</div><div>cluster.server-quorum-type: server</div><div>cluster.quorum-type: auto</div><div>network.remote-dio: enable</div><div>cluster.eager-lock: enable</div><div>performance.stat-prefetch: off</div><div>performance.io-cache: off</div><div>performance.read-ahead: off</div><div>performance.quick-read: off</div><div>performance.readdir-ahead: on</div><div>server.allow-insecure: on</div><div>[root@ovirt1 ~]#</div></div><div><br></div><div><br></div><div>all 3 of my brick nodes ARE also members of the virtualization cluster (including ovirt3). How can I convert it into a full replica instead of just an arbiter?</div><div><br></div><div>Thanks!</div><span class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338HOEnZb"><font color="#888888"><div>--Jim</div></font></span></div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 9:09 AM, Charles Kozler <span dir="ltr"><<a href="mailto:ckozleriii@gmail.com" target="_blank">ckozleriii@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">@Kasturi - Looks good now. Cluster showed down for a moment but VM's stayed up in their appropriate places. Thanks!<div><br></div><div>< Anyone on this list please feel free to correct my response to Jim if its wrong> </div><div><br></div><div>@ Jim - If you can share your gluster volume info / status I can confirm (to the best of my knowledge). 
From my understanding, if you set up the volume with something like 'gluster volume set <vol> group virt' this will configure some quorum options as well. Ex: <a href="http://i.imgur.com/Mya4N5o.png" target="_blank">http://i.imgur.com/Mya4N5o<wbr>.png</a></div><div><br></div><div>While, yes, you are configured for an arbiter node, you're still losing quorum by dropping from 2 -> 1. You would need 4 nodes with 1 being the arbiter to configure quorum, which is in effect 3 writable nodes and 1 arbiter. If one gluster node drops, you still have 2 up. Although in this case, you probably wouldn't need the arbiter at all</div><div><br></div><div>If you are configured that way, you can drop the quorum settings and just let the arbiter run, since you're not using the arbiter node in your VM cluster part (I believe), just the storage cluster part. When using quorum, you need > 50% of the cluster being up at one time. Since you have 3 nodes with 1 arbiter, you're actually losing 1/2, which == 50%, which == degraded / hindered gluster </div><div> </div><div>Again, this is to the best of my knowledge based on other quorum-backed software....and this is what I understand from testing with gluster and ovirt thus far</div></div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 11:53 AM, Jim Kusznir <span dir="ltr"><<a href="mailto:jim@palousetech.com" target="_blank">jim@palousetech.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Huh... OK, how do I convert the arbiter to a full replica, then? I was misinformed when I created this setup. 
I thought the arbiter held enough metadata that it could validate or repudiate any one replica (kinda like the parity drive for a RAID-4 array). I was also under the impression that one replica + arbiter was enough to keep the array online and functional.<span class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637HOEnZb"><font color="#888888"><div><br></div><div>--Jim</div></font></span></div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 5:22 AM, Charles Kozler <span dir="ltr"><<a href="mailto:ckozleriii@gmail.com" target="_blank">ckozleriii@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">@ Jim - you have only two data volumes and lost quorum. The arbiter only stores metadata, no actual files. So yes, you were running in degraded mode, so some operations were hindered.<div><br></div><div>@ Sahina - Yes, this actually worked fine for me once I did that. However, the issue I am still facing is when I go to create a new gluster storage domain (replica 3, hyperconverged) and I tell it "Host to use" and I select that host. If I fail that host, all VMs halt. I do not recall this in 3.6 or early 4.0. This makes it seem like it is "pinning" a node to a volume and vice versa, like you could, for instance, for a singular hyperconverged setup to, e.g., export a local disk via NFS and then mount it via an ovirt domain. But of course, this has its caveats. 
To that end, I am using gluster replica 3; when configuring it I say "host to use: " node 1, then in the connection details I give it node1:/data. I fail node1, and all VMs halt. Did I miss something?</div></div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 2:13 AM, Sahina Bose <span dir="ltr"><<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>To the OP question: when you set up a gluster storage domain, you need to specify backup-volfile-servers=<server<wbr>2>:<server3>, where server2 and server3 also have bricks running. When server1 is down and the volume is mounted again, server2 or server3 are queried to get the gluster volfiles.<br><br></div>@Jim, if this does not work, are you using the 4.1.5 build with libgfapi access? If not, please provide the vdsm and gluster mount logs to analyse.<br><br></div>If VMs go to paused state - this could mean the storage is not available. 
You can check "gluster volume status <volname>" to see if at least 2 bricks are running.<br></div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 11:31 AM, Johan Bernhardsson <span dir="ltr"><<a href="mailto:johan@kafit.se" target="_blank">johan@kafit.se</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>If gluster drops below quorum, so that it has fewer votes than it should, it will stop file operations until quorum is back to normal. If I remember it right, you need two bricks writable for quorum to be met, and the arbiter is only a vote to avoid split brain.</div><div><br></div><div><br></div><div>Basically what you have is a RAID 5 solution without a spare. And when one disk dies it will run in degraded mode. And some RAID systems will stop the array until you have removed the disk or forced it to run anyway. 
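The vote counting behind that can be sketched as a toy calculation (illustrative only; this is not a gluster command, and the brick counts are just the replica 2 + arbiter case from this thread):

```shell
# Toy majority check: with quorum enforced, writes need more than half of the
# bricks in the replica set to be up; the arbiter counts as a full vote.
bricks=3                      # replica 2 + arbiter = 3 voting bricks
up=2                          # one data brick down; one data brick + arbiter remain
needed=$(( bricks / 2 + 1 ))  # strict majority
if [ "$up" -ge "$needed" ]; then echo "quorum met"; else echo "quorum lost"; fi
```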
</div><div><br></div><div>You can read up on it here: <a href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/" target="_blank">https://gluster.readthed<wbr>ocs.io/en/latest/Administrator<wbr>%20Guide/arbiter-volumes-and-q<wbr>uorum/</a></div><span class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390HOEnZb"><font color="#888888"><div><br></div><div>/Johan</div></font></span><div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390h5"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250-x-evo-paragraph m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250-x-evo-top-signature-spacer"><br></div><div>On Thu, 2017-08-31 at 22:33 -0700, Jim Kusznir wrote:</div><blockquote type="cite"><div dir="ltr">Hi all: <div><br></div><div>Sorry to hijack the thread, but I was about to start essentially the same thread.</div><div><br></div><div>I have a 3 node cluster, all three are hosts and gluster nodes (replica 2 + arbitrar). I DO have the mnt_options=backup-volfile-ser<wbr>vers= set:</div><div><br></div><div><div>storage=192.168.8.11:/engine</div><div>mnt_options=backup-volfile-ser<wbr>vers=192.168.8.12:192.168.8.13</div></div><div><br></div><div>I had an issue today where 192.168.8.11 went down. ALL VMs immediately paused, including the engine (all VMs were running on host2:192.168.8.12). 
I couldn't get any gluster stuff working until host1 (192.168.8.11) was restored.</div><div><br></div><div>What's wrong / what did I miss?</div><div><br></div><div>(This was set up "manually" through the article on setting up a self-hosted gluster cluster back when 4.0 was new... I've upgraded it to 4.1 since.)</div><div><br></div><div>Thanks!</div><div>--Jim</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 31, 2017 at 12:31 PM, Charles Kozler <span dir="ltr"><<a href="mailto:ckozleriii@gmail.com" target="_blank">ckozleriii@gmail.com</a>></span> wrote:<br><blockquote type="cite"><div dir="ltr">Typo..."Set it up and then failed that **HOST**"<div><br></div><div>And upon that host going down, the storage domain went down. I only have the hosted storage domain and this new one - is this why the DC went down and no SPM could be elected?</div><div><br></div><div>I don't recall this working this way in early 4.0 or 3.6</div></div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 31, 2017 at 3:30 PM, Charles Kozler <span dir="ltr"><<a href="mailto:ckozleriii@gmail.com" target="_blank">ckozleriii@gmail.com</a>></span> wrote:<br><blockquote type="cite"><div dir="ltr">So I've tested this today and I failed a node. Specifically, I set up a glusterfs domain and selected "host to use: node1". Set it up and then failed that VM<div><br></div><div>However, this did not work and the datacenter went down. 
My engine stayed up; however, it seems configuring a domain to pin to a "host to use" will obviously cause it to fail</div><div><br></div><div>This seems counter-intuitive to the point of glusterfs or any redundant storage. If a single host has to be tied to its function, this introduces a single point of failure</div><div><br></div><div>Am I missing something obvious?</div></div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250m_-6021655538959603885HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250m_-6021655538959603885h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 31, 2017 at 9:43 AM, Kasturi Narra <span dir="ltr"><<a href="mailto:knarra@redhat.com" target="_blank">knarra@redhat.com</a>></span> wrote:<br><blockquote type="cite"><div dir="ltr">Yes, right. What you can do is edit the hosted-engine.conf file; there is a parameter as shown below [1] - replace h2 and h3 with your second and third storage servers. 
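(A sketch of that edit against a scratch copy of the file, so nothing real is touched; h1/h2/h3 remain placeholders for your actual storage servers:)

```shell
# Work on a throwaway copy of the conf file, not the real one.
conf=$(mktemp)
printf 'storage=h1:/engine\nmnt_options=\n' > "$conf"
# Fill in the backup-volfile-servers mount option with the backup hosts.
sed -i 's|^mnt_options=.*|mnt_options=backup-volfile-servers=h2:h3|' "$conf"
result=$(grep '^mnt_options=' "$conf")
echo "$result"
rm -f "$conf"
```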
Then you will need to restart ovirt-ha-agent and ovirt-ha-broker services in all the nodes .<div><br></div><div>[1] 'mnt_options=backup-volfile-se<wbr>rvers=<h2>:<h3>' </div></div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250m_-6021655538959603885m_5951134109970997349HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250m_-6021655538959603885m_5951134109970997349h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 31, 2017 at 5:54 PM, Charles Kozler <span dir="ltr"><<a href="mailto:ckozleriii@gmail.com" target="_blank">ckozleriii@gmail.com</a>></span> wrote:<br><blockquote type="cite"><div dir="ltr">Hi Kasturi -<div><br></div><div>Thanks for feedback</div><span><div><br></div><div>> <span style="font-size:12.8px">If cockpit+gdeploy plugin would be have been used then that would have automatically detected glusterfs replica 3 volume created during Hosted Engine deployment and this question would not have been asked</span></div><div><span style="font-size:12.8px"> </span></div></span><div><span style="font-size:12.8px">Actually, doing hosted-engine --deploy it too also auto detects glusterfs. I know glusterfs fuse client has the ability to failover between all nodes in cluster, but I am still curious given the fact that I see in ovirt config node1:/engine (being node1 I set it to in hosted-engine --deploy). 
So my concern was to ensure and find out exactly how engine works when one node goes away and the fuse client moves over to the other node in the gluster cluster</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">But you did somewhat answer my question, the answer seems to be no (as default) and I will have to use hosted-engine.conf and change the parameter as you list</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">So I need to do something manual to create HA for engine on gluster? Yes?</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Thanks so much!</span></div></div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250m_-6021655538959603885m_5951134109970997349m_3449479715428376713HOEnZb"><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250m_-6021655538959603885m_5951134109970997349m_3449479715428376713h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 31, 2017 at 3:03 AM, Kasturi Narra <span dir="ltr"><<a href="mailto:knarra@redhat.com" target="_blank">knarra@redhat.com</a>></span> wrote:<br><blockquote type="cite"><div dir="ltr">Hi,<div><br></div><div> During Hosted Engine setup question about glusterfs volume is being asked because you have setup the volumes yourself. 
If the cockpit+gdeploy plugin had been used, it would have automatically detected the glusterfs replica 3 volume created during Hosted Engine deployment and this question would not have been asked.</div><div><br></div><div> During new storage domain creation, when glusterfs is selected there is a feature called 'use managed gluster volumes', and upon checking this all managed glusterfs volumes will be listed and you can choose the volume of your choice from the dropdown list.</div><div><br></div><div> There is a conf file called /etc/hosted-engine/hosted-engi<wbr>ne.conf where there is a parameter called backup-volfile-servers="h1:h2", and if one of the gluster nodes goes down the engine uses this parameter to provide ha / failover. </div><div><br></div><div> Hope this helps !!</div><div><br></div><div>Thanks</div><div>kasturi</div><div><br><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250m_-6021655538959603885m_5951134109970997349m_3449479715428376713m_-614118149965673531h5">On Wed, Aug 30, 2017 at 8:09 PM, Charles Kozler <span dir="ltr"><<a href="mailto:ckozleriii@gmail.com" target="_blank">ckozleriii@gmail.com</a>></span> wrote:<br></div></div><blockquote type="cite"><div><div class="m_-5430270818986214204m_-8688080050889145984m_-7392367026090838180m_7205832786860019338m_-5434233865126313778m_2102924069682492637m_8808093401255640080m_688166259021271601m_8337997229390174390m_-8986942953698210250m_-6021655538959603885m_5951134109970997349m_3449479715428376713m_-614118149965673531h5"><div dir="ltr">Hello -<div><br></div><div>I have successfully created a hyperconverged hosted engine setup consisting of 3 nodes - 2 for VMs and the third purely for storage. 
I manually configured it all; I did not use ovirt node or anything, and built the gluster volumes myself</div><div><br></div><div>However, I noticed that when setting up the hosted engine, and even when adding a new storage domain with the glusterfs type, it still asks for hostname:/volumename</div><div><br></div><div>This leads me to believe that if that one node goes down (ex: node1:/data), then the ovirt engine won't be able to communicate with that volume because it's trying to reach it on node 1, and thus go down</div><div><br></div><div>I know the glusterfs fuse client can connect to all nodes to provide failover/ha, but how does the engine handle this?</div></div>
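To make the question concrete: the fuse-side failover I am referring to is just a mount option. This sketch only assembles the command string (node1/node2/node3 and /mnt/data are placeholder names, not from my setup):

```shell
# Build a glusterfs fuse mount command with a failover list; the client falls
# back to the backup servers to fetch the volfile if the primary is unreachable.
primary=node1
backups=node2:node3
cmd="mount -t glusterfs -o backup-volfile-servers=${backups} ${primary}:/data /mnt/data"
echo "$cmd"
```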
<br></div></div>______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><br>
<br></blockquote></div><br></div>
<br></blockquote></div><br></div>
</div></div><br></blockquote></div><br></div>
</div></div><br></blockquote></div><br></div>
</div></div><br></blockquote></div><br></div>
</div></div><br>
<br></blockquote></div><br></div>
<pre>
</pre></blockquote></div></div></div><br>
<br></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div><br>
<br></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>