<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Tue, Oct 27, 2015 at 4:59 PM, <span dir="ltr"><<a href="mailto:nicolas@devels.es" target="_blank">nicolas@devels.es</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi,<br>
<br>
We're using oVirt 3.5.3.1 with GlusterFS as the storage backend. We added a Storage Domain with the path "gluster.fqdn1:/volume" and, as mount options, "backup-volfile-servers=gluster.fqdn2". We now need to restart both the gluster.fqdn1 and gluster.fqdn2 machines for a system update (not at the same time, obviously). We're worried because in previous attempts, when we restarted the main gluster node (gluster.fqdn1 in this case), all the VMs running against that storage backend were paused due to storage errors; we couldn't resume them and finally had to power them off the hard way and start them again.<br>
<br>
Gluster version on gluster.fqdn1 and gluster.fqdn2 is 3.6.3-1.<br>
<br>
Gluster configuration for that volume is:<br>
<br>
Volume Name: volume<br>
Type: Replicate<br>
Volume ID: a2d7e52c-2f63-4e72-9635-4e311baae6ff<br>
Status: Started<br>
Number of Bricks: 1 x 2 = 2<br></blockquote><div><br></div><div>This is a replica 2 volume, which is not supported.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Transport-type: tcp<br>
Bricks:<br>
Brick1: gluster.fqdn1:/gluster/brick_01/brick<br>
Brick2: gluster.fqdn2:/gluster/brick_01/brick<br>
Options Reconfigured:<br>
storage.owner-gid: 36<br>
storage.owner-uid: 36<br>
cluster.server-quorum-type: server<br>
cluster.quorum-type: none<br>
network.remote-dio: enable<br>
cluster.eager-lock: enable<br>
performance.stat-prefetch: off<br>
performance.io-cache: off<br>
performance.read-ahead: off<br>
performance.quick-read: off<br>
<br>
We would like to know if there's a "clean" way to perform such a procedure. We know that pausing all the VMs and then restarting the gluster nodes works with no harm, but the downtime of the VMs matters to us and we would like to avoid it, especially since we have 2 gluster nodes for exactly this reason.<br></blockquote><div><br></div><div>You should use replica 3. That configuration should be able to survive a reboot of one of the nodes.</div><div><br></div><div>If two nodes are down, the file system will become read-only, and the VMs will pause because of write errors.</div><div><br></div><div>Adding Sahina.</div><div><br></div><div>Nir</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Any hints are appreciated,<br>
<br>
Thanks.<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</blockquote></div><br></div></div>
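For anyone following this thread, a rough sketch of moving the volume above from replica 2 to replica 3, as suggested in the reply. This assumes a third host (called gluster.fqdn3 here, a placeholder not taken from the thread) has been peered and prepared with a brick at the same path as the existing nodes:

```shell
# Hypothetical third node; gluster.fqdn3 and its brick path are
# placeholders, not hosts mentioned in this thread.
gluster peer probe gluster.fqdn3

# Grow the replica count from 2 to 3 by adding a third brick.
gluster volume add-brick volume replica 3 \
    gluster.fqdn3:/gluster/brick_01/brick

# Trigger a full self-heal so the new brick receives a copy of the data.
gluster volume heal volume full

# With three replicas, client-side quorum can be enforced safely
# (the volume above currently has cluster.quorum-type: none).
gluster volume set volume cluster.quorum-type auto
```

With replica 3 and quorum enabled, one node can be rebooted at a time while the remaining two keep serving writes; with two nodes down the volume turns read-only and VMs pause, as described above.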