<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Tue, Jul 4, 2017 at 2:57 PM, Gianluca Cecchi <span dir="ltr">&lt;<a href="mailto:gianluca.cecchi@gmail.com" target="_blank">gianluca.cecchi@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><span class="gmail-"></span><div class="gmail_extra"><span class="gmail-"><span class="gmail-m_788177822560164983gmail-im" style="font-size:12.8px"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-m_788177822560164983gmail-m_-7846791724459284980gmail-"><div><br></div></span><div>No, it&#39;s not. One option is to update glusterfs packages to 3.10.<br></div></div></div></div></blockquote><div><br></div></span></span><span class="gmail-"><div style="font-size:12.8px">Is it supported throughout oVirt to use CentOS Storage SIG packages instead of ovirt provided ones? I imagine you mean it, correct?</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">If this is a case, would I have to go with Gluster 3.9 (non LTS)</div><div style="font-size:12.8px"><a href="https://lists.centos.org/pipermail/centos-announce/2017-January/022249.html" target="_blank">https://lists.centos.org/piper<wbr>mail/centos-announce/2017-<wbr>January/022249.html</a><br></div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">Or Gluster 3.10 (LTS)</div><div style="font-size:12.8px"><a href="https://lists.centos.org/pipermail/centos-announce/2017-March/022337.html" target="_blank">https://lists.centos.org/piper<wbr>mail/centos-announce/2017-<wbr>March/022337.html</a><br></div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">I suppose the latter...</div><div style="font-size:12.8px">Any problem then with updates of oVirt itself, eg going through 4.1.2 to 4.1.3?</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">Thanks</div><div style="font-size:12.8px">Gianluca</div></span></div><span class="gmail-m_788177822560164983gmail-m_-7846791724459284980gmail-"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><br><div>Is 3.9 version of Gluster packages provided when updating to upcoming 4.1.3, perhaps?</div></div></div></div></blockquote></span></div></blockquote><div><br></div><div>Never mind, I will verify. 
After all, this is a test system.<br></div><div>I put the nodes in maintenance one by one and then installed glusterfs 3.10 with:<br><br>yum install centos-release-gluster<br></div><div>yum update<br><br></div><div>All volumes were then able to self-heal and I see the 4 storage domains (engine, data, iso, export) up and running.<br></div><div>See some notes at the end of the e-mail.<br></div><div>Now I&#39;m ready to test moving the gluster traffic to its dedicated network (a small verification sketch is at the very end of this e-mail).</div><br></div><div class="gmail_quote">In my case the current hostnames, which also match the ovirtmgmt network, are ovirt0N.localdomain.local with N=1,2,3<br><br></div><div class="gmail_quote">On my vlan2, which has the gluster network role in the cluster, I have defined (in each node&#39;s /etc/hosts file) these hostnames:<br><br>10.10.2.102 gl01.localdomain.local gl01<br>10.10.2.103 gl02.localdomain.local gl02<br>10.10.2.104 gl03.localdomain.local gl03<br><br></div><div class="gmail_quote">I need more details about the commands to run:<br><br></div><div class="gmail_quote">Currently I have:<br><br>[root@ovirt03 ~]# gluster peer status<br>Number of Peers: 2<br><br>Hostname: ovirt01.localdomain.local<br>Uuid: e9717281-a356-42aa-a579-a4647a29a0bc<br>State: Peer in Cluster (Connected)<br>Other names:<br>10.10.2.102<br><br>Hostname: ovirt02.localdomain.local<br>Uuid: b89311fe-257f-4e44-8e15-9bff6245d689<br>State: Peer in Cluster (Connected)<br>Other names:<br>10.10.2.103<br><br></div><div class="gmail_quote">Suppose I start from the export volume, which has this info:<br><br>[root@ovirt03 ~]# gluster volume info export<br> <br>Volume Name: export<br>Type: Replicate<br>Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 1 x (2 + 1) = 3<br>Transport-type: tcp<br>Bricks:<br>Brick1: ovirt01.localdomain.local:/gluster/brick3/export<br>Brick2: ovirt02.localdomain.local:/gluster/brick3/export<br>Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)<br>...<br><br>then the commands I need to run would be:<br><br>gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export start<br>gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export gl01.localdomain.local:/gluster/brick3/export commit force<br><br></div><div class="gmail_quote">Correct?<br><br></div><div class="gmail_quote">Is it sufficient to run it on a single node? And then, on the same node, also run it for the other bricks of the same volume:<br><br>gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export start<br>
gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export gl02.localdomain.local:/gluster/brick3/export commit force<br><br></div><div class="gmail_quote">and<br></div><div class="gmail_quote"><br>gluster volume reset-brick export ovirt03.localdomain.local:/gluster/brick3/export start<br>
gluster volume reset-brick export ovirt03.localdomain.local:/gluster/brick3/export gl03.localdomain.local:/gluster/brick3/export commit force</div><div class="gmail_quote"><br></div><div class="gmail_quote">Correct? Do I have to wait for self-heal to complete after each commit command, before proceeding with the other ones? (I put a small sketch of the check I plan to run at the very end of this e-mail.)<br></div><div class="gmail_quote"><br></div><div class="gmail_quote">Thanks in advance for any input so that I can test it.<br><br></div><div class="gmail_quote">Gianluca<br><br><br></div><div class="gmail_quote">NOTE: during the update of the gluster packages from 3.8 to 3.10 I got these warnings:<br></div><div class="gmail_quote"><br>warning: /var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.local.gluster-brick1-engine.vol.rpmsave<br>warning: /var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.local.gluster-brick1-engine.vol.rpmsave<br>warning: /var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.local.gluster-brick1-engine.vol.rpmsave<br>warning: /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol saved as /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol.rpmsave<br>warning: /var/lib/glusterd/vols/engine/engine.tcp-fuse.vol saved as /var/lib/glusterd/vols/engine/engine.tcp-fuse.vol.rpmsave<br>warning: /var/lib/glusterd/vols/data/data.ovirt01.localdomain.local.gluster-brick2-data.vol saved as /var/lib/glusterd/vols/data/data.ovirt01.localdomain.local.gluster-brick2-data.vol.rpmsave<br>warning: /var/lib/glusterd/vols/data/data.ovirt02.localdomain.local.gluster-brick2-data.vol saved as /var/lib/glusterd/vols/data/data.ovirt02.localdomain.local.gluster-brick2-data.vol.rpmsave<br>warning: /var/lib/glusterd/vols/data/data.ovirt03.localdomain.local.gluster-brick2-data.vol saved as /var/lib/glusterd/vols/data/data.ovirt03.localdomain.local.gluster-brick2-data.vol.rpmsave<br>warning: /var/lib/glusterd/vols/data/trusted-data.tcp-fuse.vol saved as /var/lib/glusterd/vols/data/trusted-data.tcp-fuse.vol.rpmsave<br>warning: /var/lib/glusterd/vols/data/data.tcp-fuse.vol saved as /var/lib/glusterd/vols/data/data.tcp-fuse.vol.rpmsave<br>warning: /var/lib/glusterd/vols/export/export.ovirt01.localdomain.local.gluster-brick3-export.vol saved as /var/lib/glusterd/vols/export/export.ovirt01.localdomain.local.gluster-brick3-export.vol.rpmsave<br>warning: /var/lib/glusterd/vols/export/export.ovirt02.localdomain.local.gluster-brick3-export.vol saved as /var/lib/glusterd/vols/export/export.ovirt02.localdomain.local.gluster-brick3-export.vol.rpmsave<br>warning: /var/lib/glusterd/vols/export/export.ovirt03.localdomain.local.gluster-brick3-export.vol saved as /var/lib/glusterd/vols/export/export.ovirt03.localdomain.local.gluster-brick3-export.vol.rpmsave<br>warning: /var/lib/glusterd/vols/export/trusted-export.tcp-fuse.vol saved as /var/lib/glusterd/vols/export/trusted-export.tcp-fuse.vol.rpmsave<br>warning: /var/lib/glusterd/vols/export/export.tcp-fuse.vol saved as /var/lib/glusterd/vols/export/export.tcp-fuse.vol.rpmsave<br>warning: /var/lib/glusterd/vols/iso/iso.ovirt01.localdomain.local.gluster-brick4-iso.vol saved as /var/lib/glusterd/vols/iso/iso.ovirt01.localdomain.local.gluster-brick4-iso.vol.rpmsave<br>warning: /var/lib/glusterd/vols/iso/iso.ovirt02.localdomain.local.gluster-brick4-iso.vol saved as 
/var/lib/glusterd/vols/iso/iso.ovirt02.localdomain.local.gluster-brick4-iso.vol.rpmsave<br>warning: /var/lib/glusterd/vols/iso/iso.ovirt03.localdomain.local.gluster-brick4-iso.vol saved as /var/lib/glusterd/vols/iso/iso.ovirt03.localdomain.local.gluster-brick4-iso.vol.rpmsave<br>warning: /var/lib/glusterd/vols/iso/trusted-iso.tcp-fuse.vol saved as /var/lib/glusterd/vols/iso/trusted-iso.tcp-fuse.vol.rpmsave<br>warning: /var/lib/glusterd/vols/iso/iso.tcp-fuse.vol saved as /var/lib/glusterd/vols/iso/iso.tcp-fuse.vol.rpmsave<br>  Installing : python2-gluster-3.10.3-1.el7.x86_64                                                             9/20 <br>  Installing : python-prettytable-0.7.2-2.el7.centos.noarch                                                   10/20 <br>  Updating   : glusterfs-geo-replication-3.10.3-1.el7.x86_64                                                  11/20 <br>Warning: glusterd.service changed on disk. Run &#39;systemctl daemon-reload&#39; to reload units.<br>  Cleanup    : glusterfs-geo-replication-3.8.13-1.el7.x86_64                                                  12/20 <br>Warning: glusterd.service changed on disk. Run &#39;systemctl daemon-reload&#39; to reload units.<br><br></div><div class="gmail_quote">For each volume the differences were these (shown here for the engine volume):<br></div><div class="gmail_quote"><br>[root@ovirt02 engine]# diff engine.ovirt01.localdomain.local.gluster-brick1-engine.vol engine.ovirt01.localdomain.local.gluster-brick1-engine.vol.rpmsave<br>19,20c19,20<br>&lt;     option sql-db-wal-autocheckpoint 25000<br>&lt;     option sql-db-cachesize 12500<br>---<br>&gt;     option sql-db-wal-autocheckpoint 1000<br>&gt;     option sql-db-cachesize 1000<br>127c127<br>&lt; volume engine-io-stats<br>---<br>&gt; volume /gluster/brick1/engine<br>132d131<br>&lt;     option unique-id /gluster/brick1/engine<br>136c135<br>&lt; volume /gluster/brick1/engine<br>---<br>&gt; volume engine-decompounder<br>138c137<br>&lt;     subvolumes engine-io-stats<br>---<br>&gt;     subvolumes /gluster/brick1/engine<br>149c148<br>&lt;     subvolumes /gluster/brick1/engine<br>---<br>&gt;     subvolumes engine-decompounder<br>[root@ovirt02 engine]# <br><br><br>[root@ovirt02 engine]# diff trusted-engine.tcp-fuse.vol trusted-engine.tcp-fuse.vol.rpmsave <br>39d38<br>&lt;     option use-compound-fops off<br>70,72d68<br>&lt;     option rda-cache-limit 10MB<br>&lt;     option rda-request-size 131072<br>&lt;     option parallel-readdir off<br>[root@ovirt02 engine]# <br><br><br><br>[root@ovirt02 engine]# diff engine.tcp-fuse.vol engine.tcp-fuse.vol.rpmsave <br>33d32<br>&lt;     option use-compound-fops off<br>64,66d62<br>&lt;     option rda-cache-limit 10MB<br>&lt;     option rda-request-size 131072<br>&lt;     option parallel-readdir off<br>[root@ovirt02 engine]# <br><br><br></div><div class="gmail_quote">The message related to the glusterd service was misleading, though: I verified that the file /usr/lib/systemd/system/glusterd.service was actually the same as before.<br><br><br></div><div class="gmail_quote"><br></div></div></div>
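<div dir="ltr"><div class="gmail_quote">P.S. About my own question on waiting for self-heal: my tentative plan (just a sketch, assuming the volume and hostnames above) is to run, after every &quot;commit force&quot; and before touching the next brick, something like:<br><br># on any node, once per volume (here the export volume)<br>gluster volume heal export info<br><br>and to proceed only when every brick reports &quot;Number of entries: 0&quot;. Corrections welcome if this is not the right check.<br></div></div>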
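<div dir="ltr"><div class="gmail_quote">P.P.S. To verify afterwards that the bricks really moved to the gluster network, my idea (again only a sketch, assuming the gl0N names defined above) is something like:<br><br># the bricks should now be listed with the gl0N.localdomain.local names<br>gluster volume info export | grep -i brick<br><br># brick processes and their ports, now registered under the new names<br>gluster volume status export<br><br># connections of the brick processes should show the 10.10.2.x addresses<br>ss -tnp | grep glusterfsd<br></div></div>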