<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jul 5, 2017 at 3:10 AM, Gianluca Cecchi <span dir="ltr">&lt;<a target="_blank" href="mailto:gianluca.cecchi@gmail.com">gianluca.cecchi@gmail.com</a>&gt;</span> wrote:<br><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-">On Tue, Jul 4, 2017 at 2:57 PM, Gianluca Cecchi <span dir="ltr">&lt;<a target="_blank" href="mailto:gianluca.cecchi@gmail.com">gianluca.cecchi@gmail.com</a>&gt;</span> wrote:<br><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><span class="gmail-m_834052203871927088gmail-"></span><div class="gmail_extra"><span class="gmail-m_834052203871927088gmail-"><span style="font-size:12.8px" class="gmail-m_834052203871927088gmail-m_788177822560164983gmail-im"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-m_834052203871927088gmail-m_788177822560164983gmail-m_-7846791724459284980gmail-"><div><br></div></span><div>No, it&#39;s not. One option is to update glusterfs packages to 3.10.<br></div></div></div></div></blockquote><div><br></div></span></span><span class="gmail-m_834052203871927088gmail-"><div style="font-size:12.8px">Is it supported throughout oVirt to use CentOS Storage SIG packages instead of ovirt provided ones? I imagine you mean it, correct?</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">If this is a case, would I have to go with Gluster 3.9 (non LTS)</div><div style="font-size:12.8px"><a target="_blank" href="https://lists.centos.org/pipermail/centos-announce/2017-January/022249.html">https://lists.centos.org/piper<wbr>mail/centos-announce/2017-Janu<wbr>ary/022249.html</a><br></div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">Or Gluster 3.10 (LTS)</div><div style="font-size:12.8px"><a target="_blank" href="https://lists.centos.org/pipermail/centos-announce/2017-March/022337.html">https://lists.centos.org/piper<wbr>mail/centos-announce/2017-Marc<wbr>h/022337.html</a><br></div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">I suppose the latter...</div><div style="font-size:12.8px">Any problem then with updates of oVirt itself, eg going through 4.1.2 to 4.1.3?</div><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">Thanks</div><div style="font-size:12.8px">Gianluca</div></span></div><span class="gmail-m_834052203871927088gmail-m_788177822560164983gmail-m_-7846791724459284980gmail-"><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><br><div>Is 3.9 version of Gluster packages provided when updating to upcoming 4.1.3, perhaps?</div></div></div></div></blockquote></span></div></blockquote><div><br></div></span><div>Never mind, I will verify. 
> In the end, this is a test system.
> I put the nodes into maintenance one by one and then installed glusterfs 3.10 with:
>
> yum install centos-release-gluster
> yum update
>
> All of them were then able to self-heal, and I see the 4 storage domains
> (engine, data, iso, export) up and running.
> See some notes at the end of the e-mail.
> Now I'm ready to test the change of the gluster network traffic.
>
> In my case the current hostnames, which also match the ovirtmgmt network,
> are ovirt0N.localdomain.com with N=1,2,3.
>
> On my vlan2, defined with the gluster network role in the cluster, I have
> defined (in each node's /etc/hosts file) these hostnames:
>
> 10.10.2.102 gl01.localdomain.local gl01
> 10.10.2.103 gl02.localdomain.local gl02
> 10.10.2.104 gl03.localdomain.local gl03
>
> I need more details about the commands to run.
>
> Currently I have:
>
> [root@ovirt03 ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: ovirt01.localdomain.local
> Uuid: e9717281-a356-42aa-a579-a4647a29a0bc
> State: Peer in Cluster (Connected)
> Other names:
> 10.10.2.102
>
> Hostname: ovirt02.localdomain.local
> Uuid: b89311fe-257f-4e44-8e15-9bff6245d689
> State: Peer in Cluster (Connected)
> Other names:
> 10.10.2.103
>
> Suppose I start from the export volume, which has this info:
>
> [root@ovirt03 ~]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt01.localdomain.local:/gluster/brick3/export
> Brick2: ovirt02.localdomain.local:/gluster/brick3/export
> Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
> ...
>
> Then the commands I need to run would be:
>
> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export start
> gluster volume reset-brick export ovirt01.localdomain.local:/gluster/brick3/export gl01.localdomain.local:/gluster/brick3/export commit force
>
> Correct?

Yes, correct. gl01.localdomain.local should resolve correctly on all 3 nodes.
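
For example, a quick way to double-check the resolution on each of the three
nodes before starting is something along these lines (just a sketch; getent
goes through the normal resolver order, so it also picks up /etc/hosts entries):

# check that every gl0N name resolves on this node
for h in gl01 gl02 gl03; do getent hosts ${h}.localdomain.local; done

Each output line should show the 10.10.2.x address you put in /etc/hosts.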

> Is it sufficient to run it on a single node? And then, on the same node, to
> run it also for the other bricks of the same volume:

Yes, it is sufficient to run it on a single node. You can run the reset-brick
commands for all bricks from the same node.
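
For example, if you prefer to script it, the two remaining bricks of the
export volume could be driven from one node with a small loop along these
lines (only a sketch: it assumes the gl0N names from your /etc/hosts, the
/gluster/brick3/export path from the volume info above, and that each step
completes cleanly; the same pattern would then repeat for engine, data and
iso with their own brick directories):

# hypothetical helper loop: move the remaining export bricks to the gl0N names
for i in 2 3; do
    gluster volume reset-brick export ovirt0${i}.localdomain.local:/gluster/brick3/export start
    gluster volume reset-brick export ovirt0${i}.localdomain.local:/gluster/brick3/export gl0${i}.localdomain.local:/gluster/brick3/export commit force
done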

> gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export start
> gluster volume reset-brick export ovirt02.localdomain.local:/gluster/brick3/export gl02.localdomain.local:/gluster/brick3/export commit force
>
> and
>
> gluster volume reset-brick export ovirt03.localdomain.local:/gluster/brick3/export start
> gluster volume reset-brick export ovirt03.localdomain.local:/gluster/brick3/export gl03.localdomain.local:/gluster/brick3/export commit force
>
> Correct? Do I have to wait for self-heal after each commit command before
> proceeding with the next ones?

Ideally, gluster should recognize this as the same brick as before, and a heal
will not be needed. Please confirm that this is indeed the case before
proceeding.
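
For example, after each commit you could check that nothing is pending with:

gluster volume heal export info

If the brick was indeed picked up as the same one, it should report 0 entries
for each brick.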

> Thanks in advance for input so that I can test it.
>
> Gianluca
>
> NOTE: during the update of the gluster packages from 3.8 to 3.10 I got these:
>
> warning: /var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/engine.ovirt01.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/engine.ovirt02.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.local.gluster-brick1-engine.vol saved as /var/lib/glusterd/vols/engine/engine.ovirt03.localdomain.local.gluster-brick1-engine.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol saved as /var/lib/glusterd/vols/engine/trusted-engine.tcp-fuse.vol.rpmsave
> warning: /var/lib/glusterd/vols/engine/engine.tcp-fuse.vol saved as /var/lib/glusterd/vols/engine/engine.tcp-fuse.vol.rpmsave
> warning: /var/lib/glusterd/vols/data/data.ovirt01.localdomain.local.gluster-brick2-data.vol saved as /var/lib/glusterd/vols/data/data.ovirt01.localdomain.local.gluster-brick2-data.vol.rpmsave
> warning: /var/lib/glusterd/vols/data/data.ovirt02.localdomain.local.gluster-brick2-data.vol saved as /var/lib/glusterd/vols/data/data.ovirt02.localdomain.local.gluster-brick2-data.vol.rpmsave
> warning: /var/lib/glusterd/vols/data/data.ovirt03.localdomain.local.gluster-brick2-data.vol saved as /var/lib/glusterd/vols/data/data.ovirt03.localdomain.local.gluster-brick2-data.vol.rpmsave
> warning: /var/lib/glusterd/vols/data/trusted-data.tcp-fuse.vol saved as /var/lib/glusterd/vols/data/trusted-data.tcp-fuse.vol.rpmsave
> warning: /var/lib/glusterd/vols/data/data.tcp-fuse.vol saved as /var/lib/glusterd/vols/data/data.tcp-fuse.vol.rpmsave
> warning: /var/lib/glusterd/vols/export/export.ovirt01.localdomain.local.gluster-brick3-export.vol saved as /var/lib/glusterd/vols/export/export.ovirt01.localdomain.local.gluster-brick3-export.vol.rpmsave
> warning: /var/lib/glusterd/vols/export/export.ovirt02.localdomain.local.gluster-brick3-export.vol saved as /var/lib/glusterd/vols/export/export.ovirt02.localdomain.local.gluster-brick3-export.vol.rpmsave
> warning: /var/lib/glusterd/vols/export/export.ovirt03.localdomain.local.gluster-brick3-export.vol saved as /var/lib/glusterd/vols/export/export.ovirt03.localdomain.local.gluster-brick3-export.vol.rpmsave
> warning: /var/lib/glusterd/vols/export/trusted-export.tcp-fuse.vol saved as /var/lib/glusterd/vols/export/trusted-export.tcp-fuse.vol.rpmsave
> warning: /var/lib/glusterd/vols/export/export.tcp-fuse.vol saved as /var/lib/glusterd/vols/export/export.tcp-fuse.vol.rpmsave
> warning: /var/lib/glusterd/vols/iso/iso.ovirt01.localdomain.local.gluster-brick4-iso.vol saved as /var/lib/glusterd/vols/iso/iso.ovirt01.localdomain.local.gluster-brick4-iso.vol.rpmsave
> warning: /var/lib/glusterd/vols/iso/iso.ovirt02.localdomain.local.gluster-brick4-iso.vol saved as /var/lib/glusterd/vols/iso/iso.ovirt02.localdomain.local.gluster-brick4-iso.vol.rpmsave
> warning: /var/lib/glusterd/vols/iso/iso.ovirt03.localdomain.local.gluster-brick4-iso.vol saved as /var/lib/glusterd/vols/iso/iso.ovirt03.localdomain.local.gluster-brick4-iso.vol.rpmsave
> warning: /var/lib/glusterd/vols/iso/trusted-iso.tcp-fuse.vol saved as /var/lib/glusterd/vols/iso/trusted-iso.tcp-fuse.vol.rpmsave
> warning: /var/lib/glusterd/vols/iso/iso.tcp-fuse.vol saved as /var/lib/glusterd/vols/iso/iso.tcp-fuse.vol.rpmsave
>   Installing : python2-gluster-3.10.3-1.el7.x86_64                        9/20
>   Installing : python-prettytable-0.7.2-2.el7.centos.noarch              10/20
>   Updating   : glusterfs-geo-replication-3.10.3-1.el7.x86_64             11/20
> Warning: glusterd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
>   Cleanup    : glusterfs-geo-replication-3.8.13-1.el7.x86_64             12/20
> Warning: glusterd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
>
> For each volume the differences were these:
>
> [root@ovirt02 engine]# diff engine.ovirt01.localdomain.local.gluster-brick1-engine.vol engine.ovirt01.localdomain.local.gluster-brick1-engine.vol.rpmsave
> 19,20c19,20
> <     option sql-db-wal-autocheckpoint 25000
> <     option sql-db-cachesize 12500
> ---
> >     option sql-db-wal-autocheckpoint 1000
> >     option sql-db-cachesize 1000
> 127c127
> < volume engine-io-stats
> ---
> > volume /gluster/brick1/engine
> 132d131
> <     option unique-id /gluster/brick1/engine
> 136c135
> < volume /gluster/brick1/engine
> ---
> > volume engine-decompounder
> 138c137
> <     subvolumes engine-io-stats
> ---
> >     subvolumes /gluster/brick1/engine
> 149c148
> <     subvolumes /gluster/brick1/engine
> ---
> >     subvolumes engine-decompounder
> [root@ovirt02 engine]#
>
> [root@ovirt02 engine]# diff trusted-engine.tcp-fuse.vol trusted-engine.tcp-fuse.vol.rpmsave
> 39d38
> <     option use-compound-fops off
> 70,72d68
> <     option rda-cache-limit 10MB
> <     option rda-request-size 131072
> <     option parallel-readdir off
> [root@ovirt02 engine]#
>
> [root@ovirt02 engine]# diff engine.tcp-fuse.vol engine.tcp-fuse.vol.rpmsave
> 33d32
> <     option use-compound-fops off
> 64,66d62
> <     option rda-cache-limit 10MB
> <     option rda-request-size 131072
> <     option parallel-readdir off
> [root@ovirt02 engine]#
>
> The message related to the glusterd service was misleading, because I
> actually verified that the file /usr/lib/systemd/system/glusterd.service
> was the same as before.
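
If you want to cross-check that against the rpm database, something along
these lines should work (rpm -V only lists files of the owning package that
differ from what was shipped, so if glusterd.service does not appear in the
output, the unit on disk still matches the package):

rpm -V $(rpm -qf /usr/lib/systemd/system/glusterd.service)

Running 'systemctl daemon-reload' once is harmless in any case.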