<div dir="ltr"><div><div>Hello!<br><br><br></div>I haven&#39;t changed the IP, nor reinstalled any nodes. All nodes are updated via yum. All I can think of is that, after having some issue with gluster, I deleted the VM from the WebGUI, deactivated and detached the storage domains (I have 2), then, <u>manually</u>, from one of the nodes, removed the bricks, detached the peers, probed them, added the bricks again, brought the volume up, and re-added the storage domains from the WebGUI.<br>
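The manual sequence described above might be sketched roughly as below. This is only an illustration of the order of operations, not a verified procedure: the volume name (`data`), brick paths, replica counts, and peer IP are hypothetical placeholders, and the commands must be run against a live cluster.

```shell
# Hypothetical sketch of the manual recovery sequence described above,
# run from one of the surviving nodes. Names and paths are placeholders.
gluster volume stop data                        # stop the volume first
gluster volume remove-brick data replica 2 \
  10.125.1.196:/export/brick1 force             # drop the affected brick
gluster peer detach 10.125.1.196 force          # detach the stale peer
gluster peer probe 10.125.1.196                 # re-probe it
gluster volume add-brick data replica 3 \
  10.125.1.196:/export/brick1 force             # re-add the brick
gluster volume start data                       # bring the volume back up
```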
</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, May 21, 2014 at 4:26 PM, Kanagaraj <span dir="ltr">&lt;<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    What are the steps which led this situation?<br>
    <br>
    Did you re-install one of the nodes after forming the cluster or
    reboot which could have changed the ip?<div><div class="h5"><br>
    <br>
    <br>
    <div>On 05/21/2014 03:43 PM, Gabi C wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">
        <div>
          <div>On the affected node:<br>
          </div>
          <div><br>
            gluster peer status<br>
            Number of Peers: 3<br>
            <br>
            Hostname: 10.125.1.194<br>
            Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654<br>
            State: Peer in Cluster (Connected)<br>
            <br>
            Hostname: 10.125.1.196<br>
            Uuid: c22e41b8-2818-4a96-a6df-a237517836d6<br>
            State: Peer in Cluster (Connected)<br>
            <br>
            Hostname: 10.125.1.194<br>
            Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654<br>
            State: Peer in Cluster (Connected)<br>
            <br>
            <br>
            <br>
            <br>
            <br>
          </div>
          ls -la /var/lib/glusterd/peers/<br>
          total 20<br>
          drwxr-xr-x. 2 root root 4096 May 21 11:10 .<br>
          drwxr-xr-x. 9 root root 4096 May 21 11:09 ..<br>
          -rw-------. 1 root root   73 May 21 11:10
          85c2a08c-a955-47cc-a924-cf66c6814654<br>
          -rw-------. 1 root root   73 May 21 10:52
          c22e41b8-2818-4a96-a6df-a237517836d6<br>
          -rw-------. 1 root root   73 May 21 11:10
          d95558a0-a306-4812-aec2-a361a9ddde3e<br>
          <br>
          <br>
        </div>
        Should I delete d95558a0-a306-4812-aec2-a361a9ddde3e?<br>
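One way to spot which peer file is the stale one: each file under /var/lib/glusterd/peers/ records `uuid=`, `state=`, and `hostname1=` lines, so a hostname that appears in more than one file marks the duplicate entry. A minimal helper sketch (the function name is hypothetical):

```shell
# find_dup_hostnames: print any hostname recorded in more than one
# glusterd peer file (each file holds uuid=, state=, hostname1= lines).
find_dup_hostnames() {
  dir="$1"    # e.g. /var/lib/glusterd/peers
  grep -h '^hostname1=' "$dir"/* | sort | uniq -d | sed 's/^hostname1=//'
}
```

Run against the directory above, this would print the IP recorded by two different UUID files, pointing at the extra file as the deletion candidate.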
        <br>
        <br>
        <br>
      </div>
      <div class="gmail_extra"><br>
        <br>
        <div class="gmail_quote">On Wed, May 21, 2014 at 12:00 PM,
          Kanagaraj <span dir="ltr">&lt;<a href="mailto:kmayilsa@redhat.com" target="_blank">kmayilsa@redhat.com</a>&gt;</span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000">
              <div> <br>
                <div>On 05/21/2014 02:04 PM, Gabi C wrote:<br>
                </div>
                <blockquote type="cite">
                  <div dir="ltr">
                    <div>
                      <div>
                        <div>Hello!<br>
                          <br>
                        </div>
                        I have an oVirt setup, 3.4.1, up to date, with
                        gluster package 3.5.0-3.fc19 on all 3 nodes. The
                        GlusterFS setup is replicated across 3 bricks. On 2
                        nodes, &#39;gluster peer status&#39; reports 2 peers
                        connected, each with its UUID. On the third node,
                        &#39;gluster peer status&#39; reports 3 peers, two of
                        which refer to the same node/IP but with different
                        UUIDs.<br>
                      </div>
                    </div>
                  </div>
                </blockquote>
                <br>
              </div>
              On every node you can find the peers in
              /var/lib/glusterd/peers/<br>
              <br>
              You can get the UUID of the current node using the command
              &quot;gluster system:: uuid get&quot;<br>
              <br>
              From this you can find which file is wrong in the above
              location.<br>
              <br>
              [Adding <a href="mailto:gluster-users@ovirt.org" target="_blank">gluster-users@ovirt.org</a>]<br>
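As a sketch of that check: list what each peer file records, then compare against the UUID that `gluster system:: uuid get` reports on each node — any file whose UUID matches no node is the wrong one. The helper name is hypothetical; the directory default is the standard glusterd location:

```shell
# list_peers: print the uuid and hostname recorded by each glusterd
# peer file, for comparison with "gluster system:: uuid get" output
# collected from every node in the cluster.
list_peers() {
  dir="${1:-/var/lib/glusterd/peers}"
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    printf '%s -> %s %s\n' "$(basename "$f")" \
      "$(grep '^uuid=' "$f")" "$(grep '^hostname1=' "$f")"
  done
}
```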
              <br>
              <blockquote type="cite">
                <div>
                  <div dir="ltr">
                    <div>
                      <div> <br>
                      </div>
                      What I have tried:<br>
                    </div>
                    <div>- stopped the gluster volumes, put the 3rd node in
                      maintenance, rebooted -&gt; no effect;<br>
                    </div>
                    <div>- stopped the volumes, removed the bricks belonging
                      to the 3rd node, re-added them, started the volumes,
                      but still no effect.<br>
                      <br>
                      <br>
                    </div>
                    <div>Any ideas, hints?<br>
                      <br>
                    </div>
                    <div>TIA<br>
                    </div>
                  </div>
                  <br>
                  <fieldset></fieldset>
                  <br>
                </div>
                <pre>_______________________________________________
Users mailing list
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a>
<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
              </blockquote>
              <br>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br></div>