
Hello! I have an oVirt 3.4.1 setup, up to date, with gluster package 3.5.0-3.fc19 on all 3 nodes. The GlusterFS setup is replicated across 3 bricks. On 2 nodes 'gluster peer status' reports 2 connected peers, each with its own UUID. On the third node 'gluster peer status' reports 3 peers, two of which refer to the same node/IP but with different UUIDs.

What I have tried:
- stopped the gluster volumes, put the 3rd node in maintenance, rebooted -> no effect;
- stopped the volumes, removed the bricks belonging to the 3rd node, re-added them, started the volumes -> still no effect.

Any ideas, hints?

TIA

On 05/21/2014 02:04 PM, Gabi C wrote:
Hello!
I have an oVirt 3.4.1 setup, up to date, with gluster package 3.5.0-3.fc19 on all 3 nodes. The GlusterFS setup is replicated across 3 bricks. On 2 nodes 'gluster peer status' reports 2 connected peers, each with its own UUID. On the third node 'gluster peer status' reports 3 peers, two of which refer to the same node/IP but with different UUIDs.
On every node you can find the peers in /var/lib/glusterd/peers/. You can get the UUID of the current node using the command "gluster system:: uuid get". From this you can find which file is wrong in the above location. [Adding gluster-users@ovirt.org]
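Concretely, that check could look like the minimal sketch below, assuming the stock gluster CLI and the default /var/lib/glusterd layout mentioned above:

# On each node: print this node's own UUID.
gluster system:: uuid get

# List the peer files; there should be exactly one file per *other* node,
# and none of the file names should equal the local UUID printed above.
ls /var/lib/glusterd/peers/

# Show which UUID/host each peer file describes, to spot the duplicate entry.
grep -H . /var/lib/glusterd/peers/*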

On the affected node:

gluster peer status
Number of Peers: 3

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)

Hostname: 10.125.1.196
Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
State: Peer in Cluster (Connected)

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)

ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 11:10 .
drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
-rw-------. 1 root root 73 May 21 11:10 85c2a08c-a955-47cc-a924-cf66c6814654
-rw-------. 1 root root 73 May 21 10:52 c22e41b8-2818-4a96-a6df-a237517836d6
-rw-------. 1 root root 73 May 21 11:10 d95558a0-a306-4812-aec2-a361a9ddde3e

Should I delete d95558a0-a306-4812-aec2-a361a9ddde3e?

...or should I:
- stop the volumes
- remove the brick belonging to the affected node
- remove the affected node/peer
- add the node and its brick back, then start the volumes?
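Spelled out as commands, that sequence would look roughly like the sketch below. The volume name (vol1), the affected host (10.125.1.195) and the brick path (/export/brick1) are placeholders, and the exact replica-count arguments can differ between gluster releases; as the replies further down show, the eventual fix turned out to be much simpler than this.

# Run from one of the healthy nodes.
gluster volume stop vol1

# Drop the affected node's brick, shrinking the replica set from 3 to 2.
gluster volume remove-brick vol1 replica 2 10.125.1.195:/export/brick1 force

# Remove and re-probe the affected peer.
gluster peer detach 10.125.1.195
gluster peer probe 10.125.1.195

# Add the brick back, growing the replica set to 3 again, and restart the volume.
# (Re-using a brick directory that was previously part of a volume may be
# refused until its old volume metadata is cleared.)
gluster volume add-brick vol1 replica 3 10.125.1.195:/export/brick1
gluster volume start vol1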

What are the steps that led to this situation? Did you re-install one of the nodes after forming the cluster, or reboot in a way that could have changed the IP?

Hello! I haven't changed the IP, nor reinstalled any nodes. All nodes are updated via yum. All I can think of is that, after having some issue with gluster, I deleted the VM from the WebGUI and deactivated and detached the storage domains (I have 2); then, *manually*, from one of the nodes, I removed the bricks, detached the peers, probed them again, added the bricks back, brought the volume up, and re-added the storage domains from the WebGUI.

OK. I am not sure whether deleting the file or re-running the peer probe would be the right way to go. Gluster-users can help you here.

Can you please check the output of 'cat /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e'? If it contains information about the duplicated peer, and neither of the other 2 nodes has this file in /var/lib/glusterd/peers/, the file can be moved out of /var/lib/glusterd or deleted.

Regards,
Vijay
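A short sketch of that check, using the host names that appear later in this thread (virtual4, virtual5, virtual6) and assuming, purely for convenience, that SSH between the nodes works:

# On the problematic node (virtual5): inspect the suspect peer file.
cat /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e

# On each of the other two nodes: confirm no file with that name exists there.
ssh virtual4 'ls /var/lib/glusterd/peers/ | grep d95558a0 || echo not present'
ssh virtual6 'ls /var/lib/glusterd/peers/ | grep d95558a0 || echo not present'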

On the problematic node:

[root@virtual5 ~]# ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 16:33 .
drwxr-xr-x. 9 root root 4096 May 21 16:33 ..
-rw-------. 1 root root 73 May 21 16:33 85c2a08c-a955-47cc-a924-cf66c6814654
-rw-------. 1 root root 73 May 21 16:33 c22e41b8-2818-4a96-a6df-a237517836d6
-rw-------. 1 root root 73 May 21 16:33 d95558a0-a306-4812-aec2-a361a9ddde3e

[root@virtual5 ~]# cat /var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194

[root@virtual5 ~]# cat /var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
uuid=c22e41b8-2818-4a96-a6df-a237517836d6
state=3
hostname1=10.125.1.196

[root@virtual5 ~]# cat /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194

On the other 2 nodes:

[root@virtual4 ~]# ls -la /var/lib/glusterd/peers/
total 16
drwxr-xr-x. 2 root root 4096 May 21 16:34 .
drwxr-xr-x. 9 root root 4096 May 21 16:34 ..
-rw-------. 1 root root 73 May 21 16:34 bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
-rw-------. 1 root root 73 May 21 11:09 c22e41b8-2818-4a96-a6df-a237517836d6

[root@virtual4 ~]# cat /var/lib/glusterd/peers/bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
uuid=bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
state=3
hostname1=10.125.1.195

[root@virtual4 ~]# cat /var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
uuid=c22e41b8-2818-4a96-a6df-a237517836d6
state=3
hostname1=10.125.1.196

[root@virtual6 ~]# ls -la /var/lib/glusterd/peers/
total 16
drwxr-xr-x. 2 root root 4096 May 21 16:34 .
drwxr-xr-x. 9 root root 4096 May 21 16:34 ..
-rw-------. 1 root root 73 May 21 11:10 85c2a08c-a955-47cc-a924-cf66c6814654
-rw-------. 1 root root 73 May 21 16:34 bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84

[root@virtual6 ~]# cat /var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194

[root@virtual6 ~]# cat /var/lib/glusterd/peers/bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
uuid=bd2e35c6-bb9a-4ec0-a6e4-23baa123dd84
state=3
hostname1=10.125.1.195

Looks like this is stale information for 10.125.1.194 that has somehow persisted. Deleting this file and then restarting glusterd on this node should lead to a consistent state for the peers.

Regards,
Vijay
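A minimal sketch of that fix, assuming glusterd runs as a systemd service on these Fedora 19 hosts (use 'service glusterd restart' on init-based systems):

# On the affected node (virtual5) only: move the stale peer file aside
# rather than deleting it outright, so it can be restored if needed.
mv /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e /root/

# Restart glusterd so it rebuilds its peer list from the files on disk.
systemctl restart glusterd

# Verify: every node should now report exactly 2 peers, each appearing once.
gluster peer status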

...just did it, and everything seems to be OK! Many thanks!
participants (3)
- Gabi C
- Kanagaraj
- Vijay Bellur