
Ok.

I am not sure deleting the file or a re-peer probe would be the right way to go.

Gluster-users can help you here.

On 05/21/2014 07:08 PM, Gabi C wrote:
Hello!
I haven't changed the IP, nor reinstalled any nodes. All nodes are updated via yum. All I can think of is that, after having some issue with gluster, from the WebGUI I deleted the VM and deactivated and detached the storage domains (I have 2); then, _manually_, from one of the nodes, I removed the bricks, detached the peers, probed them again, added the bricks back, brought the volume up, and re-added the storage domains from the WebGUI (the peer steps are sketched just below).
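(For reference only, the detach/probe part of that sequence would look roughly like this -- "node3" is a placeholder for the affected host, not a name taken from this setup:)

gluster peer detach node3   # after its bricks have been removed from the volume
gluster peer probe node3    # re-add it to the trusted storage pool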
On Wed, May 21, 2014 at 4:26 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
What are the steps which led to this situation?
Did you re-install one of the nodes after forming the cluster, or reboot it in a way that could have changed the IP?
On 05/21/2014 03:43 PM, Gabi C wrote:
On the affected node:
gluster peer status

Number of Peers: 3

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)

Hostname: 10.125.1.196
Uuid: c22e41b8-2818-4a96-a6df-a237517836d6
State: Peer in Cluster (Connected)

Hostname: 10.125.1.194
Uuid: 85c2a08c-a955-47cc-a924-cf66c6814654
State: Peer in Cluster (Connected)
ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 11:10 .
drwxr-xr-x. 9 root root 4096 May 21 11:09 ..
-rw-------. 1 root root 73 May 21 11:10 85c2a08c-a955-47cc-a924-cf66c6814654
-rw-------. 1 root root 73 May 21 10:52 c22e41b8-2818-4a96-a6df-a237517836d6
-rw-------. 1 root root 73 May 21 11:10 d95558a0-a306-4812-aec2-a361a9ddde3e
Should I delete d95558a0-a306-4812-aec2-a361a9ddde3e?
On Wed, May 21, 2014 at 12:00 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
On 05/21/2014 02:04 PM, Gabi C wrote:
Hello!
I have an oVirt setup, 3.4.1, up to date, with gluster package 3.5.0-3.fc19 on all 3 nodes. The glusterfs setup is replicated across 3 bricks. On 2 nodes 'gluster peer status' shows 2 peers connected, each with its UUID. On the third node 'gluster peer status' shows 3 peers, two of which refer to the same node/IP but with different UUIDs.
On every node you can find the peers in /var/lib/glusterd/peers/.
You can get the UUID of the current node using the command "gluster system:: uuid get".
From this you can find which file is wrong in the above location (see the sketch below).
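(For illustration, a minimal way to apply that check on the third node -- read-only, nothing is deleted:)

gluster system:: uuid get               # UUID of the local node; it should NOT appear under peers/
grep -H . /var/lib/glusterd/peers/*     # each peer file records that peer's uuid and hostname
gluster peer status                     # compare against the reported peer list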
[Adding gluster-users@ovirt.org]
What I have tried:
- stopped the gluster volumes, put the 3rd node in maintenance, rebooted -> no effect;
- stopped the volumes, removed the bricks belonging to the 3rd node, re-added them, started the volumes -> still no effect (roughly the commands sketched below).
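(Again for reference only -- the brick cycle described above corresponds roughly to the commands below; the volume name "vol0" and the brick path are placeholders, and exact remove-brick/add-brick semantics vary a bit between gluster versions:)

gluster volume stop vol0
gluster volume remove-brick vol0 replica 2 node3:/export/brick1/vol0 force
gluster volume add-brick vol0 replica 3 node3:/export/brick1/vol0
gluster volume start vol0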
Any ideas, hints?
TIA