...just did it and it seems to be OK!

Many thanks!



On Fri, May 23, 2014 at 3:11 PM, Vijay Bellur <vbellur@redhat.com> wrote:
On 05/23/2014 05:25 PM, Gabi C wrote:
On problematic node:

[root@virtual5 ~]# ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 16:33 .
drwxr-xr-x. 9 root root 4096 May 21 16:33 ..
-rw-------. 1 root root   73 May 21 16:33 85c2a08c-a955-47cc-a924-cf66c6814654
-rw-------. 1 root root   73 May 21 16:33 c22e41b8-2818-4a96-a6df-a237517836d6
-rw-------. 1 root root   73 May 21 16:33 d95558a0-a306-4812-aec2-a361a9ddde3e
[root@virtual5 ~]# cat /var/lib/glusterd/peers/85c2a08c-a955-47cc-a924-cf66c6814654
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194
[root@virtual5 ~]# cat /var/lib/glusterd/peers/c22e41b8-2818-4a96-a6df-a237517836d6
uuid=c22e41b8-2818-4a96-a6df-a237517836d6
state=3
hostname1=10.125.1.196
[root@virtual5 ~]# cat /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
uuid=85c2a08c-a955-47cc-a924-cf66c6814654
state=3
hostname1=10.125.1.194


Looks like this is stale information for 10.125.1.194 that has somehow persisted: the file is named d95558a0-a306-4812-aec2-a361a9ddde3e, but it contains the uuid of peer 85c2a08c-a955-47cc-a924-cf66c6814654, so the same peer is recorded twice. Deleting this file and then restarting glusterd on this node should lead to a consistent state for the peers.
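Sketched as commands, the fix would look like the following (assuming systemd manages glusterd; adapt to your init system). The `check_peers` helper is hypothetical, not part of GlusterFS — it just flags any peer file whose name does not match the uuid= line inside it, which is exactly the symptom seen above:

```shell
# Remove the stale peer file and restart glusterd (run on the problematic node):
#   systemctl stop glusterd
#   rm /var/lib/glusterd/peers/d95558a0-a306-4812-aec2-a361a9ddde3e
#   systemctl start glusterd

# Hypothetical helper: report peer files whose name and uuid= line disagree.
check_peers() {
  for f in "$1"/*; do
    uuid=$(sed -n 's/^uuid=//p' "$f")
    [ "$(basename "$f")" = "$uuid" ] || echo "stale: $(basename "$f") contains uuid=$uuid"
  done
}

# Usage: check_peers /var/lib/glusterd/peers
```

Afterwards, `gluster peer status` on each node should show every peer exactly once.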

Regards,
Vijay