[ovirt-users] messed up gluster attempt
Sahina Bose
sabose at redhat.com
Fri Oct 28 04:46:10 UTC 2016
On Fri, Oct 28, 2016 at 8:14 AM, Thing <thing.thing at gmail.com> wrote:
> Hi,
>
> So I was trying to make a 3-way mirror and it reported a failure. Now I
> get these messages:
>
> On glusterp1,
>
> =========
> [root@glusterp1 ~]# gluster peer status
> Number of Peers: 1
>
> Hostname: 192.168.1.32
> Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
> State: Peer in Cluster (Connected)
> [root@glusterp1 ~]# gluster peer probe glusterp3.graywitch.co.nz
> peer probe: failed: glusterp3.graywitch.co.nz is either already part of
> another cluster or having volumes configured
> [root@glusterp1 ~]# gluster volume info
> No volumes present
> [root@glusterp1 ~]#
> =========
>
> on glusterp2,
>
> =========
> [root@glusterp2 ~]# systemctl status glusterd.service
> ● glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
>    Active: active (running) since Fri 2016-10-28 15:22:34 NZDT; 5min ago
>  Main PID: 16779 (glusterd)
>    CGroup: /system.slice/glusterd.service
>            └─16779 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
>
> Oct 28 15:22:32 glusterp2.graywitch.co.nz systemd[1]: Starting GlusterFS, a clustered file-system server...
> Oct 28 15:22:34 glusterp2.graywitch.co.nz systemd[1]: Started GlusterFS, a clustered file-system server.
> [root@glusterp2 ~]# gluster volume info
> No volumes present
> [root@glusterp2 ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: 192.168.1.33
> Uuid: 0fde5a5b-6254-4931-b704-40a88d4e89ce
> State: Sent and Received peer request (Connected)
>
> Hostname: 192.168.1.31
> Uuid: a29a93ee-e03a-46b0-a168-4d5e224d5f02
> State: Peer in Cluster (Connected)
> [root@glusterp2 ~]#
> ==========
>
> on glusterp3,
>
> ==========
> [root@glusterp3 glusterd]# systemctl status glusterd.service
> ● glusterd.service - GlusterFS, a clustered file-system server
>    Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
>    Active: active (running) since Fri 2016-10-28 15:26:40 NZDT; 1min 16s ago
>  Main PID: 7033 (glusterd)
>    CGroup: /system.slice/glusterd.service
>            └─7033 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
>
> Oct 28 15:26:37 glusterp3.graywitch.co.nz systemd[1]: Starting GlusterFS, a clustered file-system server...
> Oct 28 15:26:40 glusterp3.graywitch.co.nz systemd[1]: Started GlusterFS, a clustered file-system server.
> [root@glusterp3 glusterd]# gluster volume info
> No volumes present
> [root@glusterp3 glusterd]# gluster peer probe glusterp1.graywitch.co.nz
> peer probe: failed: glusterp1.graywitch.co.nz is either already part of
> another cluster or having volumes configured
> [root@glusterp3 glusterd]# gluster volume info
> No volumes present
> [root@glusterp3 glusterd]# gluster peer status
> Number of Peers: 1
>
> Hostname: glusterp2.graywitch.co.nz
> Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
> State: Sent and Received peer request (Connected)
> [root@glusterp3 glusterd]#
> ===========
>
> How do I clean this mess up?
>
I'm assuming you don't have any data in these volumes - in which case you
can clean up the entire setup and start over:
On all three nodes, stop the glusterd service (systemctl stop glusterd),
remove the contents under /var/lib/glusterd/vols and /var/lib/glusterd/peers,
and restart glusterd.
You can then create your cluster again. If you're reusing brick directories
from the previous run, make sure to clean those up as well.
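The cleanup above might look like this as a script (a sketch only: the brick path /data/brick1 is a hypothetical example, and the commented setfattr/rm lines for brick reuse reflect the usual way stale GlusterFS brick metadata is cleared, which the reply does not spell out):

```shell
#!/bin/sh
# Run on ALL THREE nodes. WARNING: wipes all volume and peer configuration.
set -e

systemctl stop glusterd

# Remove volume and peer state; leave /var/lib/glusterd/glusterd.info
# (the node's UUID) and the rest of /var/lib/glusterd intact.
rm -rf /var/lib/glusterd/vols/* /var/lib/glusterd/peers/*

systemctl start glusterd

# Only if reusing a brick directory from the failed attempt (path is a
# hypothetical example), clear its contents and gluster metadata first:
# rm -rf /data/brick1/.glusterfs
# setfattr -x trusted.glusterfs.volume-id /data/brick1
# setfattr -x trusted.gfid /data/brick1
```

After glusterd is back up on all nodes, re-probe the peers from one node (gluster peer probe <hostname>) and recreate the volume.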
>
> thanks
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>