Hello Rudi,
Removing a brick from a replica 3 volume means you are reducing the
replica count from 3 to 2. You are seeing the first error because, when you
reduce the replica count, there is no data to migrate: the same data is
already present on the other two replicas. All you need to do is run
'gluster volume remove-brick data replica 2 srv1:/gluster/data/brick1 force',
which removes the brick from the volume.
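For example, from any connected node, a rough sketch (assuming srv1 really is
the brick and peer you want gone; please verify the exact syntax against your
gluster version before running it):

# drop the brick and reduce the replica count to 2
gluster volume remove-brick data replica 2 srv1:/gluster/data/brick1 force
# confirm the volume now shows "Number of Bricks: 1 x 2 = 2"
gluster volume info data
# optionally remove the dead peer from the pool
# (you may need to append 'force' since it shows as Disconnected)
gluster peer detach srv1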
Hope this helps !!!
Thanks
kasturi
On Wed, Nov 15, 2017 at 5:13 PM, Rudi Ahlers <rudiahlers(a)gmail.com> wrote:
Hi,
I am trying to remove a brick from a server which is no longer part of
the gluster pool, but I keep running into errors for which I cannot find
answers on Google.
[root@virt2 ~]# gluster peer status
Number of Peers: 3

Hostname: srv1
Uuid: 2bed7e51-430f-49f5-afbc-06f8cec9baeb
State: Peer in Cluster (Disconnected)

Hostname: srv3
Uuid: 0e78793c-deca-4e3b-a36f-2333c8f91825
State: Peer in Cluster (Connected)

Hostname: srv4
Uuid: 1a6eedc6-59eb-4329-b091-2b9bc6f0834f
State: Peer in Cluster (Connected)
[root@virt2 ~]#
[root@virt2 ~]# gluster volume info data
Volume Name: data
Type: Replicate
Volume ID: d09e4534-8bc0-4b30-be89-bc1ec2b439c7
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: srv1:/gluster/data/brick1
Brick2: srv2:/gluster/data/brick1
Brick3: srv3:/gluster/data/brick1
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 512MB
[root@virt2 ~]# gluster volume remove-brick data replica 2 srv1:/gluster/data/brick1 start
volume remove-brick start: failed: Migration of data is not needed when reducing replica count. Use the 'force' option
[root@virt2 ~]# gluster volume remove-brick data replica 2 srv1:/gluster/data/brick1 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Brick srv1:/gluster/data/brick1 is not decommissioned. Use start or force option
The server virt1 is not part of the cluster anymore.
--
Kind Regards
Rudi Ahlers
Website:
http://www.rudiahlers.co.za