Hello
I hope you plan to add another brick or arbiter, as you are now prone
to split-brain and
other issues.
Yes, I will add another one, but I think this is not a problem. I've set
cluster.server-quorum-ratio to 51% to avoid the split-brain problem. Of course I know I
only have a failure tolerance of one.
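For reference, the quorum setting mentioned above is applied cluster-wide roughly like this (a sketch; the `all` scope and enabling server-side quorum first are assumptions about how the setting was configured):

```shell
# Enable server-side quorum so bricks are stopped when too few peers are up
# (assumed to be set alongside the ratio; adjust to your cluster)
gluster volume set all cluster.server-quorum-type server
# Require more than half of the trusted pool to be reachable before bricks serve writes
gluster volume set all cluster.server-quorum-ratio 51%
```

Note that with an even number of nodes, server quorum alone cannot fully rule out split-brain, which is why an arbiter or extra node is usually recommended.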
I've solved the problem by removing the brick:
gluster volume remove-brick data replica 3 kvm320.durchhalten.intern:/gluster_bricks/data/
force
Remove-brick force will not migrate files from the removed bricks, so they will no longer
be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
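After a remove-brick force it is worth confirming the volume's new brick layout before doing anything else; a quick check might look like this (volume name taken from the thread):

```shell
# Confirm the removed brick is gone and the replica count is what you expect
gluster volume info data
# Check that the remaining bricks and their self-heal daemons are online
gluster volume status data
```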
[root@kvm380 ~]# gluster volume heal data info summary
Brick kvm10:/gluster_bricks/data
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick kvm360.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
Brick kvm380.durchhalten.intern:/gluster_bricks/data
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0
After this, the heal-pending count looked OK to me.
Then I removed all files from this node and added it back again.
Now everything is fine.
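The remove/wipe/re-add cycle described above would look something like the following (hostname and brick path taken from the thread; the wipe-and-recreate step and the target replica count are assumptions about how the brick was reset):

```shell
# Wipe the old brick directory, including Gluster's internal metadata and xattrs
# (assumed reset procedure before re-adding the brick)
rm -rf /gluster_bricks/data
mkdir -p /gluster_bricks/data

# Re-add the brick; adjust the replica count to your target layout
gluster volume add-brick data replica 4 kvm320.durchhalten.intern:/gluster_bricks/data

# Trigger a full heal so the empty brick is repopulated from the other replicas
gluster volume heal data full
gluster volume heal data info summary
```

Re-using a brick path without wiping it usually fails with a "brick is already part of a volume" error, since the old trusted.glusterfs.volume-id xattr is still present.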
Best Regards
Stefan