Hello,
Yesterday I had to remove the brick of my first server (an HCI setup with 3 servers) for
maintenance and to recover its hard disks.
There are 3 servers with 4 disks per server in RAID 5, and 1 brick per server.
I ran:
gluster volume remove-brick datassd replica 2
ovnode1s.telecom.lan:/gluster_bricks/datassd/datassd force
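(If I understand it right, gluster volume info datassd should now report the volume as
replica 2, i.e. Number of Bricks: 1 x 2 = 2; please correct me if that expectation is wrong.)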
After removing the brick, I had 8 unsynced entries; this morning there are 6.
What should I do to resolve these unsynced entries? (I have listed the commands I was
thinking of trying after the heal info output below.)
[root@ovnode2 ~]# gluster volume status
Status of volume: datassd
Gluster process                                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ovnode2s.telecom.lan:/gluster_bricks/datassd/datassd  49152     0          Y       2431
Brick ovnode3s.telecom.lan:/gluster_bricks/datassd/datassd  49152     0          Y       2379
Self-heal Daemon on localhost                               N/A       N/A        Y       2442
Self-heal Daemon on ovnode3s.telecom.lan                    N/A       N/A        Y       2390

Task Status of Volume datassd
------------------------------------------------------------------------------
[root@ovnode2 ~]# gluster volume heal datassd info
Brick ovnode2s.telecom.lan:/gluster_bricks/datassd/datassd
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.7
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.150
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.241
/.shard/21907c8f-abe2-4501-b597-d1c2f9a0fa92.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.155
Status: Connected
Number of entries: 6
Brick ovnode3s.telecom.lan:/gluster_bricks/datassd/datassd
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.7
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.150
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.241
/.shard/21907c8f-abe2-4501-b597-d1c2f9a0fa92.18
/.shard/8d397a25-66d0-4f51-9358-5e1f70048103.155
Status: Connected
Number of entries: 6
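In case it helps, here is what I was thinking of trying, assuming the standard
self-heal procedure for a replica volume (please correct me if this is the wrong approach):

# trigger an index heal of the pending entries
gluster volume heal datassd
# check whether any of the remaining entries are in split-brain
gluster volume heal datassd info split-brain
# if entries still remain after that, launch a full heal
gluster volume heal datassd full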
Thank you