Hi Adrian,
You have several options:
A) If you have space on another Gluster volume (or volumes) or on NFS-based storage, you
can live-migrate all VMs. Once that is done, the simplest approach is to stop and remove the
storage domain (from the UI) and the Gluster volume that corresponds to the problematic
brick. Once gone, you can remove the old host's entry in oVirt and add the newly built one.
Then you can recreate your volume and migrate the data back.
B) If you don't have space, you have to use a riskier approach (usually it
shouldn't be risky, but I had bad experiences with Gluster v3):
- New server has the same IP and hostname:
Use the command line and run:
'gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit'
Replace VOLNAME with your volume name.
A more practical example would be:
'gluster volume reset-brick data ovirt3:/gluster_bricks/data/brick ovirt3:/gluster_bricks/data/brick commit'
If it refuses, then you have to clean up '/gluster_bricks/data' (which should be
empty).
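A minimal cleanup sketch, using the paths from the example above (double-check that the
brick really holds no data you need before deleting anything):
  # verify what is left on the brick
  ls -la /gluster_bricks/data/brick
  # remove the leftover directory together with the gluster xattrs on it
  # that make reset-brick refuse the path, then recreate it empty
  rm -rf /gluster_bricks/data/brick
  mkdir -p /gluster_bricks/data/brick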
Also check if the new peer has been probed via 'gluster peer status'. Check that the
firewall is allowing gluster communication (you can compare it to the firewall on another
gluster host).
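For example (assuming firewalld, which ships a predefined 'glusterfs' service; adjust if
you use plain iptables):
  gluster peer status
  # compare with the output on a healthy gluster host
  firewall-cmd --list-all
  # open the gluster ports if they are missing
  firewall-cmd --permanent --add-service=glusterfs
  firewall-cmd --reload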
The automatic healing will kick in within 10 minutes (if the reset succeeds) and will stress
the other two replicas, so pick your time properly.
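You can watch the heal progress, or trigger it yourself instead of waiting, with 'data' as
the example volume name:
  gluster volume heal data info
  # optionally start a full heal right away
  gluster volume heal data full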
Note: I'm not recommending that you use the 'force' option in the previous
command ... for now :)
- The new server has a different IP/hostname:
Instead of 'reset-brick' you can use 'replace-brick':
It should be like this:
gluster volume replace-brick data old-server:/path/to/brick new-server:/new/path/to/brick commit force
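Keep in mind that the new server must already be part of the trusted pool before
replace-brick will work. A short sketch, with 'new-server' as a placeholder hostname:
  # run from any existing cluster member
  gluster peer probe new-server
  gluster peer status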
In both cases check the status via:
gluster volume info VOLNAME
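For example, with the 'data' volume from above:
  gluster volume info data
  # 'volume status' also shows whether each brick process is online
  gluster volume status data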
If your cluster is in production, I really recommend the first option, as it is less
risky and the chance of unplanned downtime will be minimal.
The 'reset-brick' error in your previous e-mail shows that one of the servers is not
connected. Check the peer status on all servers; if you see fewer peers than expected,
check for network and/or firewall issues.
On the new node, check that glusterd is enabled and running.
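For example, on a systemd-based host:
  systemctl status glusterd
  # enable and start it in one go if it is down
  systemctl enable --now glusterd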
In order to debug, you should provide more info, like 'gluster volume info' and
the peer status from each node.
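Something like this on every node, so the outputs can be compared:
  gluster volume info
  gluster peer status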
Best Regards,
Strahil Nikolov
On Jun 10, 2019 20:10, Adrian Quintero <adrianquintero(a)gmail.com> wrote:
Can you let me know how to fix the gluster and the missing brick?
I tried removing it by going to "Storage > Volumes > vmstore > Bricks" and
selecting the brick.
However, it is showing an unknown status (which is expected because the server was
completely wiped), so if I try to "remove", "replace brick" or
"reset brick" it won't work.
If I do remove brick, I get: Incorrect bricks selected for removal in Distributed Replicate
volume. Either all the selected bricks should be from the same sub volume or one brick
each for every sub volume!
If I try "replace brick" I cant because I dont have another server with extra
bricks/disks
And if I try "reset brick": Error while executing action Start Gluster Volume
Reset Brick: Volume reset brick commit force failed: rc=-1 out=() err=['Host
myhost1_mydomain_com not connected']
Are you suggesting to try and fix the gluster using command line?
Note that I can't "peer detach" the server, so if I force the removal of the
bricks, would I need to force a downgrade to replica 2 instead of 3? What would happen to
oVirt, as it only supports replica 3?
Thanks again.
On Mon, Jun 10, 2019 at 12:52 PM Strahil <hunter86_bg(a)yahoo.com> wrote:
>
> Hi Adrian,
> Did you fix the issue with the gluster and the missing brick?
> If yes, try to set the 'old' host in maintenance an