I asked if the OS survived, as then you would need to:
- Recreate the LVM structure (it's the same on all 3 nodes)
- Create the filesystem with isize=512
- Permanently mount the file system
- Fix the volume in order to heal
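For steps 2 and 3 above, a minimal sketch, assuming a hypothetical thin LV gluster_vg/gluster_lv and a brick mount point of /gluster_bricks/data (match whatever layout the other two nodes use):

#Hypothetical device and brick path - copy the layout from a healthy node
mkfs.xfs -f -i size=512 /dev/gluster_vg/gluster_lv
mkdir -p /gluster_bricks/data
#Make the mount permanent via /etc/fstab (see the mount options further below), then mount it
mount /dev/gluster_vg/gluster_lv /gluster_bricks/data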
I think that you might have fixed the first three, and the simplest way from that point is to use 'remove-brick' to switch into replica 2 mode:
#Get the syntax
gluster volume remove-brick help

#The failed brick won't have a pid
gluster volume status <VOLUME>

#Actual replace:
#Triple check the removed brick name!!!
gluster volume remove-brick <VOLUME> replica 2 <BRICK (HOST+MOUNT POINT)> commit force

#Now add the new brick (which might be the same name as the old one):
gluster volume add-brick <VOLUME> replica 3 <BRICK (HOST+MOUNT POINT)>

#Last but still very important
gluster volume heal <VOLUME> full

#Verify
gluster volume list
gluster volume status
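To keep an eye on the heal afterwards, the standard heal-status command works (nothing environment-specific here):

#Pending entries should drop to zero on all bricks as the heal progresses
gluster volume heal <VOLUME> info
#Recent Gluster releases also accept 'gluster volume heal <VOLUME> info summary' for a condensed view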
You should be able to do everything from the UI (I personally never did it from there).
If you need to make the bricks manually, take a look at:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5...
Disk alignment is very important on all layers.
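As a rough sketch of the manual brick creation that document walks through, with alignment values and names that are only placeholders (the real dataalignment/chunksize values depend on your RAID5 stripe unit and number of data disks):

#Placeholders: /dev/sdb = RAID5 device, 1280K = full stripe width
pvcreate --dataalignment 1280K /dev/sdb
vgcreate gluster_vg /dev/sdb
#Thin pool sized as an example only; chunk size again matches the stripe width
lvcreate --thin gluster_vg/gluster_thinpool --size 1T --chunksize 1280K --poolmetadatasize 16G --zero n
lvcreate --thin --name gluster_lv --virtualsize 1T gluster_vg/gluster_thinpool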
For the mount options you can use:
noatime,context="system_u:object_r:glusterd_brick_t:s0"
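Purely as an illustration, the corresponding /etc/fstab line might look like this (device and mount point are placeholders; mounting by UUID works just as well):

/dev/gluster_vg/gluster_lv  /gluster_bricks/data  xfs  rw,noatime,context="system_u:object_r:glusterd_brick_t:s0"  0 0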
Best Regards,
Strahil Nikolov
On Sat, Dec 10, 2022 at 21:49, Clint Boggio <clint.boggio(a)gmail.com> wrote:

Good Day Strahil;
The OS did survive. The OS was on a RAID1 array and the gluster bricks are on a RAID5 array. Since the time I requested guidance I did a whole lot of reading and learning. I found several yaml files in /etc/ansible that contained the configuration of the gluster bricks and the LVM volumes contained within them. I also found that LVM configurations can be restored from /etc/lvm/backup and /etc/lvm/archive. All very helpful, I must say.
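The restore attempt went roughly along these lines (the VG name here is only a placeholder):

#List the archived metadata versions for the volume group
vgcfgrestore --list gluster_vg
#Restore a chosen backup file (restoring a VG that contains a thin pool needs --force and carries risk)
vgcfgrestore -f /etc/lvm/backup/gluster_vg gluster_vg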
I re-created the RAID array and did a light initialization to wipe the array. Then I attempted to restore the backups from /etc/lvm/backup and /etc/lvm/archive. I was unable to get the thinpool to activate, hitting the "manual repair required" error each time. As there was not enough unused space on the system to cycle out the metadata LV, I was unable to proceed.
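For context, the repair that error message points at is normally attempted like this; the names are placeholders, and it needs enough free space in the VG for a replacement metadata LV, which is exactly the wall I hit:

#The thin pool has to be inactive before a repair can be attempted
lvchange -an gluster_vg/gluster_thinpool
#Builds a new metadata LV from the old one; requires free extents in the VG
lvconvert --repair gluster_vg/gluster_thinpool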
I then deleted the LVM configuration and re-created the LVM volumes from the /etc/ansible
yaml files I found. I made them identical to the other two nodes that were functioning in
the cluster. I have them successfully mounted via fstab with the new UUIDs that were generated when I recreated the volumes.
Based on the documents and articles I have read and studied these past few weeks, at this
point I believe that I need to re-apply the appropriate gluster-related attributes to the
correct level of the gluster mounts and attempt to start the gluster service. I have not
done this yet because I don't want to cause another outage and I want to be sure that
what I am doing will result in the afflicted gluster volumes healing and synchronising with
the healthy nodes.
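As a concrete example of the kind of attribute inspection I mean (the brick path below is only a placeholder):

#Run on a healthy node's brick root and compare with the refurbished brick
#trusted.glusterfs.volume-id is the attribute gluster expects to find on the brick directory
getfattr -d -m . -e hex /gluster_bricks/<brick_directory>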
1. What attributes, and at what level, should I apply them to the refurbished node?
2. What problems could I encounter when I try to start the gluster service after making the changes?
3. Should I remove the new node from the gluster cluster and then re-add it, or will it heal in its current refurbished state?
I appreciate you taking time to check on the status of this situation and thank you for
any help or insight you can provide.