Good Day Strahil,
The OS did survive. The OS was on a RAID1 array and the gluster bricks are on a RAID5
array. Since the time I requested guidance I have done a great deal of reading and learning.
I found several YAML files in /etc/ansible that contain the configuration of the gluster
bricks and the LVM volumes within them. I also found that LVM configurations can be
restored from /etc/lvm/backup and /etc/lvm/archive. All very helpful, I must say.
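For reference, the restore path I was trying to follow was roughly the following (the
volume group name, PV UUID and device are placeholders for the real ones on this node):

    # list the LVM metadata backups available for the brick volume group
    ls /etc/lvm/backup /etc/lvm/archive

    # re-create the wiped PV with its original UUID, taken from the backup file
    pvcreate --uuid <original-pv-uuid> --restorefile /etc/lvm/backup/gluster_vg /dev/md126

    # restore the VG metadata; --force is needed because the VG contains a thin pool
    vgcfgrestore --force -f /etc/lvm/backup/gluster_vg gluster_vg
    vgchange -ay gluster_vg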
I re-created the RAID array and did a light initialization to wipe it. I then attempted to
restore the backups from /etc/lvm/backup and /etc/lvm/archive, but I was unable to activate
the thin pool; it failed with a "manual repair required" error each time. As there was not
enough unused space in the volume group to swap out the metadata LV, I was unable to
proceed.
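As I understand it, the repair being asked for is the thin-pool metadata repair below,
which needs enough free extents in the VG to build a replacement metadata LV, and that is
exactly the space I did not have (VG and pool names are placeholders):

    # swap in a freshly repaired metadata LV; requires free space in the VG
    lvconvert --repair gluster_vg/gluster_thinpool

    # after a successful repair the old metadata is kept as <pool>_meta0
    # and can be removed to reclaim its space
    lvremove gluster_vg/gluster_thinpool_meta0

    # then try activating the pool again
    lvchange -ay gluster_vg/gluster_thinpool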
I then deleted the LVM configuration and re-created the LVM volumes from the /etc/ansible
YAML files I found, making them identical to the other two nodes that are functioning in
the cluster. They are now successfully mounted via fstab using the new UUIDs that were
generated when I re-created the volumes.
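The mounts follow the same pattern as on the working nodes, along these lines (device,
mount point, options and the UUID shown are placeholders copied from the healthy nodes):

    # read the new filesystem UUID from the re-created logical volume
    blkid /dev/mapper/gluster_vg-gluster_lv_data

    # matching fstab entry
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /gluster_bricks/data  xfs  inode64,noatime  0 0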
Based on the documents and articles I have read and studied over the past few weeks, I
believe the next step is to re-apply the appropriate gluster-related extended attributes
at the correct level of the gluster mounts and then attempt to start the gluster service.
I have not done this yet because I don't want to cause another outage, and I want to be
sure that what I am doing will result in the afflicted gluster volumes healing and
synchronising with the healthy nodes.
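To make that concrete, the attribute work I have in mind (question 1 below) is to read the
volume-id from a healthy brick and apply the same value to the root of the rebuilt brick,
something like this (volume name and brick paths are placeholders):

    # on a healthy node: dump the extended attributes of the brick root
    getfattr -d -m . -e hex /gluster_bricks/data/data

    # on the rebuilt node: apply the same volume-id to the empty brick root
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-healthy-brick> \
        /gluster_bricks/data/data

    # then start the gluster service and watch the heal status
    systemctl start glusterd
    gluster volume heal data info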
1. What attributes should I apply to the refurbished node, and at what level?
2. What problems could I encounter when I try to start the gluster service after making
the changes?
3. Should I remove the new node from the gluster cluster and then re-add it, or will it
heal in its current refurbished state? (A rough sketch of what I mean follows below.)
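For question 3, the remove/re-add sequence I understand would be involved looks roughly
like this, assuming a replica 3 volume (volume, host and brick path are placeholders); the
alternative I have seen mentioned is reset-brick against the same brick path:

    # drop the dead brick and the node (replica count reduced from 3 to 2)
    gluster volume remove-brick data replica 2 node3:/gluster_bricks/data/data force
    gluster peer detach node3

    # re-add the node and the rebuilt brick, then trigger a full heal
    gluster peer probe node3
    gluster volume add-brick data replica 3 node3:/gluster_bricks/data/data
    gluster volume heal data full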
I appreciate you taking the time to check on the status of this situation, and thank you
for any help or insight you can provide.