Thank you. And yes, I agree, this needs to occur in a maintenance window and be done very carefully. :)
My only problem with this method is that I need to *replace* disks in the two servers.
I don't have any empty hard drive bays, so I will effectively need to put a host into maintenance mode, remove the drives, and put new drives in.
I will NOT be touching the OS drives, however, as those are on their own separate RAID array.
So, essentially, it will need to look something like this:
- Put the cluster into global maintenance mode
- Put 1 host into full maintenance mode / deactivate it
- Stop gluster
- Remove the storage
- Add the new storage & reconfigure
- Start gluster
- Re-add the host to the cluster
Adding the new storage & reconfiguring is the head-scratcher for me, given that I don't have room for the old and new hard drives at the same time.
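For what it's worth, the gluster-facing part of the per-host sequence above might look roughly like this on the command line. This is only a sketch: the volume name "data" is a placeholder, the oVirt maintenance steps themselves happen in the engine, and the commands are echoed rather than executed so the sequence can be reviewed first.

```shell
#!/bin/sh
# Placeholder volume name -- substitute your real gluster volume.
VOL=data

# run() echoes each command instead of executing it, so this script is
# safe to review; remove the echo to run the sequence for real.
run() { echo "+ $*"; }

# Before touching the host, confirm there are no pending heals
run gluster volume heal "$VOL" info

# With the host already in maintenance mode in oVirt, stop the gluster
# management daemon on this host
run systemctl stop glusterd

# Brick processes are not always stopped by glusterd; stop them as well
run pkill glusterfsd

# ... swap the physical drives and recreate the brick filesystem, then:
run systemctl start glusterd
```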
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Saturday, July 10th, 2021 at 5:55 AM, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
Hi David,
any storage operation can cause unexpected situations, so always plan your activities for low-traffic hours and test them in your test environment in advance.
I think it's easier if you (via the command line):
- verify no heals are pending. Not a single one.
- set the host to maintenance over ovirt
- remove the third node from gluster volumes (remove-brick replica 2)
- umount the bricks on the third node
- recreate a smaller LV with '-i maxpct=90 size=512' and mount it with the same options as the rest of the nodes. Usually I use 'noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0'
- add this new brick (add-brick replica 3 arbiter 1) to the volume
- wait for the heals to finish
Then repeat again for each volume.
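The per-volume procedure above could be sketched roughly as follows. Everything here is an assumption for illustration: the volume name "data", the third node's hostname, the brick path, and the LV device are all placeholders, and the mkfs.xfs inode suboptions (size=512, maxpct=90) are my reading of the options quoted above -- verify them against your existing bricks before use. Commands are echoed, not executed.

```shell
#!/bin/sh
# Placeholders -- substitute your real volume, host, brick path, and LV.
VOL=data
HOST=arbiter.example.com
BRICK=/gluster_bricks/data/brick

# run() echoes each command instead of executing it; remove the echo
# to run the sequence for real.
run() { echo "+ $*"; }

# Confirm no heals are pending before anything else
run gluster volume heal "$VOL" info

# Drop the third node's brick, shrinking the volume to replica 2
run gluster volume remove-brick "$VOL" replica 2 "$HOST:$BRICK" force

# On the third node: unmount the brick and recreate a smaller filesystem
run umount "$BRICK"
run mkfs.xfs -f -i size=512,maxpct=90 /dev/gluster_vg/arbiter_lv
run mount -o noatime,inode64,context=system_u:object_r:glusterd_brick_t:s0 /dev/gluster_vg/arbiter_lv "$BRICK"

# Re-add the brick as the arbiter, then wait for heals to finish
run gluster volume add-brick "$VOL" replica 3 arbiter 1 "$HOST:$BRICK"
run gluster volume heal "$VOL" info
```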
Adding the new disks should be done later.
Best Regards,
Strahil Nikolov
On Sat, Jul 10, 2021 at 3:15, David White via Users
<users@ovirt.org> wrote: