2a) remove-brick replica 2
-- if I understand correctly, this
will basically just reconfigure the existing volume to replicate between the 2 bricks, and
not all 3 ... is this correct?
Yep, you are kicking out the 3rd node and the volume is converted to replica 2.
Most probably the command would be gluster volume remove-brick vmstore replica 2
host3:/path/to/brick force
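Once that returns, you can double-check the new layout (a minimal sketch, assuming the volume really is named vmstore):

# the volume should now show 1 x 2 = 2 bricks and type Replicate
gluster volume info vmstore
# and no bricks should be missing or offline
gluster volume status vmstore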
2b) add-brick replica 3 arbiter 1
-- If I understand correctly,
this will reconfigure the volume (again), adding the 3rd server's storage back to the
Gluster volume, but only as an arbiter node, correct?
Yes, I would prefer to create a fresh new LV. Don't forget to raise the inode count, as this one will be an arbiter brick (see the previous e-mail).
Once you add it via gluster volume add-brick vmstore replica 3 arbiter 1
host3:/path/to/new/brick, you will have to wait for all heals to complete.
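A rough sketch of that brick preparation, assuming LVM + XFS; the VG/LV names, size, mount point and maxpct value below are only placeholders/examples, not fixed recommendations:

# fresh LV for the arbiter brick
lvcreate -L 20G -n arbiter_lv gluster_vg
# arbiter bricks hold only metadata, so give the filesystem more inode headroom
mkfs.xfs -i size=512,maxpct=75 /dev/gluster_vg/arbiter_lv
mkdir -p /path/to/new
mount /dev/gluster_vg/arbiter_lv /path/to/new
mkdir /path/to/new/brick
# re-add the 3rd node as arbiter, then wait until no heal entries are pending
gluster volume add-brick vmstore replica 3 arbiter 1 host3:/path/to/new/brick
gluster volume heal vmstore info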
3) Now, with everything healthy, the volume is a Replica 2 /
Arbiter 1... and I can stop gluster on each of the 2 servers getting the storage
upgrade, rebuild the RAID on the new storage, reboot, and let gluster heal itself before
moving on to the next server.
If you rebuild the RAID, you are destroying the brick, so after mounting it back you will
need to reset-brick. If that doesn't work for some reason, you can always remove-brick
replica 1 host1:/path/to/brick arbiter:/path/to/brick and re-add them with add-brick
replica 3 arbiter 1.
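If the hostname and brick path stay the same, the reset-brick route would roughly look like this (placeholders again, one rebuilt server at a time):

# take the old brick out of service before touching the storage
gluster volume reset-brick vmstore host1:/path/to/brick start
# ... rebuild the RAID, recreate the filesystem, mount the brick back ...
# then reuse the same path for the new, empty brick
gluster volume reset-brick vmstore host1:/path/to/brick host1:/path/to/brick commit force
# and let self-heal repopulate it before moving to the next server
gluster volume heal vmstore info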
I had some paused VMs after RAID reshaping (spinning disks) during the healing, but my lab
is running on workstations. So do it in the least busy hours, and any backups should have
completed before the reconfiguration, not exactly during the healing ;)
Best Regards,
Strahil Nikolov