Hi,
I am interested in these steps too, for a clean and straightforward procedure.
Although this plan looks pretty good, I am still wondering:
Step 4
Backup all gluster config files
- could you please let me know the exact location(s) of the files to be backed up?
/etc/glusterfs/
/var/lib/glusterd/
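Those two directories cover the daemon configuration and the peer/volume/brick state. A minimal backup sketch - the archive path is just an example and I haven't verified this list is exhaustive:

```shell
# Sketch only: archive both Gluster config locations before the reinstall.
#   /etc/glusterfs    - glusterd/daemon configuration
#   /var/lib/glusterd - peers, volumes, bricks state
# The destination path below is just an example.
BACKUP="/root/gluster-config-$(date +%F).tar.gz"
DIRS=""
for d in /etc/glusterfs /var/lib/glusterd; do
    [ -d "$d" ] && DIRS="$DIRS $d"
done
if [ -n "$DIRS" ]; then
    tar czPf "$BACKUP" $DIRS   # -P keeps absolute paths for easy restore
    RESULT="backed up:$DIRS -> $BACKUP"
else
    RESULT="no gluster config dirs found on this host"
fi
echo "$RESULT"
```

Restoring after the reinstall is the reverse: `tar xzPf "$BACKUP"` (with -P the archive already contains absolute paths).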
Step 6
Install glusterfs, restore the configs from step 4
- would the configs work with this version?
- would Gluster, in theory, get back to its previous balanced state?
It's like upgrading GlusterFS locally - you update the RPM, but the configs remain the
same. In my case, I'm already on Gluster v7 - so the change should go unnoticed. Even going
from v6 to v7 should not cause any issues, yet I would take the backups before
installing glusterfs at all.
The idea is to avoid removing and later re-adding the brick, which would lead to a lot of
healing.
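To confirm that keeping the brick in place worked, I'd check brick status and pending heals once glusterd is back - the volume name "data" below is just a placeholder:

```shell
# Sketch: after glusterd is back, confirm all bricks are online and the
# heal queue is empty before moving on to the next host.
# "data" is a placeholder volume name - substitute your own.
VOLUME="data"
if command -v gluster >/dev/null 2>&1; then
    gluster volume status "$VOLUME"              # every brick should be Online: Y
    gluster volume heal "$VOLUME" info summary   # pending entries should reach 0
    CHECKED="yes"
else
    CHECKED="no (gluster CLI not installed on this machine)"
fi
echo "heal check run: $CHECKED"
```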
Step 9
Deploy the new HE on a new Gluster Volume, using the backup/restore
procedure for HE.
- this assumes first creating a new volume based on some
additional new disks or LVs, right?
Yep, you want to keep the old volume, just in case you want to revert. If a revert
is necessary, you have 2 options:
1) If the new engine has managed to power up and has somehow changed the Storage Domain
version (that should happen only after the whole cluster is upgraded and the cluster
version is raised) -> you have to revert all snapshots, which can lead to short data loss
but a faster recovery time.
2) If the engine didn't power up at all, just kill the EL8 host (to ensure the new
engine is not up) and remove global maintenance on one of the old hosts - the old engine
will power up, and then you have to reinstall the host that was used for the EL8 fiasco :)
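For option 2, the commands would be roughly the following - assuming the standard hosted-engine CLI, and untested by me as a revert procedure:

```shell
# Sketch of revert option 2, run on one of the old EL7 hosts.
# Assumes the standard hosted-engine CLI; untested as a revert path.
if command -v hosted-engine >/dev/null 2>&1; then
    hosted-engine --set-maintenance --mode=none   # leave global maintenance
    hosted-engine --vm-status                     # watch the old engine come up
    ACTION="global maintenance lifted"
else
    ACTION="hosted-engine CLI not present - run this on an oVirt HA host"
fi
echo "$ACTION"
```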
I haven't done it yet, but I'm planning to do it.
As I haven't tested the following, I can't guarantee that it will work:
0. Gluster snapshots on all volumes
1. Set a node in maintenance
2. Create a full backup of the engine
3. Set global maintenance and power off the current engine
4. Backup all gluster config files
5. Reinstall the node that was set to maintenance (step 1)
6. Install glusterfs, restore the configs from step 4
7. Restart glusterd and check that all bricks are up
8. Wait for healing to end
9. Deploy the new HE on a new Gluster Volume, using the backup/restore procedure for HE
10. Add the other nodes from the oVirt cluster
11. Set EL7-based hosts to maintenance and power them off
12.Repeat steps 4-8 for the second host (step 11)
...
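The first few steps translate to roughly these commands - snapshot/volume names and file paths are placeholders, and the whole thing is a sketch, not a verified runbook:

```shell
# Rough sketch of steps 0-4 on the first node; names and paths are placeholders.
VOLUME="data"
SNAP="pre-44-upgrade-$(date +%F)"
run() {    # print-then-run helper: echoes each command, runs it only if the CLI exists
    echo "+ $*"
    command -v "$1" >/dev/null 2>&1 && "$@"
}
# 0. Gluster snapshot (repeat per volume; needs thin-provisioned LVM bricks)
run gluster snapshot create "$SNAP" "$VOLUME"
# 2. Full engine backup
run engine-backup --mode=backup --file=/root/engine-backup.tar.gz \
    --log=/root/engine-backup.log
# 3. Global maintenance before powering off the old engine
run hosted-engine --set-maintenance --mode=global
# 4. Gluster config backup
run tar czPf /root/gluster-config.tar.gz /etc/glusterfs /var/lib/glusterd
DONE="sketch finished"
echo "$DONE"
```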
In the end, you can raise the Cluster Level to 4.4 and enjoy...
Of course, you have the option to create a new setup and migrate the VMs one by one (when
downtime allows) from the 4.3 setup to the 4.4 setup.
Best Regards,
Strahil Nikolov