Hi,
Today I tested the steps. They worked, but I needed to add a few things.
0. Gluster snapshots on all volumes
I did not need that.
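If you do want the snapshots, a single volume can be snapshotted with something like
this (the snapshot and volume names are only examples; it needs thin-provisioned LVM
bricks, which a hyperconverged setup normally has):

   gluster snapshot create engine-pre-upgrade engine no-timestamp
   gluster snapshot list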
1. Set a node in maintenance
2. Create a full backup of the engine
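For reference, the backup is a plain engine-backup run on the engine VM; the file
paths are only examples:

   engine-backup --mode=backup --file=/root/engine-backup-4.3.tar.gz --log=/root/engine-backup.log
   # copy the resulting file off the engine VM, it is needed for the restore in step 9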
3. Set global maintenance and power off the current engine
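Roughly, from one of the hosted-engine hosts:

   hosted-engine --set-maintenance --mode=global
   hosted-engine --vm-shutdown    # cleanly powers off the current engine VM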
4. Backup all gluster config files
Back up /etc/glusterfs/, /var/lib/glusterd/, /etc/fstab and the directories in
/gluster_bricks.
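Something along these lines does the job (the backup host and paths are only examples):

   tar czf /root/gluster-config-$(hostname -s).tar.gz /etc/glusterfs /var/lib/glusterd /etc/fstab
   scp /root/gluster-config-$(hostname -s).tar.gz backuphost:/backup/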
5. Reinstall the node that was set to maintenance (step 1)
6. Install glusterfs, restore the configs from step 4
Modify /etc/fstab, create the directories in /gluster_bricks, and mount the bricks.
I had to remove the LVM filter so that the volume group used for the bricks gets
scanned again.
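A sketch of what that looks like; the VG name is an example, and I am assuming the
filter sits in /etc/lvm/lvm.conf as it usually does on oVirt Node:

   # relax or comment out the restrictive filter= line in /etc/lvm/lvm.conf, then:
   vgscan
   vgchange -ay gluster_vg_sdb    # activate the brick VG
   mount -a                       # mounts the bricks from the restored /etc/fstab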
7. Restart glusterd and check that all bricks are up
I had to force the volumes to start
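In commands, roughly:

   systemctl restart glusterd
   gluster volume status
   # the bricks stayed down until the volumes were force-started:
   for vol in $(gluster volume list); do gluster volume start "$vol" force; done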
8. Wait for healing to end
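The heal status can be watched per volume with something like:

   for vol in $(gluster volume list); do
     gluster volume heal "$vol" info
   done
   # "gluster volume heal <vol> info summary" gives a shorter overview on recent gluster versions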
9. Deploy the new HE on a new Gluster Volume, using the backup/restore procedure for HE
I could create a new gluster volume in the existing thin pool. During the deployment I
specified the gluster volume
station1:/engine-new
and used the mount option backup-volfile-servers=station2:station3.
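Roughly what that can look like; all the VG/LV/brick names and the size below are
examples from my layout and will differ per setup, so treat this as a sketch rather
than exact commands:

   # on every node: a new thin LV in the existing thin pool, formatted and mounted as a brick
   lvcreate -V 100G -T gluster_vg_sdb/gluster_thinpool_sdb -n gluster_lv_engine_new
   mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_engine_new
   mkdir -p /gluster_bricks/engine-new
   mount /dev/gluster_vg_sdb/gluster_lv_engine_new /gluster_bricks/engine-new
   mkdir -p /gluster_bricks/engine-new/engine-new

   # on one node: create and start the replica 3 volume
   gluster volume create engine-new replica 3 \
     station1:/gluster_bricks/engine-new/engine-new \
     station2:/gluster_bricks/engine-new/engine-new \
     station3:/gluster_bricks/engine-new/engine-new
   gluster volume set engine-new group virt           # the usual oVirt volume settings
   gluster volume set engine-new storage.owner-uid 36
   gluster volume set engine-new storage.owner-gid 36
   gluster volume start engine-new

   # restore-deploy of the hosted engine, answering the storage questions with the new volume
   hosted-engine --deploy --restore-from-file=/root/engine-backup-4.3.tar.gz
   # storage connection:  station1:/engine-new
   # mount options:       backup-volfile-servers=station2:station3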
10. Add the other nodes from the oVirt cluster
Actually I did not need to add the nodes; they were already available in the engine.
11. Set EL7-based hosts to maintenance and power off
My setup is based on oVirt Node. I put the first node into maintenance, created a
backup of the gluster configuration and manually installed oVirt Node 4.4. I then
restored the gluster setup and waited for the healing to finish.
Next I copied the ssh key from a different node (sketched after this step) and
reinstalled the node via the web interface, manually setting the hosted-engine
action to "Deploy" during the reinstallation.
I repeated these steps for all hosts.
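For the ssh key I mean root's authorized_keys, so the engine can reach the freshly
installed node again; roughly (the host name is an example):

   mkdir -p -m 700 /root/.ssh
   scp station2:/root/.ssh/authorized_keys /root/.ssh/authorized_keys
   chmod 600 /root/.ssh/authorized_keys
   restorecon -Rv /root/.ssh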
12. I put the old engine gluster storage domain into maintenance, detached it and
removed the domain. Then I could remove the old gluster volume as well.
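Removing the old volume is then just (assuming the old volume was called "engine"):

   gluster volume stop engine
   gluster volume delete engine
   # optionally reclaim the space on each node afterwards, e.g.:
   # umount /gluster_bricks/engine && lvremove gluster_vg_sdb/gluster_lv_engine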
In the end I was running 4.4 and migrations work.