oVirt: Migration recommendation to move from CentOS Stream 8 to 9

I have inherited the admin duty to maintain a small oVirt cluster featuring 2 nodes running oVirt 4.5.4 based on CentOS Stream 8 (using the official Node ISOs), running on physical machines, including a hosted engine. Sadly, the in-house documentation is a bit on the "light side" and the official oVirt documentation can be overwhelming at times.

As CentOS Stream 8 is no longer supported, we would like to migrate the nodes either to CentOS Stream 9 using the Node NG ISOs or to Rocky Linux 9 - but I guess using the official ISOs seems like the smoothest path. Still, I would like to know what the general steps are to migrate without setting up an entirely fresh cluster. Is it possible to migrate by setting up a CentOS Stream 9 node in the existing cluster and deploying a new engine from there? Is it possible to migrate VMs without downtime in this process? Ideally, does anyone have some kind of step-by-step guide to get from CentOS Stream 8 to 9 using the oVirt Node ISOs? Or is it possible to upgrade the existing nodes to CentOS Stream 9 and deploy a new hosted engine from there without creating an entirely new node?

And yes, I have already searched this list for answers (https://lists.ovirt.org/archives/list/users@ovirt.org/thread/BE5BS4MJSJ4F4ME...) but that thread lacks some of the mentioned details.

I have just finished a switch from oVirt 4.3 to oVirt 4.5, and from CentOS 7 to Rocky 9. The main gotcha is that the latest official oVirt release is broken; you should use the nightly build. I did it without downtime, but chose to reinstall the engine on a brand-new server and restored the database. For each host, I put it in maintenance mode, reinstalled the OS on it, and then re-added it in oVirt.
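For reference, the engine move described above roughly maps to the standard engine-backup workflow. This is only a sketch: the file paths are placeholders, and the exact flags should be checked against the engine-backup man page for your oVirt version.

```shell
# On the old engine: take a full backup (database, config, certificates).
engine-backup --mode=backup --scope=all \
    --file=/tmp/engine-backup.tar.gz --log=/tmp/engine-backup.log

# Copy the backup to the freshly installed engine server, then restore.
# For a standalone engine on a clean OS (provisions fresh databases):
engine-backup --mode=restore --file=/tmp/engine-backup.tar.gz \
    --log=/tmp/engine-restore.log \
    --provision-all-databases --restore-permissions
engine-setup

# For a hosted engine, the restore is instead folded into deployment:
# hosted-engine --deploy --restore-from-file=/tmp/engine-backup.tar.gz
```

The hosted-engine variant runs on the new host and recreates the engine VM from the backup, which is the path relevant to the setup described in the original question.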
On 20 Nov 2024 at 18:08, dadamysus--- via Users <users@ovirt.org> wrote:
> I have inherited the admin duty to maintain a small oVirt cluster featuring 2 nodes running oVirt 4.5.4 based on CentOS Stream 8 (using the official Node ISOs), running on physical machines, including a hosted engine. [...] Is it possible to upgrade the existing nodes to CentOS Stream 9 and deploy a new hosted engine from there without creating an entirely new node?

Hey Fabrice! Thanks for getting back! I've got a few more questions.

> I have just finished a switch from oVirt 4.3 to oVirt 4.5, and from CentOS 7 to Rocky 9. The main gotcha is that the latest official oVirt release is broken; you should use the nightly build.

You mean 4.5.5? In what way is it broken?

> I did it without downtime, but chose to reinstall the engine on a brand-new server and restored the database. For each host, I put it in maintenance mode, reinstalled the OS on it, and then re-added it in oVirt.

How did you manage to do this without downtime? If you reinstalled the hosts, how did you keep your VMs online?

Right now I have two nodes/hosts running on two different machines. Could I just do this:
1) Move all my VMs to one node.
2) Fully reinstall (new OS, oVirt engine) the empty host.
3) Back up the engine on the old host.
4) Import the backup on the new host.
5) Somehow get my VMs moved, with or without downtime.
6) Reinstall the second/old host.

But how would I get my VMs from the old to the new host in this scenario?

On 27 Nov 2024 at 14:01, dadamysus <dadamysus@proton.me> wrote:
>> I have just finished a switch from oVirt 4.3 to oVirt 4.5, and from CentOS 7 to Rocky 9. The main gotcha is that the latest official oVirt release is broken; you should use the nightly build.
>
> You mean 4.5.5? In what way is it broken?

I was unable to reinstall or deploy new hosts; the installation phase failed, due to bugs in oVirt.
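Switching to the nightly build mentioned above means replacing the stable release repository with the nightly ("master") snapshot one. A rough sketch, assuming the standard package names from resources.ovirt.org (verify them against the current repo listing before running):

```shell
# Remove the stable release-package and install the nightly snapshot repo.
# Package names are assumptions based on the usual oVirt naming scheme.
dnf remove -y ovirt-release45
dnf install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
dnf clean all
dnf makecache
```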
>> I did it without downtime, but chose to reinstall the engine on a brand-new server and restored the database. For each host, I put it in maintenance mode, reinstalled the OS on it, and then re-added it in oVirt.
>
> How did you manage to do this without downtime? If you reinstalled the hosts, how did you keep your VMs online?

The engine can be down without production impact; it runs on a dedicated server with its own life cycle. The VMs keep running on the hosts, but of course you can't change their configuration or migrate them while it is down. I was able to reinstall hosts because I have a SAN, so I could move VMs from one host to the other while reinstalling each one, with no VM downtime.
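The live migration Fabrice relies on can be triggered from the Admin Portal (putting a host into maintenance migrates its VMs automatically), or scripted with the oVirt Python SDK. A minimal sketch, assuming shared (SAN) storage between the hosts; the engine URL, credentials, and host names are placeholders for this cluster:

```python
# Live-migrate all running VMs off one host before reinstalling it,
# using the oVirt Python SDK (ovirtsdk4). With shared storage, the
# migration happens with no VM downtime.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='***',          # placeholder credential
    ca_file='ca.pem',        # engine CA certificate
)
try:
    vms_service = connection.system_service().vms_service()
    # The search string uses the same syntax as the Admin Portal search bar.
    for vm in vms_service.list(search='host = old-host and status = up'):
        # Ask the engine to live-migrate the VM to the destination host.
        vms_service.vm_service(vm.id).migrate(host=types.Host(name='new-host'))
finally:
    connection.close()
```

Setting the source host to maintenance afterwards (or instead, letting the engine pick destinations) achieves the same result for the reinstall step.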
> Right now I have two nodes/hosts running on two different machines. Could I just do this:
> 1) Move all my VMs to one node.
> 2) Fully reinstall (new OS, oVirt engine) the empty host.
> 3) Back up the engine on the old host.
> 4) Import the backup on the new host.
> 5) Somehow get my VMs moved, with or without downtime.
> 6) Reinstall the second/old host.
>
> But how would I get my VMs from the old to the new host in this scenario?
participants (3)
- dadamysus
- dadamysus@proton.me
- Fabrice Bacchella