I need the deploy script to wait while I fix the network configuration manually in oVirt 4.3.10

Hi,

I'm planning to upgrade our production environment from oVirt 4.3 to 4.4, so I need a fresh oVirt 4.3 installation to test the procedure before doing it in production.

The command line deploy script can't handle our network interfaces correctly. Whether I use a single NIC or a bond (active/passive), I get the error message "The selected network interface is not valid". If I predefine the management bridge (so that it is already up), the deploy process goes on, but it fails to activate the added host and removes the already running engine VM: the deploy process fails to synchronize the existing, working network configuration with the engine configuration. I can already log in to the engine GUI and see that the bridge "ovirtmgmt" needs to be assigned to the bonding interface, but I'm not fast enough to do so, because the deployment process is already shutting down and erasing the VM.

I see two ways to succeed:
1. make the deployment process accept the given interfaces (maybe ignore the errors)
2. make the deploy process wait for me to take the necessary actions before it checks the engine

Does anyone know how to achieve this? All I need is a running engine on hosted_storage; any other issues I can fix later.

Another idea is to use one of the destined hosts as a bare metal engine, add the hosts, back up the engine, and use that backup for a hosted engine restore deploy, since the deploy script asks whether to wait after the local VM is ready, but only on a restore deploy.

Any suggestions?
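Roughly, that restore path would look like this (just a sketch, untested here; file names and paths are placeholders):

    # on the temporary bare metal engine: take a backup
    engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log

    # on the first hosted engine host: restore deploy,
    # which asks whether to pause once the local VM is up
    hosted-engine --deploy --restore-from-file=engine-backup.tar.gz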

What is the name you are using for the bond?

Best Regards,
Strahil Nikolov

On Wed, Jan 25, 2023 at 1:17 PM <lars.stolpe@bvg.de> wrote:
[...]
Another idea is to use one of the destined hosts as a bare metal engine, add the hosts, back up the engine, and use that backup for a hosted engine restore deploy, since the deploy script asks whether to wait after the local VM is ready, but only on a restore deploy.
You are right that in 4.3 it only asked whether to pause if you were restoring. But IIUC you can force it to pause regardless, by adding 'OVEHOSTED_CORE/pauseonRestore=bool:True' to your answer file. Good luck and best regards, -- Didi
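Something like this should do it (untested on 4.3; the file name is just an example):

    # /root/he-answers.conf
    [environment:default]
    OVEHOSTED_CORE/pauseonRestore=bool:True

Then pass it to the deploy script:

    hosted-engine --deploy --config-append=/root/he-answers.conf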

Thanks to both of you for your replies. The bond is named "bond2".

That parameter is helpful. I did find a similar parameter in the 4.5 documentation, but it was different and not recognized by 4.3. (Is there a list of the possible switches?)

I solved my problem by working around it. A bare metal engine had a similar problem with the configured networks. I learned that at deploy time the default route HAS to be configured on the management bridge, otherwise that network is not recognized as valid. Since our management network is isolated for security reasons, intentionally using this (wrong) configuration for the deploy makes the repositories unreachable. However, I had a shaky moment when the deploy was declared failed because the appliance RPM could not be deleted (remember the changed default gateway), but luckily the deploy did not delete the running engine this time. Then I could change the target network for the default route via Cluster -> Networks (not Networks -> Networks). I corrected the routing configuration and the hosted engine installation works fine now.
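For the archives, here is a minimal sketch of what "default route on the management interface at deploy time" means in EL7 ifcfg terms (placeholder addresses, adjust to your subnet; the slave NICs carry MASTER=bond2/SLAVE=yes in their own files):

    # /etc/sysconfig/network-scripts/ifcfg-bond2
    DEVICE=bond2
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=active-backup miimon=100"
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.0.2.10     # placeholder management IP
    PREFIX=24
    GATEWAY=192.0.2.1     # default route must leave via this interface
    DEFROUTE=yes

    # sanity check before starting the deploy:
    ip route show default

After the deploy succeeds, the default route can be moved to the intended network via Cluster -> Networks, as described above.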