oVirt 4.3.5.1 failed to configure management network on the host

Hi,

I am trying to deploy the hosted engine on oVirt Node 4.3.5.1 and I am getting: "oVirt failed to configure management network on the host." I am fairly sure the problem lies in my network config, which otherwise works quite well.

My current network config is:

team0:
 - Switch1: bond0 (LACP): eno1, eno3
 - Switch2: bond1 (LACP): eno2, eno4

Switch1 and Switch2 are interconnected by a separate LAG group.

The current config was tested against all kinds of failures (NIC port failure, switch failure, etc.) and it works, although I am open to suggestions on how to implement high availability with high throughput.

By default, the hosted-engine deployment selects team0 as the bridge interface.

Br, Mitja

Logs:
=== supervdsm.log: https://pastebin.com/raw/bijW7uhi
=== vdsm.log part 1: https://pastebin.com/raw/BYGsieCw
=== vdsm.log part 2: https://pastebin.com/raw/rgzQCDjw
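For reference, the stack described above can be inspected directly on the host with the standard teamd and bonding tools (a sketch; it assumes the teamd daemon and the kernel bonding driver are in use, with the device names from the post):

    teamdctl team0 state            # runner type and per-port state of team0
    cat /proc/net/bonding/bond0     # LACP/aggregator status for eno1 and eno3
    cat /proc/net/bonding/bond1     # LACP/aggregator status for eno2 and eno4
    ip -d link show team0           # kernel view of the team device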

I tried a deployment of 4.3.5.1 using teams and it didn't work. I did get into the engine using the temporary URL on the host, but the teams showed up as individual NICs. Any change made, like assigning a new logical network to the NIC, failed and I lost connectivity.

Setting it up as a bond instead of a team before deployment worked as expected, and the bonds showed up properly in the engine.

YMMV
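For anyone trying the same workaround, a minimal sketch of pre-creating a bond with nmcli before running the hosted-engine deployment (the connection names, the example address, and the choice of mode are illustrative, not taken from the thread):

    # active-backup works with ports on two independent switches;
    # 802.3ad (LACP) needs both ports on one switch or an MLAG/stacked pair.
    nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
    nmcli con add type bond-slave con-name bond0-eno1 ifname eno1 master bond0
    nmcli con add type bond-slave con-name bond0-eno2 ifname eno2 master bond0
    nmcli con mod bond0 ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1
    nmcli con up bond0

The deployment can then be pointed at bond0 as the bridge interface, and the engine should list it as a bond rather than as individual NICs.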

On 05. 08. 2019 21:20, Vincent Royer wrote:
I tried a deployment of 4.3.5.1 using teams and it didn't work. I did get into the engine using the temporary URL on the host, but the teams showed up as individual NICs. Any change made, like assigning a new logical network to the NIC, failed and I lost connectivity.
Setting it up as a bond instead of a team before deployment worked as expected, and the bonds showed up properly in the engine.
YMMV
I can't use bonding spanned over two switches. Maybe there is another way to do it, but I am burned out. Anybody with an idea?

The server has 4x 10 Gbps NICs. I need up to 20 Gbps of throughput in HA mode.

Thanks.
Br, Mitja

I am also spanned over two switches. You can use bonding, you just can't use 802.3ad mode.

I have MGMT bonded to two 1 Gb switches and storage bonded to two 10 Gb switches for Gluster. Each switch has its own firewall/router in HA, so we can lose either switch, either router, or any single interface or cable without interruption.
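A quick way to sanity-check that kind of cross-switch failover from the host (a sketch; bond0 and eno1 are placeholder names, and it assumes an active-backup bond):

    grep "Currently Active Slave" /proc/net/bonding/bond0   # which port carries traffic now
    ip link set eno1 down                                    # simulate losing that port or its switch
    grep "Currently Active Slave" /proc/net/bonding/bond0   # should have moved to the other port
    ping -c 3 <gateway-ip>                                   # confirm connectivity survived
    ip link set eno1 up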

On 06. 08. 2019 17:08, Vincent Royer wrote:
I am also spanned over two switches. You can use bonding, you just can't use 802.3ad mode.
I have MGMT bonded to two 1 Gb switches and storage bonded to two 10 Gb switches for Gluster. Each switch has its own firewall/router in HA, so we can lose either switch, either router, or any single interface or cable without interruption.
Do you also achieve load balancing without 802.3ad mode? Which mode do you use (round-robin, ...)?

Thanks.
Br, Mitja
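For context, a short summary of the kernel bonding modes and what they need from the switches (a reference sketch; as far as I recall from the oVirt/RHV documentation, only modes 1, 2, 3, and 4 can carry bridged VM networks, while 0, 5, and 6 are limited to non-VM networks):

    # mode=0 balance-rr     round-robin transmit; usually needs static aggregation on the switch
    # mode=1 active-backup  one active port, failover only; no switch configuration needed
    # mode=2 balance-xor    transmit hashed per peer; typically paired with static aggregation
    # mode=3 broadcast      transmit on all ports
    # mode=4 802.3ad        LACP; requires switch support, normally one switch or an MLAG/stack
    # mode=5 balance-tlb    adaptive transmit load balancing; no switch configuration needed
    # mode=6 balance-alb    tlb plus receive balancing via ARP; no switch configuration needed
    cat /sys/class/net/bond0/bonding/mode   # shows the mode a bond is currently running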

For management you don't need the bandwidth, just use active-backup.

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/htm...

Vincent Royer
778-825-1057
http://www.epicenergy.ca/
SUSTAINABLE MOBILE ENERGY SOLUTIONS
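A minimal sketch of that suggestion for a management bond (bond0, eno1, and the optional primary= setting are assumptions, not details from the thread):

    # active-backup needs no switch-side configuration, so the two ports can sit on different switches
    nmcli con mod bond0 bond.options "mode=active-backup,miimon=100,primary=eno1"
    nmcli con up bond0
    cat /proc/net/bonding/bond0   # confirm the mode and the currently active slave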