Host not becoming active due to VDSM failure

Hello, I have a host that's failing to bring up VDSM, the logs don't say anything specific, but there's a Python error about DHCP on it. Is there anyone with a similar issue? [root@rhvpower ~]# systemctl status vdsmd ● vdsmd.service - Virtual Desktop Server Manager Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: disabled) Active: inactive (dead) Jul 30 01:53:40 rhvpower.local.versatushpc.com.br systemd[1]: Dependency failed for Virtual Desktop Server Manager. Jul 30 01:53:40 rhvpower.local.versatushpc.com.br systemd[1]: vdsmd.service: Job vdsmd.service/start failed with result 'dependency'. Jul 30 12:34:12 rhvpower.local.versatushpc.com.br systemd[1]: Dependency failed for Virtual Desktop Server Manager. Jul 30 12:34:12 rhvpower.local.versatushpc.com.br systemd[1]: vdsmd.service: Job vdsmd.service/start failed with result 'dependency'. [root@rhvpower ~]# systemctl start vdsmd A dependency job for vdsmd.service failed. See 'journalctl -xe' for details. On the logs I got the following messages: ==> /var/log/vdsm/upgrade.log <== MainThread::DEBUG::2021-07-30 12:34:55,143::libvirtconnection::168::root::(get) trying to connect libvirt MainThread::INFO::2021-07-30 12:34:55,167::netconfpersistence::238::root::(_clearDisk) Clearing netconf: /var/lib/vdsm/staging/netconf MainThread::INFO::2021-07-30 12:34:55,178::netconfpersistence::188::root::(save) Saved new config RunningConfig({'ovirtmgmt': {'netmask': '255.255.255.0', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'ipaddr': '10.20.0.106', 'defaultRoute': True, 'dhcpv6': False, 'gateway': '10.20.0.1', 'mtu': 1500, 'switch': 'legacy', 'stp': False, 'bootproto': 'none', 'nameservers': ['10.20.0.1']}, 'servers': {'vlan': 172, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-other': {'vlan': 2020, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes1': {'vlan': 2021, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes3': {'vlan': 2023, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes2': {'vlan': 2022, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'nfs': {'vlan': 200, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'storage': {'vlan': 192, 'netmask': '255.255.255.240', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': False, 'ipaddr': '192.168.10.6', 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes4': {'vlan': 2024, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}}, {'bond0': {'nics': ['enP48p1s0f2', 'enP48p1s0f3'], 'options': 
'mode=4', 'switch': 'legacy', 'hwaddr': '98:be:94:78:cc:72'}}, {}) to [/var/lib/vdsm/staging/netconf/nets,/var/lib/vdsm/staging/netconf/bonds,/var/lib/vdsm/staging/netconf/devices] MainThread::INFO::2021-07-30 12:34:55,179::netconfpersistence::238::root::(_clearDisk) Clearing netconf: /var/lib/vdsm/persistence/netconf MainThread::INFO::2021-07-30 12:34:55,188::netconfpersistence::188::root::(save) Saved new config PersistentConfig({'ovirtmgmt': {'netmask': '255.255.255.0', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'ipaddr': '10.20.0.106', 'defaultRoute': True, 'dhcpv6': False, 'gateway': '10.20.0.1', 'mtu': 1500, 'switch': 'legacy', 'stp': False, 'bootproto': 'none', 'nameservers': ['10.20.0.1']}, 'servers': {'vlan': 172, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-other': {'vlan': 2020, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes1': {'vlan': 2021, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes3': {'vlan': 2023, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes2': {'vlan': 2022, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'nfs': {'vlan': 200, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'storage': {'vlan': 192, 'netmask': '255.255.255.240', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': False, 'ipaddr': '192.168.10.6', 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes4': {'vlan': 2024, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}}, {'bond0': {'nics': ['enP48p1s0f2', 'enP48p1s0f3'], 'options': 'mode=4', 'switch': 'legacy', 'hwaddr': '98:be:94:78:cc:72'}}, {}) to [/var/lib/vdsm/persistence/netconf/nets,/var/lib/vdsm/persistence/netconf/bonds,/var/lib/vdsm/persistence/netconf/devices] MainThread::DEBUG::2021-07-30 12:34:55,188::cmdutils::130::root::(exec_cmd) /usr/share/openvswitch/scripts/ovs-ctl status (cwd None) ==> /var/log/vdsm/supervdsm.log <== restore-net::INFO::2021-07-30 12:34:55,924::restore_net_config::69::root::(_restore_sriov_config) Non persisted SRIOV devices found: {'0033:01:00.0', '0003:01:00.0'} restore-net::INFO::2021-07-30 12:34:55,924::restore_net_config::458::root::(restore) starting network restoration. restore-net::DEBUG::2021-07-30 12:34:55,942::restore_net_config::366::root::(_wait_for_for_all_devices_up) All devices are up. 
restore-net::DEBUG::2021-07-30 12:34:55,968::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None) restore-net::DEBUG::2021-07-30 12:34:55,989::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 restore-net::DEBUG::2021-07-30 12:34:56,087::plugin::261::root::(_check_version_mismatch) NetworkManager version 1.30.0 restore-net::DEBUG::2021-07-30 12:34:56,088::context::144::root::(register_async) Async action: Retrieve applied config: ethernet enP48p1s0f2 started restore-net::DEBUG::2021-07-30 12:34:56,088::context::144::root::(register_async) Async action: Retrieve applied config: ethernet enP48p1s0f3 started restore-net::DEBUG::2021-07-30 12:34:56,088::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes3 started restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-other started restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes4 started restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bridge ovirtmgmt started restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bond bond0 started restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.192 started restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2020 started restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2022 started restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2024 started restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.172 started restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2021 started restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2023 started restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.200 started restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes1 started restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes2 started restore-net::DEBUG::2021-07-30 12:34:56,091::context::144::root::(register_async) Async action: Retrieve applied config: bridge nfs started restore-net::DEBUG::2021-07-30 12:34:56,091::context::144::root::(register_async) Async action: Retrieve applied config: bridge servers started restore-net::DEBUG::2021-07-30 12:34:56,091::context::144::root::(register_async) Async action: Retrieve applied config: bridge virbr0 started restore-net::DEBUG::2021-07-30 12:34:56,092::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet enP48p1s0f2 finished restore-net::DEBUG::2021-07-30 12:34:56,093::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet enP48p1s0f3 finished 
restore-net::DEBUG::2021-07-30 12:34:56,093::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes3 finished restore-net::DEBUG::2021-07-30 12:34:56,094::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-other finished restore-net::DEBUG::2021-07-30 12:34:56,095::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes4 finished restore-net::DEBUG::2021-07-30 12:34:56,096::context::153::root::(finish_async) Async action: Retrieve applied config: bridge ovirtmgmt finished restore-net::DEBUG::2021-07-30 12:34:56,097::context::153::root::(finish_async) Async action: Retrieve applied config: bond bond0 finished restore-net::DEBUG::2021-07-30 12:34:56,098::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.192 finished restore-net::DEBUG::2021-07-30 12:34:56,099::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2020 finished restore-net::DEBUG::2021-07-30 12:34:56,099::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2022 finished restore-net::DEBUG::2021-07-30 12:34:56,100::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2024 finished restore-net::DEBUG::2021-07-30 12:34:56,100::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.172 finished restore-net::DEBUG::2021-07-30 12:34:56,101::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2021 finished restore-net::DEBUG::2021-07-30 12:34:56,101::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2023 finished restore-net::DEBUG::2021-07-30 12:34:56,102::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.200 finished restore-net::DEBUG::2021-07-30 12:34:56,102::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes1 finished restore-net::DEBUG::2021-07-30 12:34:56,103::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes2 finished restore-net::DEBUG::2021-07-30 12:34:56,105::context::153::root::(finish_async) Async action: Retrieve applied config: bridge nfs finished restore-net::DEBUG::2021-07-30 12:34:56,106::context::153::root::(finish_async) Async action: Retrieve applied config: bridge servers finished restore-net::DEBUG::2021-07-30 12:34:56,107::context::153::root::(finish_async) Async action: Retrieve applied config: bridge virbr0 finished restore-net::ERROR::2021-07-30 12:34:56,167::restore_net_config::462::root::(restore) restoration failed. 
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", line 460, in restore
    unified_restoration()
  File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", line 112, in unified_restoration
    classified_conf = _classify_nets_bonds_config(available_config)
  File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", line 237, in _classify_nets_bonds_config
    net_info = NetInfo(netswitch.configurator.netinfo())
  File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 323, in netinfo
    _netinfo = netinfo_get(vdsmnets, compatibility)
  File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 268, in get
    return _get(vdsmnets)
  File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 76, in _get
    extra_info.update(_get_devices_info_from_nmstate(state, devices))
  File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 165, in _get_devices_info_from_nmstate
    nmstate.get_interfaces(state, filter=devices)
  File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 164, in <dictcomp>
    for ifname, ifstate in six.viewitems(
  File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/api.py", line 228, in is_dhcp_enabled
    return util_is_dhcp_enabled(family_info)
  File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/bridge_util.py", line 137, in is_dhcp_enabled
    return family_info[InterfaceIP.ENABLED] and family_info[InterfaceIP.DHCP]
KeyError: 'dhcp'

The engine is fencing the machine constantly, but after every reboot it comes back with the same issue.

Thanks all.
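For anyone chasing a similar "Job vdsmd.service/start failed with result 'dependency'" loop: vdsmd itself is fine here, so the first step is finding which prerequisite unit actually failed. Putting the host into Maintenance from the Administration Portal should also stop the fencing loop while you debug. A minimal sketch of where to look, assuming a stock vdsm install where the network restoration above runs from a unit named vdsm-network (unit names can differ between releases):

    # Show which unit actually failed, and the dependency chain behind vdsmd
    systemctl list-units --failed
    systemctl list-dependencies vdsmd.service

    # The restore-net traceback comes from the network restoration step; its
    # journal usually names the exception directly (vdsm-network is an
    # assumption about the unit name on this vdsm release)
    journalctl -b -u vdsm-network -u supervdsmd -u vdsmd --no-pager | tail -n 100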

On Fri, Jul 30, 2021 at 7:41 PM Vinícius Ferrão via Users <users@ovirt.org> wrote: ...
restore-net::ERROR::2021-07-30 12:34:56,167::restore_net_config::462::root::(restore) restoration failed.
...
  File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/bridge_util.py", line 137, in is_dhcp_enabled
    return family_info[InterfaceIP.ENABLED] and family_info[InterfaceIP.DHCP]
KeyError: 'dhcp'
Looks like an nmstate or NetworkManager bug. You did not mention any version - are you running the latest oVirt version?

Nir
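(When reporting this kind of failure, the versions that matter for this code path are vdsm, nmstate, NetworkManager and openvswitch. Something along these lines collects them in one go; the package-name patterns are what you would typically find on EL8-based oVirt/RHV hosts and are only illustrative:)

    # Gather the component versions relevant to the restore-net code path
    rpm -qa | grep -E '^(vdsm|nmstate|NetworkManager|openvswitch)' | sort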

On Fri, Jul 30, 2021 at 8:54 PM Nir Soffer <nsoffer@redhat.com> wrote:
...
KeyError: 'dhcp'
Looks like an nmstate or NetworkManager bug. You did not mention any version - are you running the latest oVirt version?
Nir
Hi,
this was a bug in vdsm, triggered in combination with newer nmstate (>= 0.3), that was fixed in vdsm 4.40.50.3. I would suggest you upgrade past that version.

Best regards,
Ales

--
Ales Musil
Software Engineer - RHV Network
Red Hat EMEA <https://www.redhat.com>
amusil@redhat.com  IM: amusil
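For reference, the failing check sits in vdsm's bridge_util.is_dhcp_enabled(), which assumes nmstate always reports a 'dhcp' key in the per-family IP section of an interface; with newer nmstate that key can apparently be absent, hence the KeyError. On a host you suspect is affected you can check the installed build against 4.40.50.3 and look at the raw nmstate view of one of the interfaces; a sketch only, and the YAML layout may differ slightly between nmstate releases:

    # Installed vdsm build (the fix mentioned above landed in 4.40.50.3)
    rpm -q vdsm

    # Raw nmstate view of an affected interface; check whether the
    # ipv4/ipv6 sections carry a "dhcp" key at all
    nmstatectl show ovirtmgmt | grep -A5 -E '^ *ipv[46]:'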

Hi Ales, Nir. Sorry for the delayed answer, I didn't have the opportunity to reply earlier.

I'm running RHV (not RHVH) on RHEL 8.4, on top of ppc64le, so it's not vanilla oVirt. Right now it's based on: ovirt-host-4.4.1-4.el8ev.ppc64le

I'm already on nmstate >= 0.3, as far as I can see: nmstate-1.0.2-11.el8_4.noarch

VDSM is in fact old. I tried upgrading it, but there's a failing dependency on openvswitch:

[root@rhvpower ~]# dnf update vdsm
Updating Subscription Management repositories.
Last metadata expiration check: 0:11:15 ago on Mon 02 Aug 2021 12:06:44 PM EDT.
Error:
 Problem 1: package vdsm-python-4.40.70.6-1.el8ev.noarch requires vdsm-network = 4.40.70.6-1.el8ev, but none of the providers can be installed
  - package vdsm-4.40.70.6-1.el8ev.ppc64le requires vdsm-python = 4.40.70.6-1.el8ev, but none of the providers can be installed
  - package vdsm-network-4.40.70.6-1.el8ev.ppc64le requires openvswitch >= 2.11, but none of the providers can be installed
  - cannot install the best update candidate for package vdsm-4.40.35.1-1.el8ev.ppc64le
  - nothing provides openvswitch2.11 needed by rhv-openvswitch-1:2.11-7.el8ev.noarch
  - nothing provides openvswitch2.11 needed by ovirt-openvswitch-2.11-1.el8ev.noarch
 Problem 2: package vdsm-python-4.40.70.6-1.el8ev.noarch requires vdsm-network = 4.40.70.6-1.el8ev, but none of the providers can be installed
  - package vdsm-4.40.70.6-1.el8ev.ppc64le requires vdsm-python = 4.40.70.6-1.el8ev, but none of the providers can be installed
  - package vdsm-network-4.40.70.6-1.el8ev.ppc64le requires openvswitch >= 2.11, but none of the providers can be installed
  - cannot install the best update candidate for package vdsm-hook-vmfex-dev-4.40.35.1-1.el8ev.noarch
  - nothing provides openvswitch2.11 needed by rhv-openvswitch-1:2.11-7.el8ev.noarch
  - nothing provides openvswitch2.11 needed by ovirt-openvswitch-2.11-1.el8ev.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Nothing seems to provide an openvswitch release that satisfies VDSM. There's no openvswitch package installed right now, nor is one available in the repositories:

[root@rhvpower ~]# dnf install openvswitch
Updating Subscription Management repositories.
Last metadata expiration check: 0:15:28 ago on Mon 02 Aug 2021 12:06:44 PM EDT.
Error:
 Problem: cannot install the best candidate for the job
  - nothing provides openvswitch2.11 needed by ovirt-openvswitch-2.11-1.el8ev.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Any ideas on how to get past this issue? This is probably only related to ppc64le. I already opened a bugzilla about the openvswitch issue here: https://bugzilla.redhat.com/show_bug.cgi?id=1988507

Thank you all.
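A general note for anyone stuck at the same point: the unresolved capability is openvswitch2.11, so the question is which enabled repository (if any) can provide it on this architecture. Something along these lines shows what dnf can currently see; repository IDs are environment-specific and only illustrative. On RHEL-based RHV hosts the openvswitch2.x packages normally come from the Fast Datapath channel, and whether that channel exists for ppc64le is exactly what the bugzilla above is about:

    # Does any enabled repo provide the missing capability?
    dnf repoquery --whatprovides 'openvswitch2.11'

    # What is enabled at all, and what could be enabled for this subscription?
    dnf repolist
    subscription-manager repos --list | grep -i -B1 fast-datapath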

As a follow-up to the mailing list: updating the machine solved this issue. The bugzilla still applies, though, since the openvswitch problem was blocking the upgrade.

Thank you all.
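For completeness, the update flow that ends up applying here is the usual host-update sequence once the repositories resolve; a sketch only, since the engine-side steps happen in the Administration Portal and details vary per environment:

    # 1. In the Administration Portal: put the host into Maintenance
    #    (this also stops the fencing loop)
    # 2. On the host, update the packages
    dnf update
    # 3. Reboot if kernel/openvswitch/vdsm changed, then Activate the host
    #    from the portal
    systemctl reboot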
participants (3):
- Ales Musil
- Nir Soffer
- Vinícius Ferrão