Bond creation issue via hosted engine

Hi guys and girls,

Every time I try to create a bond with my two onboard Intel NICs (any bond type) via the hosted engine, it fails with the error "Error while executing action HostSetupNetworks: Unexpected exception". The bond still gets created on the node, but the engine's capabilities then go out of sync with the node and can't be refreshed. I have to delete the bond manually on the node before the engine is happy again. Does anyone know why?

Here are some logs from the engine log file:

[root@ovirt1-engine ~]# cat /var/log/ovirt-engine/engine.log
2020-11-06 08:29:56,230+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.OvfDataUpdater] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [] Attempting to update VMs/Templates Ovf. 2020-11-06 08:29:56,236+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Before acquiring and wait lock 'EngineLock:{exclusiveLocks='[b150e472-1f45-11eb-8e70-00163e78288d=OVF_UPDATE]', sharedLocks=''}' 2020-11-06 08:29:56,236+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Lock-wait acquired to object 'EngineLock:{exclusiveLocks='[b150e472-1f45-11eb-8e70-00163e78288d=OVF_UPDATE]', sharedLocks=''}' 2020-11-06 08:29:56,237+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. 
Entities affected : ID: b150e472-1f45-11eb-8e70-00163e78288d Type: StoragePool 2020-11-06 08:29:56,242+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Attempting to update VM OVFs in Data Center 'Default' 2020-11-06 08:29:56,243+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Successfully updated VM OVFs in Data Center 'Default' 2020-11-06 08:29:56,243+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Attempting to update template OVFs in Data Center 'Default' 2020-11-06 08:29:56,243+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Successfully updated templates OVFs in Data Center 'Default' 2020-11-06 08:29:56,243+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Attempting to remove unneeded template/vm OVFs in Data Center 'Default' 2020-11-06 08:29:56,245+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Successfully removed unneeded template/vm OVFs in Data Center 'Default' 2020-11-06 08:29:56,245+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Lock freed to object 'EngineLock:{exclusiveLocks='[b150e472-1f45-11eb-8e70-00163e78288d=OVF_UPDATE]', sharedLocks=''}' 2020-11-06 08:29:56,284+01 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] START, GetStorageDeviceListVDSCommand(HostName = ovirtn3.5ervers.lan, VdsIdVDSCommandParametersBase:{hostId='4ec53a62-5cf3-479a-baf5-44c5b7624d39'}), log id: 59a1c9e 2020-11-06 08:29:56,284+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22514) [] START, GetStorageDeviceListVDSCommand(HostName = ovirtn2.5ervers.lan, VdsIdVDSCommandParametersBase:{hostId='a4904c7c-92d7-4e4f-adf7-755f3c17335d'}), log id: 735f6d30 2020-11-06 08:29:57,020+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}] 2020-11-06 08:29:57,020+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}] 2020-11-06 08:29:57,020+01 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Failed in 'GetStorageDeviceListVDS' method 2020-11-06 08:29:57,020+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}] 2020-11-06 08:29:57,057+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-22513) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirtn3.5ervers.lan command GetStorageDeviceListVDS failed: Internal JSON-RPC error: {'reason': "'NoneType' object 
has no attribute 'iface'"} 2020-11-06 08:29:57,057+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Command 'org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand' return value 'StorageDeviceListReturn:{status='Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}]'}' 2020-11-06 08:29:57,057+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] HostName = ovirtn3.5ervers.lan 2020-11-06 08:29:57,057+01 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Command 'GetStorageDeviceListVDSCommand(HostName = ovirtn3.5ervers.lan, VdsIdVDSCommandParametersBase:{hostId='4ec53a62-5cf3-479a-baf5-44c5b7624d39'})' execution failed: VDSGenericException: VDSErrorException: Failed to GetStorageDeviceListVDS, error = Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}, code = -32603 2020-11-06 08:29:57,057+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] FINISH, GetStorageDeviceListVDSCommand, return: , log id: 59a1c9e 2020-11-06 08:29:57,057+01 ERROR [org.ovirt.engine.core.bll.gluster.StorageDeviceSyncJob] (EE-ManagedThreadFactory-engine-Thread-22513) [] Exception retriving storage device from vds EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetStorageDeviceListVDS, error = Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}, code = -32603 (Failed with error unexpected and code 16) 2020-11-06 08:29:57,433+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22514) [] FINISH, 
GetStorageDeviceListVDSCommand, return: [StorageDevice:{id='null', name='ST4000NM0033-9ZM170_Z1Z8JT3C', devUuid='null', fsUuid='null', vdsId='null', description='ST4000NM0033-9ZM (dm-multipath)', devType='ATA', devPath='/dev/mapper/ST4000NM0033-9ZM170_Z1Z8JT3C', fsType='null', mountPoint='null', size='3815447', canCreateBrick='true', isGlusterBrick='false'}, StorageDevice:{id='null', name='gluster_vg_sda', devUuid='9TjMR1-Mk1p-fS6M-w6sP-jEfx-FmPe-A2L5iO', fsUuid='null', vdsId='null', description='lvmvg', devType='ATA', devPath='/dev/gluster_vg_sda', fsType='null', mountPoint='null', size='3815444', canCreateBrick='false', isGlusterBrick='false'}, StorageDevice:{id='null', name='gluster_vg_sda-gluster_lv_data', devUuid='FuEGn2-BVzA-JDyB-fZPY-BlsU-QYO4-xFVmD0', fsUuid='3d5d228e-ece8-4e59-92ae-c1eb6094e493', vdsId='null', description='lvmthinlv', devType='null', devPath='/dev/mapper/gluster_vg_sda-gluster_lv_data', fsType='xfs', mountPoint='null', size='51200', canCreateBrick='false', isGlusterBrick='false'}, StorageDevice:{id='null', name='gluster_vg_sda-gluster_lv_engine', devUuid='i2p1Ps-Ix6M-hmJG-VFqJ-2zVE-w14m-71j7KW', fsUuid='6280a49f-f49b-49f6-bc92-139834aae959', vdsId='null', description='lvmlv', devType='null', devPath='/dev/mapper/gluster_vg_sda-gluster_lv_engine', fsType='xfs', mountPoint='null', size='102400', canCreateBrick='false', isGlusterBrick='false'}, StorageDevice:{id='null', name='gluster_vg_sda-gluster_lv_vmstore', devUuid='DggrnE-C127-U0la-XoyW-eA2w-LOzk-82UmrT', fsUuid='1349ed39-21ed-418a-a2c9-26e67f403d11', vdsId='null', description='lvmthinlv', devType='null', devPath='/dev/mapper/gluster_vg_sda-gluster_lv_vmstore', fsType='xfs', mountPoint='null', size='3225600', canCreateBrick='false', isGlusterBrick='false'}, StorageDevice:{id='null', name='gluster_vg_sda-gluster_thinpool_gluster_vg_sda', devUuid='x0Sy1g-WRtr-ZODw-J1il-wJcv-MbKM-Hrfcr9', fsUuid='null', vdsId='null', description='lvmthinpool', devType='null', 
devPath='/dev/mapper/gluster_vg_sda-gluster_thinpool_gluster_vg_sda', fsType='null', mountPoint='null', size='3680660', canCreateBrick='false', isGlusterBrick='false'}, StorageDevice:{id='null', name='sda', devUuid='null', fsUuid='fgvWOq-CB7k-qIjJ-iY15-ljEK-aaFG-CvRLQW', vdsId='null', description='ST4000NM0033-9ZM (disk)', devType='ATA', devPath='/dev/sda', fsType='lvmpv', mountPoint='null', size='3815447', canCreateBrick='false', isGlusterBrick='false'}, StorageDevice:{id='null', name='sdb', devUuid='null', fsUuid='null', vdsId='null', description='ST4000NM0033-9ZM (disk)', devType='ATA', devPath='/dev/sdb', fsType='multipath_member', mountPoint='null', size='3815447', canCreateBrick='false', isGlusterBrick='false'}, StorageDevice:{id='null', name='sdc', devUuid='null', fsUuid='545ca4c8', vdsId='null', description='Ultra Fit (disk)', devType='USB', devPath='/dev/sdc', fsType='disklabel', mountPoint='null', size='14664', canCreateBrick='false', isGlusterBrick='false'}, StorageDevice:{id='null', name='sdc1', devUuid='null', fsUuid='null', vdsId='null', description='partition', devType='USB', devPath='/dev/sdc1', fsType='zfs_member', mountPoint='null', size='14663', canCreateBrick='true', isGlusterBrick='false'}], log id: 735f6d30 2020-11-06 08:30:00,403+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetLldpVDSCommand] (default task-71) [474a11f9-ea66-4fd2-bad3-488ce157d197] START, GetLldpVDSCommand(HostName = ovirtn1.5ervers.lan, GetLldpVDSCommandParameters:{hostId='285fc148-62ed-4243-8106-ed01eff28295'}), log id: 66e6c0c1 2020-11-06 08:30:00,522+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetLldpVDSCommand] (default task-71) [474a11f9-ea66-4fd2-bad3-488ce157d197] FINISH, GetLldpVDSCommand, return: {enp0s29u1u1=LldpInfo:{enabled='false', tlvs='[]'}, eno1=LldpInfo:{enabled='false', tlvs='[]'}, enp9s0=LldpInfo:{enabled='true', tlvs='[]'}}, log id: 66e6c0c1 2020-11-06 08:30:05,665+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-75) [] START, GlusterServersListVDSCommand(HostName = ovirtn3.5ervers.lan, VdsIdVDSCommandParametersBase:{hostId='4ec53a62-5cf3-479a-baf5-44c5b7624d39'}), log id: 7f602482 2020-11-06 08:30:05,859+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-75) [] FINISH, GlusterServersListVDSCommand, return: [192.168.4.128/24:CONNECTED, ovirtn2.5ervers.lan:CONNECTED, ovirtn1.5ervers.lan:CONNECTED], log id: 7f602482 2020-11-06 08:30:05,862+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-75) [] START, GlusterVolumesListVDSCommand(HostName = ovirtn3.5ervers.lan, GlusterVolumesListVDSParameters:{hostId='4ec53a62-5cf3-479a-baf5-44c5b7624d39'}), log id: 7f8c7e41 2020-11-06 08:30:05,968+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-75) [] FINISH, GlusterVolumesListVDSCommand, return: {95817a31-0623-44ed-a8bc-d2f37af7644d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ac351e, 03874a87-1e23-4e54-8445-159ba27b48fe=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@25915a3c, 90de405f-60f0-401a-8bdf-7203d8db21f3=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@56a4cd2d}, log id: 7f8c7e41 2020-11-06 08:30:07,416+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Before acquiring lock-timeout 'EngineLock:{exclusiveLocks='[HOST_NETWORK285fc148-62ed-4243-8106-ed01eff28295=HOST_NETWORK]', sharedLocks=''}' 2020-11-06 08:30:07,416+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Lock-timeout 
acquired to object 'EngineLock:{exclusiveLocks='[HOST_NETWORK285fc148-62ed-4243-8106-ed01eff28295=HOST_NETWORK]', sharedLocks=''}' 2020-11-06 08:30:07,464+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Running command: HostSetupNetworksCommand internal: false. Entities affected : ID: 285fc148-62ed-4243-8106-ed01eff28295 Type: VDSAction group CONFIGURE_HOST_NETWORK with role type ADMIN 2020-11-06 08:30:07,464+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Before acquiring lock in order to prevent monitoring for host 'ovirtn1.5ervers.lan' from data-center 'Default' 2020-11-06 08:30:07,464+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Lock acquired, from now a monitoring of host will be skipped for host 'ovirtn1.5ervers.lan' from data-center 'Default' 2020-11-06 08:30:07,466+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] START, HostSetupNetworksVDSCommand(HostName = ovirtn1.5ervers.lan, HostSetupNetworksVdsCommandParameters:{hostId='285fc148-62ed-4243-8106-ed01eff28295', vds='Host[ovirtn1.5ervers.lan,285fc148-62ed-4243-8106-ed01eff28295]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[]', removedNetworks='[]', bonds='[CreateOrUpdateBond:{id='null', name='bond0', bondingOptions='mode=4 miimon=100 xmit_hash_policy=2', slaves='[eno1, enp9s0]'}]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='false'}), log id: 695498c2 2020-11-06 08:30:07,466+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] FINISH, HostSetupNetworksVDSCommand, return: , log id: 695498c2 2020-11-06 08:30:12,118+01 
WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}] 2020-11-06 08:30:12,119+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Failed in 'GetCapabilitiesVDS' method 2020-11-06 08:30:12,119+01 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}] 2020-11-06 08:30:12,164+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirtn1.5ervers.lan command GetCapabilitiesVDS failed: Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"} 2020-11-06 08:30:12,164+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand' return value 'org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturn@5ccc140e' 2020-11-06 08:30:12,164+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] HostName = ovirtn1.5ervers.lan 
2020-11-06 08:30:12,164+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Command 'GetCapabilitiesVDSCommand(HostName = ovirtn1.5ervers.lan, VdsIdAndVdsVDSCommandParametersBase:{hostId='285fc148-62ed-4243-8106-ed01eff28295', vds='Host[ovirtn1.5ervers.lan,285fc148-62ed-4243-8106-ed01eff28295]'})' execution failed: VDSGenericException: VDSErrorException: Failed to GetCapabilitiesVDS, error = Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}, code = -32603 2020-11-06 08:30:12,165+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Host setup networks finished. Lock released. Monitoring can run now for host 'ovirtn1.5ervers.lan' from data-center 'Default' 2020-11-06 08:30:12,165+01 ERROR [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Command 'org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetCapabilitiesVDS, error = Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}, code = -32603 (Failed with error unexpected and code 16) 2020-11-06 08:30:12,222+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Lock freed to object 'EngineLock:{exclusiveLocks='[HOST_NETWORK285fc148-62ed-4243-8106-ed01eff28295=HOST_NETWORK]', sharedLocks=''}'
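For reference, this is roughly the manual cleanup I do on the node so the engine can resync afterwards. It is only a sketch: the bond name bond0 is taken from the HostSetupNetworksVDSCommand line in the log above, and the exact steps may differ depending on your node image.

```shell
# Sketch of the manual cleanup on the affected node (assumes bond name bond0,
# from the HostSetupNetworksVDSCommand log entry, and VDSM running as vdsmd).
if ip link show bond0 >/dev/null 2>&1; then
    ip link set bond0 down    # take the half-created bond down
    ip link delete bond0      # and remove it entirely
fi
# Roll VDSM back to its last persisted network config and restart it so
# GetCapabilities can succeed again (skipped silently off an oVirt node):
vdsm-tool restore-nets 2>/dev/null || true
systemctl restart vdsmd 2>/dev/null || true
```

After this, the engine can refresh the host capabilities again, which matches the behavior described above where deleting the bond by hand makes the engine happy.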

On Fri, Nov 6, 2020 at 9:09 AM Harry O <harryo.dk@gmail.com> wrote:
2020-11-06 08:30:12,164+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Command 'GetCapabilitiesVDSCommand(HostName = ovirtn1.5ervers.lan, VdsIdAndVdsVDSCommandParametersBase:{hostId='285fc148-62ed-4243-8106-ed01eff28295', vds='Host[ovirtn1.5ervers.lan,285fc148-62ed-4243-8106-ed01eff28295]'})' execution failed: VDSGenericException: VDSErrorException: Failed to GetCapabilitiesVDS, error = Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}, code = -32603 2020-11-06 08:30:12,165+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Host setup networks finished. Lock released. Monitoring can run now for host 'ovirtn1.5ervers.lan' from data-center 'Default' 2020-11-06 08:30:12,165+01 ERROR [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Command 'org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetCapabilitiesVDS, error = Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}, code = -32603 (Failed with error unexpected and code 16) 2020-11-06 08:30:12,222+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Lock freed to object 'EngineLock:{exclusiveLocks='[HOST_NETWORK285fc148-62ed-4243-8106-ed01eff28295=HOST_NETWORK]', sharedLocks=''}'
Hello, can you please provide the relevant part of the supervdsm.log from the affected host? Thank you. Regards, Ales -- Ales Musil, Software Engineer - RHV Network, Red Hat EMEA <https://www.redhat.com>, amusil@redhat.com
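To share just the relevant slice rather than the whole file, one option is to grep for the setupNetworks call and any Python traceback with a few lines of context. A hedged sketch, demonstrated on an inline sample so it is self-contained; on the affected host you would point the grep at /var/log/vdsm/supervdsm.log instead:

```shell
# Build a tiny sample log standing in for supervdsm.log, then filter it
# down to the setupNetworks call and any traceback with trailing context.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
MainProcess|mpathhealth::DEBUG::... call dmsetup_run_status with ('multipath',)
MainProcess|jsonrpc/1::DEBUG::... call setupNetworks with ({}, {'bond0': ...})
MainProcess|jsonrpc/3::ERROR::... Error in network_caps
Traceback (most recent call last):
  File ".../configurator.py", line 546, in _add_speed_device_info
TypeError: 'module' object is not callable
EOF
# -n: line numbers, -A 3: three lines of context after each match,
# -e: one pattern per flag (matches either pattern).
grep -n -A 3 -e 'setupNetworks' -e 'Traceback' "$LOG"
rm -f "$LOG"
```

The same grep against the real log keeps the attachment small while preserving the call and the failure.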

Here it is cat /var/log/vdsm/supervdsm.log MainProcess|mpathhealth::DEBUG::2020-11-06 09:49:53,528::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call dmsetup_run_status with ('multipath',) {} MainProcess|mpathhealth::DEBUG::2020-11-06 09:49:53,528::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-23 /usr/sbin/dmsetup status --target multipath (cwd None) MainProcess|mpathhealth::DEBUG::2020-11-06 09:49:53,559::commands::98::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0 MainProcess|mpathhealth::DEBUG::2020-11-06 09:49:53,559::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) return dmsetup_run_status with b'ST4000NM0033-9ZM170_Z1Z8JNPX: 0 7814037168 multipath 2 0 0 0 1 1 A 0 1 2 8:16 A 0 0 1 \n' MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,880::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call setupNetworks with ({}, {'bond0': {'nics': ['eno1', 'enp9s0'], 'options': 'mode=4 miimon=100 xmit_hash_policy=2', 'switch': 'legacy'}}, {'connectivityTimeout': 120, 'commitOnSuccess': True, 'connectivityCheck': 'true'}) {} MainProcess|jsonrpc/1::INFO::2020-11-06 09:49:59,880::api::220::root::(setupNetworks) Setting up network according to configuration: networks:{}, bondings:{'bond0': {'nics': ['eno1', 'enp9s0'], 'options': 'mode=4 miimon=100 xmit_hash_policy=2', 'switch': 'legacy'}}, options:{'connectivityTimeout': 120, 'commitOnSuccess': True, 'connectivityCheck': 'true'} MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,887::routes::115::root::(get_gateway) The gateway IP-ADDR1 is duplicated for the device ovirtmgmt MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,888::routes::115::root::(get_gateway) The gateway IP-ADDR1 is duplicated for the device ovirtmgmt MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,889::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None) MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,896::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 
MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,897::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev enp0s29u1u1 classid 0:1388 (cwd None) MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,902::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,955::vsctl::74::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,955::cmdutils::130::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None) MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,968::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 MainProcess|jsonrpc/1::INFO::2020-11-06 09:49:59,976::netconfpersistence::58::root::(setNetwork) Adding network ovirtmgmt({'bridged': True, 'stp': False, 'mtu': 1500, 'nic': 'enp0s29u1u1', 'defaultRoute': True, 'bootproto': 'none', 'dhcpv6': False, 'ipv6autoconf': False, 'ipaddr': 'IP-ADDR126', 'netmask': '255.255.255.0', 'gateway': 'IP-ADDR1', 'ipv6addr': '2001:470:df4e:2:fe4d:d4ff:fe3e:fb86/64', 'ipv6gateway': 'fe80::250:56ff:fe8b:6e21', 'switch': 'legacy', 'nameservers': ['IP-ADDR4']}) MainProcess|jsonrpc/1::INFO::2020-11-06 09:49:59,977::netconfpersistence::69::root::(setBonding) Adding bond0({'nics': ['eno1', 'enp9s0'], 'options': 'mode=4 miimon=100 xmit_hash_policy=2', 'switch': 'legacy'}) MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:49:59,979::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-23 /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe (cwd None) MainProcess|jsonrpc/1::INFO::2020-11-06 09:50:00,355::hooks::122::root::(_runHooksDir) /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe: rc=0 err=b'' MainProcess|jsonrpc/1::INFO::2020-11-06 09:50:00,356::configurator::195::root::(_setup_nmstate) Processing setup through nmstate 
MainProcess|jsonrpc/1::INFO::2020-11-06 09:50:00,384::configurator::197::root::(_setup_nmstate) Desired state: {'interfaces': [{'name': 'bond0', 'type': 'bond', 'state': 'up', 'link-aggregation': {'slaves': ['eno1', 'enp9s0'], 'options': {'miimon': '100', 'xmit_hash_policy': '2'}, 'mode': '802.3ad'}}, {'name': 'ovirtmgmt', 'mtu': 1500}]} MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,439::checkpoint::121::root::(create) Checkpoint /org/freedesktop/NetworkManager/Checkpoint/40 created for all devices: 60 MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,439::netapplier::239::root::(_add_interfaces) Adding new interfaces: ['bond0'] MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,442::netapplier::251::root::(_edit_interfaces) Editing interfaces: ['ovirtmgmt', 'eno1', 'enp9s0'] MainProcess|jsonrpc/1::WARNING::2020-11-06 09:50:00,443::ipv6::188::root::(_set_static) IPv6 link local address fe80::64c3:73ff:fe2f:10d7/64 is ignored when applying desired state MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,446::nmclient::136::root::(execute_next_action) Executing NM action: func=add_connection_async MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,459::connection::329::root::(_add_connection_callback) Connection adding succeeded: dev=bond0 MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,459::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_delete_async MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,472::connection::355::root::(_delete_connection_callback) Connection deletion succeeded: dev=eno1 MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,472::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_delete_async MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,476::connection::355::root::(_delete_connection_callback) Connection deletion succeeded: dev=enp9s0 MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,476::nmclient::136::root::(execute_next_action) Executing NM action: func=add_connection_async 
MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,485::connection::329::root::(_add_connection_callback) Connection adding succeeded: dev=eno1 MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,485::nmclient::136::root::(execute_next_action) Executing NM action: func=add_connection_async MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,491::connection::329::root::(_add_connection_callback) Connection adding succeeded: dev=enp9s0 MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,491::nmclient::136::root::(execute_next_action) Executing NM action: func=commit_changes_async MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,498::connection::386::root::(_commit_changes_callback) Connection update succeeded: dev=ovirtmgmt MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,499::nmclient::136::root::(execute_next_action) Executing NM action: func=safe_activate_async MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,508::connection::215::root::(_active_connection_callback) Connection activation initiated: dev=bond0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState> MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,548::connection::301::root::(_waitfor_active_connection_callback) Connection activation succeeded: dev=bond0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_IP_CONFIG of type NM.DeviceState>, state-flags=<flags NM_ACTIVATION_STATE_FLAG_IS_MASTER | NM_ACTIVATION_STATE_FLAG_LAYER2_READY | NM_ACTIVATION_STATE_FLAG_MASTER_HAS_SLAVES of type NM.ActivationStateFlags> MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,549::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_modify_async MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,554::device::149::root::(_modify_callback) Device reapply succeeded: dev=ovirtmgmt MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,554::nmclient::136::root::(execute_next_action) Executing NM action: 
func=_safe_modify_async MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:00,559::device::143::root::(_modify_callback) Device reapply failed on enp9s0: error=nm-device-error-quark: Can't reapply changes to 'connection.autoconnect-slaves' setting (3) Fallback to device activation MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,116::connection::215::root::(_active_connection_callback) Connection activation initiated: dev=enp9s0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState> MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,265::connection::301::root::(_waitfor_active_connection_callback) Connection activation succeeded: dev=enp9s0, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATED of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_ACTIVATED of type NM.DeviceState>, state-flags=<flags NM_ACTIVATION_STATE_FLAG_IS_SLAVE | NM_ACTIVATION_STATE_FLAG_LAYER2_READY | NM_ACTIVATION_STATE_FLAG_IP4_READY | NM_ACTIVATION_STATE_FLAG_IP6_READY of type NM.ActivationStateFlags> MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,266::nmclient::136::root::(execute_next_action) Executing NM action: func=_safe_modify_async MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,269::device::143::root::(_modify_callback) Device reapply failed on eno1: error=nm-device-error-quark: Can't reapply changes to 'connection.autoconnect-slaves' setting (3) Fallback to device activation MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,598::connection::215::root::(_active_connection_callback) Connection activation initiated: dev=eno1, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATING of type NM.ActiveConnectionState> MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,946::connection::301::root::(_waitfor_active_connection_callback) Connection activation succeeded: dev=eno1, con-state=<enum NM_ACTIVE_CONNECTION_STATE_ACTIVATED of type NM.ActiveConnectionState>, dev-state=<enum NM_DEVICE_STATE_ACTIVATED of type NM.DeviceState>, 
state-flags=<flags NM_ACTIVATION_STATE_FLAG_IS_SLAVE | NM_ACTIVATION_STATE_FLAG_LAYER2_READY | NM_ACTIVATION_STATE_FLAG_IP4_READY | NM_ACTIVATION_STATE_FLAG_IP6_READY of type NM.ActivationStateFlags> MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,946::nmclient::139::root::(execute_next_action) NM action queue exhausted, quiting mainloop MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,985::checkpoint::156::root::(destroy) Checkpoint /org/freedesktop/NetworkManager/Checkpoint/40 destroyed MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,991::routes::115::root::(get_gateway) The gateway IP-ADDR1 is duplicated for the device ovirtmgmt MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,997::routes::115::root::(get_gateway) The gateway IP-ADDR1 is duplicated for the device ovirtmgmt MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:01,998::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None) MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:02,004::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:02,004::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev enp0s29u1u1 classid 0:1388 (cwd None) MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:02,010::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 MainProcess|jsonrpc/1::INFO::2020-11-06 09:50:02,075::netconfpersistence::69::root::(setBonding) Adding bond0({'nics': ['eno1', 'enp9s0'], 'options': 'mode=4 miimon=100 xmit_hash_policy=2', 'switch': 'legacy'}) MainProcess|jsonrpc/1::INFO::2020-11-06 09:50:02,075::netconfpersistence::238::root::(_clearDisk) Clearing netconf: /var/lib/vdsm/staging/netconf MainProcess|jsonrpc/1::INFO::2020-11-06 09:50:02,078::netconfpersistence::188::root::(save) Saved new config RunningConfig({'ovirtmgmt': {'ipv6autoconf': False, 'nic': 'eno1', 'defaultRoute': True, 'ipv6gateway': 'fe80::250:56ff:fe8b:6e21', 'mtu': 1500, 'ipv6addr': '2001:470:df4e:2:fe4d:d4ff:fe3e:fb86/64', 'switch': 'legacy', 'netmask': 
'255.255.255.0', 'bridged': True, 'ipaddr': 'IP-ADDR126', 'dhcpv6': False, 'gateway': 'IP-ADDR1', 'stp': False, 'bootproto': 'none', 'nameservers': ['IP-ADDR4']}}, {'bond0': {'nics': ['eno1', 'enp9s0'], 'options': 'mode=4 miimon=100 xmit_hash_policy=2', 'switch': 'legacy'}}, {}) to [/var/lib/vdsm/staging/netconf/nets,/var/lib/vdsm/staging/netconf/bonds,/var/lib/vdsm/staging/netconf/devices] MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:02,078::connectivity::47::root::(check) Checking connectivity... MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:03,081::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-23 /usr/libexec/vdsm/hooks/after_network_setup/30_ethtool_options (cwd None) MainProcess|jsonrpc/1::INFO::2020-11-06 09:50:03,255::hooks::122::root::(_runHooksDir) /usr/libexec/vdsm/hooks/after_network_setup/30_ethtool_options: rc=0 err=b'' MainProcess|jsonrpc/1::DEBUG::2020-11-06 09:50:03,255::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) return setupNetworks with None MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,419::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {} MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,426::routes::115::root::(get_gateway) The gateway IP-ADDR1 is duplicated for the device ovirtmgmt MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,431::routes::115::root::(get_gateway) The gateway IP-ADDR1 is duplicated for the device ovirtmgmt MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,432::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None) MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,439::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,440::cmdutils::130::root::(exec_cmd) /sbin/tc class show dev enp0s29u1u1 classid 0:1388 (cwd None) MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,446::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 MainProcess|jsonrpc/3::DEBUG::2020-11-06 
09:50:03,510::vsctl::74::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,510::cmdutils::130::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None) MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,522::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 netlink/events::DEBUG::2020-11-06 09:50:03,529::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140427791193856)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fb7e4a12828>>, args=(), kwargs={}) MainProcess|mpathhealth::DEBUG::2020-11-06 09:50:03,560::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call dmsetup_run_status with ('multipath',) {} MainProcess|mpathhealth::DEBUG::2020-11-06 09:50:03,560::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-23 /usr/sbin/dmsetup status --target multipath (cwd None) MainProcess|mpathhealth::DEBUG::2020-11-06 09:50:03,591::commands::98::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0 MainProcess|mpathhealth::DEBUG::2020-11-06 09:50:03,591::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) return dmsetup_run_status with b'ST4000NM0033-9ZM170_Z1Z8JNPX: 0 7814037168 multipath 2 0 0 0 1 1 A 0 1 2 8:16 A 0 0 1 \n' netlink/events::DEBUG::2020-11-06 09:50:05,529::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140427791193856)> MainProcess|jsonrpc/3::ERROR::2020-11-06 09:50:05,533::supervdsm_server::97::SuperVdsm.ServerCallback::(wrapper) Error in network_caps Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 95, in wrapper res = func(*args, **kwargs) File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 59, in network_caps return 
netswitch.configurator.netcaps(compatibility=30600) File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 505, in netcaps _add_speed_device_info(net_caps) File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 546, in _add_speed_device_info devattr['speed'] = bond.speed(devname) TypeError: 'module' object is not callable
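For readers puzzled by the final TypeError: in Python, "'module' object is not callable" means a name that was expected to be a function is actually bound to a module, for example when a submodule shadows a same-named function after an import or packaging change. A minimal, self-contained illustration of that error class (hypothetical names, not vdsm's actual code layout):

```python
import types

# Simulate a package `bond` whose attribute `speed` is a submodule rather
# than the function the caller expected -- the shape of failure behind
# "devattr['speed'] = bond.speed(devname)" in the traceback.
bond = types.ModuleType("bond")
bond.speed = types.ModuleType("bond.speed")  # a module, not a callable

try:
    bond.speed("eno1")  # attempting to call the module raises TypeError
except TypeError as exc:
    print(exc)  # → 'module' object is not callable
```

In other words, the failure is inside vdsm's Python code path rather than in the bond configuration itself, which is consistent with the bond being created on the node while the follow-up capabilities refresh fails.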

09:50:03,510::vsctl::74::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,510::cmdutils::130::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None) MainProcess|jsonrpc/3::DEBUG::2020-11-06 09:50:03,522::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 netlink/events::DEBUG::2020-11-06 09:50:03,529::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140427791193856)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fb7e4a12828>>, args=(), kwargs={}) MainProcess|mpathhealth::DEBUG::2020-11-06 09:50:03,560::supervdsm_server::93::SuperVdsm.ServerCallback::(wrapper) call dmsetup_run_status with ('multipath',) {} MainProcess|mpathhealth::DEBUG::2020-11-06 09:50:03,560::commands::153::common.commands::(start) /usr/bin/taskset --cpu-list 0-23 /usr/sbin/dmsetup status --target multipath (cwd None) MainProcess|mpathhealth::DEBUG::2020-11-06 09:50:03,591::commands::98::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0 MainProcess|mpathhealth::DEBUG::2020-11-06 09:50:03,591::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) return dmsetup_run_status with b'ST4000NM0033-9ZM170_Z1Z8JNPX: 0 7814037168 multipath 2 0 0 0 1 1 A 0 1 2 8:16 A 0 0 1 \n' netlink/events::DEBUG::2020-11-06 09:50:05,529::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140427791193856)> MainProcess|jsonrpc/3::ERROR::2020-11-06 09:50:05,533::supervdsm_server::97::SuperVdsm.ServerCallback::(wrapper) Error in network_caps Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 95, in wrapper res = func(*args, **kwargs) File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 59, in network_caps return 
netswitch.configurator.netcaps(compatibility=30600) File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 505, in netcaps _add_speed_device_info(net_caps) File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 546, in _add_speed_device_info devattr['speed'] = bond.speed(devname) TypeError: 'module' object is not callable _______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/YD46XEE2XINQPZ...
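[Editor's note] The traceback above ends in `TypeError: 'module' object is not callable` at `bond.speed(devname)`. That is the error Python raises when a name that callers expect to be a function has been rebound to a module, for example when a stray submodule file shadows a same-named function in its package. A minimal, self-contained sketch of the mechanism (a hypothetical `bond` package built in a temp directory, not vdsm's actual code layout):

```python
# Sketch (assumed layout, not vdsm's real code): show how a stale
# submodule file shadows a same-named function after import.
import os
import sys
import tempfile

pkgroot = tempfile.mkdtemp()
os.mkdir(os.path.join(pkgroot, "bond"))

# bond/__init__.py exposes a speed() function, as callers expect.
with open(os.path.join(pkgroot, "bond", "__init__.py"), "w") as f:
    f.write("def speed(dev):\n    return 1000\n")

# A leftover bond/speed.py from an older install sits next to it.
with open(os.path.join(pkgroot, "bond", "speed.py"), "w") as f:
    f.write("# stale file\n")

sys.path.insert(0, pkgroot)
import bond        # here bond.speed is still the function
import bond.speed  # importing the submodule rebinds bond.speed to a module

try:
    bond.speed("eno1")
    err = ""
except TypeError as exc:
    err = str(exc)

print(err)  # "'module' object is not callable" (exact wording varies by Python version)
```

This is the same failure mode Ales points to below: a `speed.py` file that is not part of the installed rpm sitting inside the `bond/` package directory.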
Which version of vdsm do you use?

--
Ales Musil
Software Engineer - RHV Network
Red Hat EMEA <https://www.redhat.com>
amusil@redhat.com
IM: amusil <https://red.ht/sig>

I get this output on the node:

cat /var/log/vdsm/vdsm.log | grep "I am the actual vdsm"
2020-11-05 10:32:24,495+0100 INFO (MainThread) [vds] (PID: 15547) I am the actual vdsm 4.40.26.3.1 ovirtn1.5ervers.lan (4.18.0-193.19.1.el8_2.x86_64) (vdsmd:155)

I also have this:

systemctl status network.service -l
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; generated)
   Active: failed (Result: exit-code) since Fri 2020-11-06 11:23:24 CET; 21s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 498501 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE)

Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File exists
Nov 06 11:23:24 ovirtn1.5ervers.lan network[498501]: RTNETLINK answers: File exists
Nov 06 11:23:24 ovirtn1.5ervers.lan systemd[1]: network.service: Control process exited, code=exited status=1
Nov 06 11:23:24 ovirtn1.5ervers.lan systemd[1]: network.service: Failed with result 'exit-code'.
Nov 06 11:23:24 ovirtn1.5ervers.lan systemd[1]: Failed to start LSB: Bring up/down networking.

On Fri, Nov 6, 2020 at 11:34 AM Harry O <harryo.dk@gmail.com> wrote:
This should not matter, as we have been using NetworkManager since 4.4. But a couple of additional questions: Did you perform an upgrade recently on this host? If so, do you know from which version of vdsm? Is there by chance a file speed.py in /usr/lib/python3.6/site-packages/vdsm/network/link/bond/? If not, did you try to restart the vdsmd and supervdsmd services and then request the capabilities again?

Thanks,
Ales

--
Ales Musil
Software Engineer - RHV Network
Red Hat EMEA <https://www.redhat.com>
amusil@redhat.com
IM: amusil <https://red.ht/sig>

It's a brand new cluster; I want to set up bonding and trunking for the first time on the hosts and cluster, so no upgrades were done.

ls /usr/lib/python3.6/site-packages/vdsm/network/link/bond/ -a
.  ..  bond_speed.py  __init__.py  __pycache__  speed.py  sysfs_driver.py  sysfs_options_mapper.py  sysfs_options.py

Restarting the vdsmd and supervdsmd services and then requesting the capabilities again didn't work.
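[Editor's note] The listing above shows both bond_speed.py and a stray speed.py in the package directory. A generic way to check what a dotted name actually resolves to is plain Python introspection; the sketch below demonstrates it on standard-library names, since vdsm itself is not importable outside a host (on the node, the analogous check would be importing `vdsm.network.link.bond` and inspecting `bond.speed`):

```python
# Generic introspection: is an attribute a function or a module, and
# which file provides it? The same check on the affected host would
# reveal that bond.speed had become a module instead of a function.
import json
import xml.dom  # importing a submodule binds it as an attribute of its parent
import xml

print(type(json.dumps))  # <class 'function'> -- a callable, as expected
print(type(xml.dom))     # <class 'module'> -- calling this would raise TypeError
print(xml.dom.__file__)  # the file on disk that provides the name
```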

host:
OS Version: RHEL - 8.2 - 2.2004.0.2.el8
OS Description: CentOS Linux 8 (Core)
Kernel Version: 4.18.0 - 193.19.1.el8_2.x86_64
KVM Version: 4.2.0 - 29.el8.3
LIBVIRT Version: libvirt-6.0.0-25.2.el8
VDSM Version: vdsm-4.40.26.3-1.el8
SPICE Version: 0.14.2 - 1.el8
GlusterFS Version: glusterfs-7.8-1.el8
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: [N/A]
Nmstate Version: nmstate-0.2.10-1.el8
Kernel Features: MDS: (Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable), SRBDS: (Not affected), MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: usercopy/swapgs barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Full generic retpoline, STIBP: disabled, RSB filling), ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages), TSX_ASYNC_ABORT: (Not affected), SPEC_STORE_BYPASS: (Vulnerable)
VNC Encryption: Disabled
FIPS mode enabled: Disabled

On Fri, Nov 6, 2020 at 12:22 PM Harry O <harryo.dk@gmail.com> wrote:
For some reason speed.py was installed even though it should not be; looking at the rpms, the file is not there. Only a few things that might have caused that come to mind: the first is an upgrade, which you did not perform, and the second is a reinstall from an older version. Anyway, removing speed.py from this folder and speed.cpython* from __pycache__ should deal with this. After that, please restart vdsmd and supervdsmd and try to refresh the caps again.

Thanks,
Ales

--
Ales Musil
Software Engineer - RHV Network
Red Hat EMEA <https://www.redhat.com>
amusil@redhat.com
IM: amusil <https://red.ht/sig>
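[Editor's note] The cleanup steps above can be sketched as shell commands. To keep the sketch safe to run anywhere, it rebuilds the directory layout from the thread in a scratch directory; on the real host the target would be /usr/lib/python3.6/site-packages/vdsm/network/link/bond/, you would verify ownership with `rpm -qf speed.py` before deleting, and you would restart vdsmd and supervdsmd afterwards:

```shell
# Recreate the layout from the thread's `ls` output in a scratch dir so
# the removal can be demonstrated without touching a live host.
BOND_DIR=$(mktemp -d)/bond
mkdir -p "$BOND_DIR/__pycache__"
touch "$BOND_DIR/__init__.py" "$BOND_DIR/bond_speed.py" "$BOND_DIR/speed.py"
touch "$BOND_DIR/__pycache__/speed.cpython-36.pyc"

# The fix: drop the stray module and its cached bytecode.
rm -f "$BOND_DIR/speed.py" "$BOND_DIR"/__pycache__/speed.cpython*
# On the host, follow up with: systemctl restart vdsmd supervdsmd

ls "$BOND_DIR"
```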
Participants (2):
- Ales Musil
- Harry O