Issue with oVirt self-hosted engine installation 4.4
by hjadavalluva@ukaachen.de
Hello,
Good Day!
I'm Hariharan and I work as a Linux system administrator for RWTH Uniklinik, Germany. I currently manage the oVirt environment and I'm very new to this domain. I'm facing an issue with the oVirt 4.4 self-hosted engine installation on a CentOS 8 server: it fails when mounting the GlusterFS storage on the target machine. Below is the exact error I get:
Error: "changed": false, "msg": "Error: the target storage domain contains only 11.0GiB of available space while a minimum of 61.0GiB is required. If you wish to use the current target storage domain by extending it, make sure it contains nothing before adding it."
Note: the Gluster storage was installed and configured on two Linux workstations, with an additional hard disk added to one of the two. I created the Gluster volumes as replicated volumes (replica 2).
Hardware details:
device model: Dell R540
Capacity: 240 GB
I have tried every possibility I could think of, but I still face the same issue. Could you please help me resolve it with your guidance or support?
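For reference, the 61 GiB in the error is the minimum free space the hosted-engine setup demands for its dedicated storage domain, so a domain exposing only 11 GiB will always fail this check regardless of the physical disk size. A minimal sketch of that check (the 61 GiB threshold is taken from the error above; the function name and shape are illustrative, not oVirt code):

```python
REQUIRED_GIB = 61  # minimum reported by the installer error above

def check_domain_space(avail_bytes, required_gib=REQUIRED_GIB):
    """Return (available_gib, ok) for a candidate storage domain."""
    avail_gib = avail_bytes / 1024**3
    return round(avail_gib, 1), avail_gib >= required_gib

# With the 11 GiB the installer reported:
print(check_domain_space(11 * 1024**3))  # (11.0, False)
```

In practice, `df -h` on the Gluster mount point shows what the volume actually exposes; with replica 2 across bricks of different sizes, the usable capacity is bounded by the smallest brick, which may explain a 240 GB disk yielding only 11 GiB.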
Bond creation issue via hosted engine
by Harry O
Hi guys and girls,
Every time I try to create a bond from my two onboard Intel NICs (any bond type) via the hosted engine, it fails with the error "Error while executing action HostSetupNetworks: Unexpected exception". The bond still gets created on the node, but the engine's capabilities then go out of sync with the node and cannot refresh. I have to delete the bond manually on the node for the engine to be happy again. Does anyone know why?
Here are some logs from the engine log file:
[root@ovirt1-engine ~]# cat /var/log/ovirt-engine/engine.log
2020-11-06 08:29:56,230+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.OvfDataUpdater] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [] Attempting to update VMs/Templates Ovf.
2020-11-06 08:29:56,236+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Before acquiring and wait lock 'EngineLock:{exclusiveLocks='[b150e472-1f45-11eb-8e70-00163e78288d=OVF_UPDATE]', sharedLocks=''}'
2020-11-06 08:29:56,236+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Lock-wait acquired to object 'EngineLock:{exclusiveLocks='[b150e472-1f45-11eb-8e70-00163e78288d=OVF_UPDATE]', sharedLocks=''}'
2020-11-06 08:29:56,237+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected : ID: b150e472-1f45-11eb-8e70-00163e78288d Type: StoragePool
2020-11-06 08:29:56,242+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Attempting to update VM OVFs in Data Center 'Default'
2020-11-06 08:29:56,243+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Successfully updated VM OVFs in Data Center 'Default'
2020-11-06 08:29:56,243+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Attempting to update template OVFs in Data Center 'Default'
2020-11-06 08:29:56,243+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Successfully updated templates OVFs in Data Center 'Default'
2020-11-06 08:29:56,243+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Attempting to remove unneeded template/vm OVFs in Data Center 'Default'
2020-11-06 08:29:56,245+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Successfully removed unneeded template/vm OVFs in Data Center 'Default'
2020-11-06 08:29:56,245+01 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-85) [7094b20b] Lock freed to object 'EngineLock:{exclusiveLocks='[b150e472-1f45-11eb-8e70-00163e78288d=OVF_UPDATE]', sharedLocks=''}'
2020-11-06 08:29:56,284+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] START, GetStorageDeviceListVDSCommand(HostName = ovirtn3.5ervers.lan, VdsIdVDSCommandParametersBase:{hostId='4ec53a62-5cf3-479a-baf5-44c5b7624d39'}), log id: 59a1c9e
2020-11-06 08:29:56,284+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22514) [] START, GetStorageDeviceListVDSCommand(HostName = ovirtn2.5ervers.lan, VdsIdVDSCommandParametersBase:{hostId='a4904c7c-92d7-4e4f-adf7-755f3c17335d'}), log id: 735f6d30
2020-11-06 08:29:57,020+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}]
2020-11-06 08:29:57,020+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}]
2020-11-06 08:29:57,020+01 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Failed in 'GetStorageDeviceListVDS' method
2020-11-06 08:29:57,020+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}]
2020-11-06 08:29:57,057+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-22513) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirtn3.5ervers.lan command GetStorageDeviceListVDS failed: Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}
2020-11-06 08:29:57,057+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Command 'org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand' return value 'StorageDeviceListReturn:{status='Status [code=-32603, message=Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}]'}'
2020-11-06 08:29:57,057+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] HostName = ovirtn3.5ervers.lan
2020-11-06 08:29:57,057+01 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] Command 'GetStorageDeviceListVDSCommand(HostName = ovirtn3.5ervers.lan, VdsIdVDSCommandParametersBase:{hostId='4ec53a62-5cf3-479a-baf5-44c5b7624d39'})' execution failed: VDSGenericException: VDSErrorException: Failed to GetStorageDeviceListVDS, error = Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}, code = -32603
2020-11-06 08:29:57,057+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22513) [] FINISH, GetStorageDeviceListVDSCommand, return: , log id: 59a1c9e
2020-11-06 08:29:57,057+01 ERROR [org.ovirt.engine.core.bll.gluster.StorageDeviceSyncJob] (EE-ManagedThreadFactory-engine-Thread-22513) [] Exception retriving storage device from vds EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetStorageDeviceListVDS, error = Internal JSON-RPC error: {'reason': "'NoneType' object has no attribute 'iface'"}, code = -32603 (Failed with error unexpected and code 16)
2020-11-06 08:29:57,433+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GetStorageDeviceListVDSCommand] (EE-ManagedThreadFactory-engine-Thread-22514) [] FINISH, GetStorageDeviceListVDSCommand, return: [StorageDevice:{id='null', name='ST4000NM0033-9ZM170_Z1Z8JT3C', devUuid='null', fsUuid='null', vdsId='null', description='ST4000NM0033-9ZM (dm-multipath)', devType='ATA', devPath='/dev/mapper/ST4000NM0033-9ZM170_Z1Z8JT3C', fsType='null', mountPoint='null', size='3815447', canCreateBrick='true', isGlusterBrick='false'},
StorageDevice:{id='null', name='gluster_vg_sda', devUuid='9TjMR1-Mk1p-fS6M-w6sP-jEfx-FmPe-A2L5iO', fsUuid='null', vdsId='null', description='lvmvg', devType='ATA', devPath='/dev/gluster_vg_sda', fsType='null', mountPoint='null', size='3815444', canCreateBrick='false', isGlusterBrick='false'},
StorageDevice:{id='null', name='gluster_vg_sda-gluster_lv_data', devUuid='FuEGn2-BVzA-JDyB-fZPY-BlsU-QYO4-xFVmD0', fsUuid='3d5d228e-ece8-4e59-92ae-c1eb6094e493', vdsId='null', description='lvmthinlv', devType='null', devPath='/dev/mapper/gluster_vg_sda-gluster_lv_data', fsType='xfs', mountPoint='null', size='51200', canCreateBrick='false', isGlusterBrick='false'},
StorageDevice:{id='null', name='gluster_vg_sda-gluster_lv_engine', devUuid='i2p1Ps-Ix6M-hmJG-VFqJ-2zVE-w14m-71j7KW', fsUuid='6280a49f-f49b-49f6-bc92-139834aae959', vdsId='null', description='lvmlv', devType='null', devPath='/dev/mapper/gluster_vg_sda-gluster_lv_engine', fsType='xfs', mountPoint='null', size='102400', canCreateBrick='false', isGlusterBrick='false'},
StorageDevice:{id='null', name='gluster_vg_sda-gluster_lv_vmstore', devUuid='DggrnE-C127-U0la-XoyW-eA2w-LOzk-82UmrT', fsUuid='1349ed39-21ed-418a-a2c9-26e67f403d11', vdsId='null', description='lvmthinlv', devType='null', devPath='/dev/mapper/gluster_vg_sda-gluster_lv_vmstore', fsType='xfs', mountPoint='null', size='3225600', canCreateBrick='false', isGlusterBrick='false'},
StorageDevice:{id='null', name='gluster_vg_sda-gluster_thinpool_gluster_vg_sda', devUuid='x0Sy1g-WRtr-ZODw-J1il-wJcv-MbKM-Hrfcr9', fsUuid='null', vdsId='null', description='lvmthinpool', devType='null', devPath='/dev/mapper/gluster_vg_sda-gluster_thinpool_gluster_vg_sda', fsType='null', mountPoint='null', size='3680660', canCreateBrick='false', isGlusterBrick='false'},
StorageDevice:{id='null', name='sda', devUuid='null', fsUuid='fgvWOq-CB7k-qIjJ-iY15-ljEK-aaFG-CvRLQW', vdsId='null', description='ST4000NM0033-9ZM (disk)', devType='ATA', devPath='/dev/sda', fsType='lvmpv', mountPoint='null', size='3815447', canCreateBrick='false', isGlusterBrick='false'},
StorageDevice:{id='null', name='sdb', devUuid='null', fsUuid='null', vdsId='null', description='ST4000NM0033-9ZM (disk)', devType='ATA', devPath='/dev/sdb', fsType='multipath_member', mountPoint='null', size='3815447', canCreateBrick='false', isGlusterBrick='false'},
StorageDevice:{id='null', name='sdc', devUuid='null', fsUuid='545ca4c8', vdsId='null', description='Ultra Fit (disk)', devType='USB', devPath='/dev/sdc', fsType='disklabel', mountPoint='null', size='14664', canCreateBrick='false', isGlusterBrick='false'},
StorageDevice:{id='null', name='sdc1', devUuid='null', fsUuid='null', vdsId='null', description='partition', devType='USB', devPath='/dev/sdc1', fsType='zfs_member', mountPoint='null', size='14663', canCreateBrick='true', isGlusterBrick='false'}], log id: 735f6d30
2020-11-06 08:30:00,403+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetLldpVDSCommand] (default task-71) [474a11f9-ea66-4fd2-bad3-488ce157d197] START, GetLldpVDSCommand(HostName = ovirtn1.5ervers.lan, GetLldpVDSCommandParameters:{hostId='285fc148-62ed-4243-8106-ed01eff28295'}), log id: 66e6c0c1
2020-11-06 08:30:00,522+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetLldpVDSCommand] (default task-71) [474a11f9-ea66-4fd2-bad3-488ce157d197] FINISH, GetLldpVDSCommand, return: {enp0s29u1u1=LldpInfo:{enabled='false', tlvs='[]'}, eno1=LldpInfo:{enabled='false', tlvs='[]'}, enp9s0=LldpInfo:{enabled='true', tlvs='[]'}}, log id: 66e6c0c1
2020-11-06 08:30:05,665+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-75) [] START, GlusterServersListVDSCommand(HostName = ovirtn3.5ervers.lan, VdsIdVDSCommandParametersBase:{hostId='4ec53a62-5cf3-479a-baf5-44c5b7624d39'}), log id: 7f602482
2020-11-06 08:30:05,859+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-75) [] FINISH, GlusterServersListVDSCommand, return: [192.168.4.128/24:CONNECTED, ovirtn2.5ervers.lan:CONNECTED, ovirtn1.5ervers.lan:CONNECTED], log id: 7f602482
2020-11-06 08:30:05,862+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-75) [] START, GlusterVolumesListVDSCommand(HostName = ovirtn3.5ervers.lan, GlusterVolumesListVDSParameters:{hostId='4ec53a62-5cf3-479a-baf5-44c5b7624d39'}), log id: 7f8c7e41
2020-11-06 08:30:05,968+01 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-75) [] FINISH, GlusterVolumesListVDSCommand, return: {95817a31-0623-44ed-a8bc-d2f37af7644d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ac351e, 03874a87-1e23-4e54-8445-159ba27b48fe=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@25915a3c, 90de405f-60f0-401a-8bdf-7203d8db21f3=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@56a4cd2d}, log id: 7f8c7e41
2020-11-06 08:30:07,416+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Before acquiring lock-timeout 'EngineLock:{exclusiveLocks='[HOST_NETWORK285fc148-62ed-4243-8106-ed01eff28295=HOST_NETWORK]', sharedLocks=''}'
2020-11-06 08:30:07,416+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Lock-timeout acquired to object 'EngineLock:{exclusiveLocks='[HOST_NETWORK285fc148-62ed-4243-8106-ed01eff28295=HOST_NETWORK]', sharedLocks=''}'
2020-11-06 08:30:07,464+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Running command: HostSetupNetworksCommand internal: false. Entities affected : ID: 285fc148-62ed-4243-8106-ed01eff28295 Type: VDSAction group CONFIGURE_HOST_NETWORK with role type ADMIN
2020-11-06 08:30:07,464+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Before acquiring lock in order to prevent monitoring for host 'ovirtn1.5ervers.lan' from data-center 'Default'
2020-11-06 08:30:07,464+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Lock acquired, from now a monitoring of host will be skipped for host 'ovirtn1.5ervers.lan' from data-center 'Default'
2020-11-06 08:30:07,466+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] START, HostSetupNetworksVDSCommand(HostName = ovirtn1.5ervers.lan, HostSetupNetworksVdsCommandParameters:{hostId='285fc148-62ed-4243-8106-ed01eff28295', vds='Host[ovirtn1.5ervers.lan,285fc148-62ed-4243-8106-ed01eff28295]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[]', removedNetworks='[]', bonds='[CreateOrUpdateBond:{id='null', name='bond0', bondingOptions='mode=4 miimon=100 xmit_hash_policy=2', slaves='[eno1, enp9s0]'}]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='false'}), log id: 695498c2
2020-11-06 08:30:07,466+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] FINISH, HostSetupNetworksVDSCommand, return: , log id: 695498c2
2020-11-06 08:30:12,118+01 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}]
2020-11-06 08:30:12,119+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Failed in 'GetCapabilitiesVDS' method
2020-11-06 08:30:12,119+01 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Unexpected return value: Status [code=-32603, message=Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}]
2020-11-06 08:30:12,164+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirtn1.5ervers.lan command GetCapabilitiesVDS failed: Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}
2020-11-06 08:30:12,164+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand' return value 'org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturn@5ccc140e'
2020-11-06 08:30:12,164+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] HostName = ovirtn1.5ervers.lan
2020-11-06 08:30:12,164+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Command 'GetCapabilitiesVDSCommand(HostName = ovirtn1.5ervers.lan, VdsIdAndVdsVDSCommandParametersBase:{hostId='285fc148-62ed-4243-8106-ed01eff28295', vds='Host[ovirtn1.5ervers.lan,285fc148-62ed-4243-8106-ed01eff28295]'})' execution failed: VDSGenericException: VDSErrorException: Failed to GetCapabilitiesVDS, error = Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}, code = -32603
2020-11-06 08:30:12,165+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Host setup networks finished. Lock released. Monitoring can run now for host 'ovirtn1.5ervers.lan' from data-center 'Default'
2020-11-06 08:30:12,165+01 ERROR [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Command 'org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetCapabilitiesVDS, error = Internal JSON-RPC error: {'reason': "Attempt to call function: <bound method Global.getCapabilities of <vdsm.API.Global object at 0x7faebc45cfd0>> with arguments: () error: 'module' object is not callable"}, code = -32603 (Failed with error unexpected and code 16)
2020-11-06 08:30:12,222+01 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-71) [335cc174-4ec6-4f32-abf5-00b6ab8a15f4] Lock freed to object 'EngineLock:{exclusiveLocks='[HOST_NETWORK285fc148-62ed-4243-8106-ed01eff28295=HOST_NETWORK]', sharedLocks=''}'
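The underlying failure in these logs is a Python-side AttributeError inside vdsm ('NoneType' object has no attribute 'iface'): some device record came back as None before its iface attribute was read. A toy illustration of that pattern (not vdsm code) showing why the unguarded access crashes and a None guard does not:

```python
class Device:
    """Stand-in for a vdsm storage-device record (illustrative only)."""
    def __init__(self, iface):
        self.iface = iface

def iface_of(dev):
    # Unguarded access `dev.iface` raises AttributeError when dev is None,
    # which matches the error string the engine log shows.
    return dev.iface if dev is not None else "unknown"

print(iface_of(Device("scsi")))  # scsi
print(iface_of(None))            # unknown
```

The fix belongs in vdsm (or in removing whatever device confuses it, possibly the USB stick with a zfs_member partition visible in the second host's device listing), so upgrading vdsm or filing a bug with this traceback is the practical next step.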
Issue in importing ISO storage domain in Ovirt 4.3
by hjadavalluva@ukaachen.de
Hello,
I'm managing an oVirt 4.3 environment and I'm very new to this area. The setup was done by a former colleague, and right now I'm the one managing it. Recently one VM's OS got corrupted, and I have to re-install the OS on that VM to bring it back up and running. To do this, I need to import the ISO domain from the Gluster storage (this ISO domain is not present in the oVirt engine now). While importing the ISO domain, I get the following error:
"Error while executing the action: Cannot add memory connection. Memory connection 63566129-bb20-4b11-b5c2-5e943ab94bed already exists"
I already searched for this "63566129-bb20-4b11-b5c2-5e943ab94bed" image on the oVirt server, the target server and the Gluster storage server, but I couldn't find it. Could you please help me resolve this issue? I'm badly stuck here.
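Assuming the "connection" in the error refers to an engine-side storage connection record rather than a disk image, that UUID would live in the engine database, not on the storage, which would explain why no file with that name can be found on any server. A toy illustration of the duplicate check the engine appears to be making (names are illustrative, not engine code):

```python
# Existing connection records as the engine might hold them (hypothetical path):
existing = [{"id": "63566129-bb20-4b11-b5c2-5e943ab94bed", "path": "gluster-host:/iso"}]

def can_add(conn_id, connections):
    """A new connection is rejected if its id already exists."""
    return all(c["id"] != conn_id for c in connections)

print(can_add("63566129-bb20-4b11-b5c2-5e943ab94bed", existing))  # False
```

If that reading is right, removing or detaching the stale storage connection through the engine's REST API before re-importing the domain is the usual way out.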
Thanks in advance!
Best Regards,
Hariharan
is there a way to run the cockpit playbooks on the cli ?
by Rob Verduijn
Hi,
For some reason my browser and network like to produce hiccups while the
hyperconverged script and the hosted-engine install script are running.
(My prime suspect is Murphy.)
This seems to be a default feature of all browsers on any platform,
which is why I try to avoid using cockpit for complex matters
(i.e. anything more complex than process listing or viewing logs).
On the cockpit host: from which directory, with which options, and with
which playbook do I get the same result?
Rob
Fence Agent in Virtual Environment
by jb
Hello,
I would like to build a hyperconverged gluster with hosted engine in a
virtual environment, on Fedora 33 with KVM.
The setup is for testing purposes, especially for testing upgrades before
running them on the real physical servers. But I want the setup to be
as close as possible to the real environment, so the only thing
missing is a fence agent.
Is there a way to simulate power management in a virtual environment?
Jonathan
Re: Fence Agent in Virtual Environment
by Jonathan Baecker
Am 05.11.20 um 19:19 schrieb Strahil Nikolov:
> You need to enable HA for the VM.
Yes, I know this, and I had it enabled. When I set the host to maintenance
mode the VM moved to another host, but not when I killed the host.
> About the XVM , I think that you first need to install it on all hosts and then check in UI, if you can find that fence agent in "Power Management".
Thanks, I will try this!
Best Regards
Jonathan
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Thursday, 5 November 2020 at 18:41:40 GMT+2, jb <jonbae77(a)gmail.com> wrote:
>
>
>
>
>
> Yes I know, it's just a guess... I wanted to test what happens to a
> VM when I kill its host. Afterwards, I did not see the VM
> moving to another host.
>
> So I thought maybe oVirt needs the power management for that.
>
> I have read about fence_xvm, but I don't know how to configure oVirt
> with that.
>
>
>
> Am 05.11.20 um 17:11 schrieb Strahil Nikolov:
>> This is just a guess, but you might be able to install fence_xvm on all virtualized hosts.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Thursday, 5 November 2020 at 16:00:40 GMT+2, jb <jonbae77(a)gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hello,
>>
>> I would like to build a hyperconverged gluster with hosted engine in a
>> virtual environment, on Fedora 33 with KVM.
>>
>> The setup is for testing purposes, especially for testing upgrades before
>> running them on the real physical servers. But I want the setup to be
>> as close as possible to the real environment, so the only thing
>> missing is a fence agent.
>>
>> Is there a way to simulate power management in a virtual environment?
>>
>>
>> Jonathan
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/V5SHGKNLTK2...
configuration of hyperconverged setup with ssd cache for slow drives
by Rob Verduijn
Hello,
After a serious struggle I finally managed to get the oVirt hosted engine
with the hyperconverged setup to work.
However, I did not manage to get the SSD cache option for my slow SATA
drives to work.
The installation script complained about mismatching block sizes.
Is there a way to make the cockpit hyperconverged installation handle this?
Rob
fdisk -l /dev/sda
Disk /dev/sda: 238.5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
fdisk -l /dev/mapper/gluster_vg_sdc-gluster_lv_data
Disk /dev/mapper/gluster_vg_sdc-gluster_lv_data: 4.9 TiB, 5368709120000 bytes, 1310720000 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 1310720 bytes / 2621440 bytes
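The two fdisk listings above point at the likely cause: the SSD reports a 512-byte logical sector size while the Gluster LV on the SATA drives reports 4096 bytes, and dm-cache/lvmcache setups generally refuse to mix logical block sizes. A small parser over exactly the fdisk lines shown (illustrative only, not part of the installer):

```python
import re

def logical_sector_size(fdisk_text):
    """Extract (logical, physical) sector sizes from `fdisk -l` output."""
    m = re.search(r"Sector size \(logical/physical\): (\d+) bytes / (\d+) bytes",
                  fdisk_text)
    return (int(m.group(1)), int(m.group(2))) if m else None

ssd = "Sector size (logical/physical): 512 bytes / 4096 bytes"
lv = "Sector size (logical/physical): 4096 bytes / 4096 bytes"
print(logical_sector_size(ssd))  # (512, 4096)
print(logical_sector_size(ssd)[0] == logical_sector_size(lv)[0])  # False -> mismatch
```

If that is indeed the complaint, the workaround is making both devices expose the same logical size (for example by rebuilding the data LV without the 4K-only layer), since the cockpit wizard itself offers no knob for this as far as the poster reports.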
[ANN] oVirt 4.4.3 Eighth Release Candidate is now available for testing
by Lev Veyde
The oVirt Project is pleased to announce the availability of oVirt 4.4.3
Eighth Release Candidate for testing, as of November 5th, 2020.
This update is the third in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA should not require re-doing these steps if they
were already performed while upgrading from 4.4.1 to 4.4.2 GA. They only
need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts you
should be aware that after upgrading from 4.4.1 to 4.4.3 you may get your
host entering emergency mode.
In order to prevent this be sure to upgrade oVirt Engine first, then on
your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.3 (redeploy in case of already being on 4.4.3).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
6. Reboot.
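As a quick sanity aid for step 1, the filter in question is the `filter =` setting in /etc/lvm/lvm.conf; a small check like the following (illustrative only, not vdsm-tool) tells you whether one is currently set before you reboot:

```python
def has_lvm_filter(conf_text):
    """True if any non-comment line in an lvm.conf text sets a device filter."""
    for line in conf_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            continue  # commented-out filters do not count
        if stripped.startswith("filter ="):
            return True
    return False

sample = 'devices {\n    filter = ["a|^/dev/sda2$|", "r|.*|"]\n}'
print(has_lvm_filter(sample))  # True
```

Reading the real file would just be `has_lvm_filter(open("/etc/lvm/lvm.conf").read())`; vdsm-tool config-lvm-filter remains the authoritative check in step 4.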
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
You can hit:
Problem: cannot install the best update candidate for package
ovirt-engine-metrics-1.4.1.1-1.el8.noarch
- nothing provides rhel-system-roles >= 1.0-19 needed by
ovirt-engine-metrics-1.4.2-1.el8.noarch
To get rhel-system-roles >= 1.0-19 you need the
https://buildlogs.centos.org/centos/8/virt/x86_64/ovirt-44/ repo, since that
package can be promoted to release only at 4.4.3 GA.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.3 release highlights:
http://www.ovirt.org/release/4.4.3/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.3/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Re: [ovirt-devel] Re: Host's status how to change to UP when I execute Management--> Active command
by 李伏琼
Thanks very much!
-------------- Original message --------------
From: "Liran Rotenberg" <lrotenbe(a)redhat.com>
Sent: Thursday, 5 November 2020, 11:52 PM
To: "lifuqiong(a)sunyainfo.com" <lifuqiong(a)sunyainfo.com>
Cc: "users" <users(a)ovirt.org>; "devel" <devel(a)ovirt.org>
Subject: [ovirt-devel] Re: Host's status how to change to UP when I execute Management--> Active command
-----------------------------------
Hi,
The engine will monitor that host. Once it's reachable and we can get information from it, the status will change accordingly.
Please check out the HostMonitoring class.
On Thu, Nov 5, 2020 at 11:57 AM lifuqiong(a)sunyainfo.com <lifuqiong(a)sunyainfo.com> wrote:
Hi,
I checked out ovirt-engine's code on branch 4.2. When executing a host's Management --> Activate command, the oVirt engine just sets the vds's status to "Unassigned", so how does the vds's status change to "UP" in ovirt-engine? The code in ActivateVdsCommand.executeCommand() is as follows:
protected void executeCommand() {
    final VDS vds = getVds();
    try (EngineLock monitoringLock = acquireMonitorLock("Activate host")) {
        executionHandler.updateSpecificActionJobCompleted(vds.getId(), ActionType.MaintenanceVds, false);
        setSucceeded(setVdsStatus(VDSStatus.Unassigned).getSucceeded());

        if (getSucceeded()) {
            TransactionSupport.executeInNewTransaction(() -> {
                // set network to operational / non-operational
                List<Network> networks = networkDao.getAllForCluster(vds.getClusterId());
                networkClusterHelper.setStatus(vds.getClusterId(), networks);
                return null;
            });

            // Start glusterd service on the node, which would have been stopped due to maintenance
            if (vds.getClusterSupportsGlusterService()) {
                runVdsCommand(VDSCommandType.ManageGlusterService,
                        new GlusterServiceVDSParameters(vds.getId(), Arrays.asList("glusterd"), "restart"));
                // starting vdo service
                GlusterStatus isRunning = glusterUtil.isVDORunning(vds.getId());
                switch (isRunning) {
                case DOWN:
                    log.info("VDO service is down in host : '{}' with id '{}', starting VDO service",
                            vds.getHostName(),
                            vds.getId());
                    startVDOService(vds);
                    break;
                case UP:
                    log.info("VDO service is up in host : '{}' with id '{}', skipping starting of VDO service",
                            vds.getHostName(),
                            vds.getId());
                    break;
                case UNKNOWN:
                    log.info("VDO service is not installed host : '{}' with id '{}', ignoring to start VDO service",
                            vds.getHostName(),
                            vds.getId());
                    break;
                }
            }
        }
    }
}
Yours sincerely,
Mark Lee
_______________________________________________
Devel mailing list -- devel(a)ovirt.org
To unsubscribe send an email to devel-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/message/36745S77NLH...
Re: [ovirt-devel] Host's status how to change to UP when I execute Management--> Active command
by Liran Rotenberg
Hi,
The engine will monitor that host. Once it's reachable and we can get
information from it, the status will change accordingly.
Please check out the HostMonitoring class.
On Thu, Nov 5, 2020 at 11:57 AM lifuqiong(a)sunyainfo.com <
lifuqiong(a)sunyainfo.com> wrote:
>
> Hi,
> I checkout ovirt-engine's code with branch 4.2; When execute Host's
> Management --> Active command, ovirt engine just set vds's status to
> "Unassigned" but how the vds's status changed to "UP" in ovirt-engine? The
> code in ActiveVdsCommand.executeCommand() as follows:
> protected void executeCommand() {
>
> final VDS vds = getVds();
> try (EngineLock monitoringLock = acquireMonitorLock("Activate
> host")) {
> executionHandler.updateSpecificActionJobCompleted(vds.getId(),
> ActionType.MaintenanceVds, false);
>
> setSucceeded(setVdsStatus(VDSStatus.Unassigned).getSucceeded());
>
> if (getSucceeded()) {
> TransactionSupport.executeInNewTransaction(() -> {
> // set network to operational / non-operational
> List<Network> networks =
> networkDao.getAllForCluster(vds.getClusterId());
> networkClusterHelper.setStatus(vds.getClusterId(),
> networks);
> return null;
> });
>
> // Start glusterd service on the node, which would haven
> been stopped due to maintenance
> if (vds.getClusterSupportsGlusterService()) {
> runVdsCommand(VDSCommandType.ManageGlusterService,
> new GlusterServiceVDSParameters(vds.getId(),
> Arrays.asList("glusterd"), "restart"));
> // starting vdo service
> GlusterStatus isRunning =
> glusterUtil.isVDORunning(vds.getId());
> switch (isRunning) {
> case DOWN:
> log.info("VDO service is down in host : '{}' with
> id '{}', starting VDO service",
> vds.getHostName(),
> vds.getId());
> startVDOService(vds);
> break;
> case UP:
> log.info("VDO service is up in host : '{}' with
> id '{}', skipping starting of VDO service",
> vds.getHostName(),
> vds.getId());
> break;
> case UNKNOWN:
> log.info("VDO service is not installed host :
> '{}' with id '{}', ignoring to start VDO service",
> vds.getHostName(),
> vds.getId());
> break;
> }
>
> }
> }
> }
> }
>
>
> Your Sincerely
> Mark Lee
> _______________________________________________
> Devel mailing list -- devel(a)ovirt.org
> To unsubscribe send an email to devel-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/36745S77NLH...
>
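The monitoring behaviour Liran describes can be pictured as a tiny state transition: activation only parks the host in Unassigned, and the periodic monitoring cycle promotes it once the host answers. A toy sketch of the idea (illustrative only; the real logic in the engine's HostMonitoring class is far richer, with more states and error paths):

```python
def next_status(current, host_responds):
    """Toy transition: monitoring promotes Unassigned to Up once the host answers."""
    if current == "Unassigned" and host_responds:
        return "Up"
    return current  # otherwise the monitor simply retries on its next cycle

print(next_status("Unassigned", True))   # Up
print(next_status("Unassigned", False))  # Unassigned (retry next cycle)
print(next_status("Maintenance", True))  # Maintenance (activation not requested)
```

This is why ActivateVdsCommand itself never sets "UP": it only flips the status to Unassigned and releases the monitoring lock, after which the monitor observes the host and performs the promotion.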