Users
I currently have two clusters up and running under one engine: an old
cluster on 4.3 and a new cluster on 4.4. In addition to migrating from
4.3 to 4.4, we are also migrating from glusterfs to cephfs mounted as
POSIX storage (not cinderlib, though we may make that conversion after
moving to 4.4). I have run into a strange issue, though.
On the 4.3 cluster, migration works fine with any storage backend. On
the 4.4 cluster, migration works against gluster or NFS, but fails when
the VM is hosted on POSIX-mounted cephfs. Both hosts run CentOS 8.4 and
were fully updated to oVirt 4.4.7 today, as was the engine (all were
rebooted before this test as well).
It appears that the VM fails to start on the new host, but the logs
don't make it obvious why. Can anyone shed some light or suggest
further debugging?
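For anyone following along, the usual next step is the destination host's vdsm and qemu logs around the failure timestamp. A minimal triage sketch, assuming the standard vdsm/libvirt log locations and using the VM name from the logs below (adjust paths and names for your hosts); the inline sample at the end just demonstrates the ERROR filter on self-contained input:

```shell
# On the destination host, around the failure time:
#   grep -B2 -A10 'migrationCreate' /var/log/vdsm/vdsm.log
#   tail -n 50 /var/log/libvirt/qemu/my_vm_hostname.log
#   journalctl -u vdsmd --since '2021-08-03 07:11' --until '2021-08-03 07:13'
# Demonstration of the same ERROR filtering on an inline sample,
# so the pipeline itself can be run anywhere:
sample='2021-08-03 07:11:55 INFO  (jsonrpc/0) [api.virt] START migrationCreate
2021-08-03 07:11:55 ERROR (vm/1fd47e75) [virt.vm] The vm start process failed'
echo "$sample" | grep -c 'ERROR'
```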
Related engine log:
2021-08-03 07:11:51,609-07 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] Lock Acquired to object 'EngineLock:{exclusiveLocks='[1fd47e75-d708-43e4-ac0f-67bd28dceefd=VM]', sharedLocks=''}'
2021-08-03 07:11:51,679-07 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 1fd47e75-d708-43e4-ac0f-67bd28dceefd Type: VMAction group MIGRATE_VM with role type USER
2021-08-03 07:11:51,738-07 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='6ec548e6-9a2a-4885-81da-74d0935b7ba5', vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', srcHost='ovirt_host1', dstVdsId='6c31c294-477d-4fa8-b6ff-12e189918f69', dstHost='ovirt_host2:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='2500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.1.88.85'}), log id: 67f63342
2021-08-03 07:11:51,739-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] START, MigrateBrokerVDSCommand(HostName = ovirt_host1, MigrateVDSCommandParameters:{hostId='6ec548e6-9a2a-4885-81da-74d0935b7ba5', vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', srcHost='ovirt_host1', dstVdsId='6c31c294-477d-4fa8-b6ff-12e189918f69', dstHost='ovirt_host2:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='2500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.1.88.85'}), log id: 37ab0828
2021-08-03 07:11:51,741-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] FINISH, MigrateBrokerVDSCommand, return: , log id: 37ab0828
2021-08-03 07:11:51,743-07 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 67f63342
2021-08-03 07:11:51,750-07 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-34) [1b2d4416-30f0-452d-b689-291f3b7f7482] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: my_vm_hostname, Source: ovirt_host1, Destination: ovirt_host2, User: ebyrne@FreeIPA).
2021-08-03 07:11:55,736-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd' was reported as Down on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2)
2021-08-03 07:11:55,736-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd'(my_vm_hostname) was unexpectedly detected as 'Down' on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2) (expected on '6ec548e6-9a2a-4885-81da-74d0935b7ba5')
2021-08-03 07:11:55,736-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-21) [28d98b26] START, DestroyVDSCommand(HostName = ovirt_host2, DestroyVmVDSCommandParameters:{hostId='6c31c294-477d-4fa8-b6ff-12e189918f69', vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 110ec6aa
2021-08-03 07:11:55,911-07 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-21) [28d98b26] FINISH, DestroyVDSCommand, return: , log id: 110ec6aa
2021-08-03 07:11:55,911-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd'(my_vm_hostname) was unexpectedly detected as 'Down' on VDS '6c31c294-477d-4fa8-b6ff-12e189918f69'(ovirt_host2) (expected on '6ec548e6-9a2a-4885-81da-74d0935b7ba5')
2021-08-03 07:11:55,911-07 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [28d98b26] Migration of VM 'my_vm_hostname' to host 'ovirt_host2' failed: VM destroyed during the startup.
2021-08-03 07:11:55,913-07 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-15) [28d98b26] VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd'(my_vm_hostname) moved from 'MigratingFrom' --> 'Paused'
2021-08-03 07:11:55,933-07 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-15) [28d98b26] EVENT_ID: VM_PAUSED(1,025), VM my_vm_hostname has been paused.
2021-08-03 07:11:55,940-07 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-15) [28d98b26] EVENT_ID: VM_PAUSED_ERROR(139), VM my_vm_hostname has been paused due to unknown storage error.
2021-08-03 07:11:55,946-07 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-15) [28d98b26] Lock freed to object 'EngineLock:{exclusiveLocks='[1fd47e75-d708-43e4-ac0f-67bd28dceefd=VM]', sharedLocks=''}'
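As context for the log above, the convergenceSchedule in the MigrateVDSCommand parameters is oVirt's auto-converge policy: start with 100 ms of allowed downtime, raise it at each stalling event (150, 200, 300, 400, 500 ms), and abort at the limit=-1 step. A quick sketch decoding the logged string; the regex is mine for illustration, not an oVirt API:

```python
import re

# convergenceSchedule string as logged by MigrateVDSCommand (copied from
# the engine log above; the init step sets the initial 100 ms downtime).
schedule = ("[init=[{name=setDowntime, params=[100]}], "
            "stalling=[{limit=1, action={name=setDowntime, params=[150]}}, "
            "{limit=2, action={name=setDowntime, params=[200]}}, "
            "{limit=3, action={name=setDowntime, params=[300]}}, "
            "{limit=4, action={name=setDowntime, params=[400]}}, "
            "{limit=6, action={name=setDowntime, params=[500]}}, "
            "{limit=-1, action={name=abort, params=[]}}]]")

# Each stalling entry means: after `limit` stall events, run `action`.
steps = re.findall(
    r"\{limit=(-?\d+), action=\{name=(\w+), params=\[(\d*)\]\}\}", schedule)
for limit, name, param in steps:
    print(limit, name, param)
```

The migration here failed during VM startup on the destination, well before any of these stalling steps could apply, which points at storage/startup rather than convergence.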
Vdsm log from the oVirt host being migrated to (the destination):
2021-08-03 07:11:51,744-0700 INFO (Reactor thread) [ProtocolDetector.AcceptorImpl] Accepted connection from ::ffff:10.1.2.199:57742 (protocoldetector:61)
2021-08-03 07:11:51,749-0700 WARN (Reactor thread) [vds.dispatcher] unhandled write event (betterAsyncore:184)
2021-08-03 07:11:51,749-0700 INFO (Reactor thread) [ProtocolDetector.Detector] Detected protocol stomp from ::ffff:10.1.2.199:57742 (protocoldetector:125)
2021-08-03 07:11:51,749-0700 INFO (Reactor thread) [Broker.StompAdapter] Processing CONNECT request (stompserver:95)
2021-08-03 07:11:51,750-0700 INFO (JsonRpc (StompReactor)) [Broker.StompAdapter] Subscribe command received (stompserver:124)
2021-08-03 07:11:51,791-0700 WARN (jsonrpc/7) [root] ping was deprecated in favor of ping2 and confirmConnectivity (API:1372)
2021-08-03 07:11:51,879-0700 INFO (jsonrpc/0) [api.virt] START migrationCreate(params={'_srcDomXML': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAgentAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:min
GuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:vo
lumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-
a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n
</system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot
>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' r
elabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multif
unction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <alias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n
<model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/
>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' functi
on=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n <alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>
\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\
'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n
<alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n <alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\'
bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keymap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\'
vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\' relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'da
c\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n', 'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd', 'xml': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:g
uestAgentAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n
<ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.8
8.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.
2105.el8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis
=\'utc\'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center
/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>
\n <model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n
<alias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' functi
on=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pc
ie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/
>\n <alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'
/>\n </controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n
<source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\
n <alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199
\' keymap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'se
linux\' relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n', 'elapsedTimeOffset': 89141.10990166664, 'enableGuestEvents': True, 'migrationDest': 'libvirt'}, incomingLimit=2) from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:48)
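For anyone trying to read the domain XML above: vdsm logs it as an escaped Python string literal (`\n`, `\'`), which makes it nearly impossible to scan by eye. A small helper can undo the escaping and pretty-print the first `<domain>` blob from a log excerpt. This is just a sketch for convenience; the function name is my own and it assumes the vdsm-style repr escapes shown above.

```python
import re
import xml.dom.minidom

def extract_domain_xml(log_text: str) -> str:
    """Pull the first escaped <domain>...</domain> blob out of a vdsm
    log excerpt and return it pretty-printed for reading."""
    m = re.search(r"<domain .*?</domain>", log_text, re.S)
    if not m:
        raise ValueError("no <domain> element found in the log text")
    # Undo the Python-repr escapes (\n, \') that the logger applied
    # when it serialized the XML into the log line.
    raw = m.group(0).replace("\\n", "\n").replace("\\'", "'")
    return xml.dom.minidom.parseString(raw).toprettyxml(indent="  ")
```

Feeding it the contents of the log file (e.g. via `extract_domain_xml(open("vdsm.log").read())`) prints the guest definition with normal indentation, which makes it much easier to compare the source and destination XML during a failed migration.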
2021-08-03 07:11:51,879-0700 INFO (jsonrpc/0) [api.virt] START create(vmParams={'_srcDomXML': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAgentAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuarant
eedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>
\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53
511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </syste
m>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restar
t</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=
\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction
=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <alias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <mode
l name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n
<address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x
2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n <alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n
<model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\'
slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias
name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n <alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'p
s2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keymap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'
32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\' relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' rel
abel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n', 'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd', 'xml': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAge
ntAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt
-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10
.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el
8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\
'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10
.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n
<model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <a
lias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x
6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root
-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n
<alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n
</controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <so
urce mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n
<alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keym
ap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\'
relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n', 'elapsedTimeOffset': 89141.10990166664, 'enableGuestEvents': True, 'migrationDest': 'libvirt'}) from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:48)
2021-08-03 07:11:51,883-0700 INFO (jsonrpc/0) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') using a computed convergence schedule for a legacy migration: {'init': [{'params': ['101'], 'name': 'setDowntime'}], 'stalling': [{'action': {'params': ['104'], 'name': 'setDowntime'}, 'limit': 1}, {'action': {'params': ['120'], 'name': 'setDowntime'}, 'limit': 2}, {'action': {'params': ['189'], 'name': 'setDowntime'}, 'limit': 3}, {'action': {'params': ['500'], 'name': 'setDowntime'}, 'limit': 4}, {'action': {'params': ['500'], 'name': 'setDowntime'}, 'limit': 42}, {'action': {'params': [], 'name': 'abort'}, 'limit': -1}]} (migration:161)
2021-08-03 07:11:51,884-0700 INFO (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') VM wrapper has started (vm:2832)
2021-08-03 07:11:51,884-0700 INFO (jsonrpc/0) [api.virt] FINISH create return={'vmList': {'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd', 'status': 'Migration Destination', 'statusTime': '2158818086', 'xml': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAgentAPIVersion>\n <ovirt-vm:jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:lau
nchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-0
0163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6
f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el8</entry>\n <entry name=\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry n
ame=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n <timer name=\'rtc\' tickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>
\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4c
ce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'
pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <alias name=\'pci.4\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' func
tion=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n
<model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n
<alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n <alias name=\'pci.14\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\'
function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <al
ias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-
guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n <alias name=\'input0\'/>\n <address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <inp
ut type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keymap=\'en-us\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2
.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\' relabel=\'yes\'>\n <label>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system
_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n'}, 'status': {'code': 0, 'message': 'Done'}} from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:54)
2021-08-03 07:11:51,885-0700 INFO (vm/1fd47e75) [vdsm.api] START getVolumeSize(sdUUID='e8ec5645-fc1b-4d64-a145-44aa8ac5ef48', spUUID='2948c860-9bdf-11e8-a6b3-00163e0419f0', imgUUID='eb15970b-7b94-4cce-ab44-50f57850aa7f', volUUID='6f82b02d-8c22-4d50-a30e-53511776354c', options=None) from=internal, task_id=1c8d4900-c5c9-44d8-aeac-d11749b2fcae (api:48)
2021-08-03 07:11:51,890-0700 INFO (vm/1fd47e75) [vdsm.api] FINISH getVolumeSize return={'apparentsize': '52031193088', 'truesize': '52031193088'} from=internal, task_id=1c8d4900-c5c9-44d8-aeac-d11749b2fcae (api:54)
2021-08-03 07:11:51,890-0700 INFO (vm/1fd47e75) [vds] prepared volume path: (clientIF:518)
2021-08-03 07:11:51,890-0700 INFO (vm/1fd47e75) [vdsm.api] START prepareImage(sdUUID='e8ec5645-fc1b-4d64-a145-44aa8ac5ef48', spUUID='2948c860-9bdf-11e8-a6b3-00163e0419f0', imgUUID='eb15970b-7b94-4cce-ab44-50f57850aa7f', leafUUID='6f82b02d-8c22-4d50-a30e-53511776354c', allowIllegal=False) from=internal, task_id=52b45a2f-1664-4b27-931c-4e4b81d39389 (api:48)
2021-08-03 07:11:51,923-0700 INFO (vm/1fd47e75) [storage.StorageDomain] Fixing permissions on /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c (fileSD:624)
2021-08-03 07:11:51,924-0700 INFO (vm/1fd47e75) [storage.StorageDomain] Creating domain run directory '/run/vdsm/storage/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48' (fileSD:578)
2021-08-03 07:11:51,924-0700 INFO (vm/1fd47e75) [storage.fileUtils] Creating directory: /run/vdsm/storage/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48 mode: None (fileUtils:201)
2021-08-03 07:11:51,924-0700 INFO (vm/1fd47e75) [storage.StorageDomain] Creating symlink from /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f to /run/vdsm/storage/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/eb15970b-7b94-4cce-ab44-50f57850aa7f (fileSD:581)
2021-08-03 07:11:51,939-0700 INFO (vm/1fd47e75) [vdsm.api] FINISH prepareImage return={'path': '/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c', 'info': {'type': 'file', 'path': '/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c'}, 'imgVolumesInfo': [{'domainID': 'e8ec5645-fc1b-4d64-a145-44aa8ac5ef48', 'imageID': 'eb15970b-7b94-4cce-ab44-50f57850aa7f', 'volumeID': '6f82b02d-8c22-4d50-a30e-53511776354c', 'path': '/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c', 'leasePath': '/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b
-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease', 'leaseOffset': 0}]} from=internal, task_id=52b45a2f-1664-4b27-931c-4e4b81d39389 (api:54)
2021-08-03 07:11:51,939-0700 INFO (vm/1fd47e75) [vds] prepared volume path: /rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c (clientIF:518)
2021-08-03 07:11:51,940-0700 INFO (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Enabling drive monitoring (drivemonitor:59)
2021-08-03 07:11:51,942-0700 WARN (vm/1fd47e75) [root] Attempting to add an existing net user: ovirtmgmt/1fd47e75-d708-43e4-ac0f-67bd28dceefd (libvirtnetwork:192)
2021-08-03 07:11:52,018-0700 INFO (jsonrpc/0) [root] /usr/libexec/vdsm/hooks/before_vm_migrate_destination/50_vhostmd: rc=0 err=b'' (hooks:122)
2021-08-03 07:11:52,018-0700 INFO (jsonrpc/0) [api.virt] FINISH migrationCreate return={'status': {'code': 0, 'message': 'Done'}, 'migrationPort': 0, 'params': {'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd', 'status': 'Migration Destination', 'statusTime': '2158818086', 'xml': '<domain type=\'kvm\' id=\'35\' xmlns:qemu=\'http://libvirt.org/schemas/domain/qemu/1.0\'>\n <name>my_vm_hostname</name>\n <uuid>1fd47e75-d708-43e4-ac0f-67bd28dceefd</uuid>\n <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ns1="http://ovirt.org/vm/1.0">\n <ns0:qos/>\n <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">\n <ovirt-vm:balloonTarget type="int">8388608</ovirt-vm:balloonTarget>\n <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>\n <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>\n <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>\n <ovirt-vm:guestAgentAPIVersion type="int">3</ovirt-vm:guestAgentAPIVersion>\n <ovirt-vm:
jobs>{}</ovirt-vm:jobs>\n <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>\n <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>\n <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>\n <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>\n <ovirt-vm:startTime type="float">1627910770.633493</ovirt-vm:startTime>\n <ovirt-vm:device alias="ua-827b6719-b206-46fe-b214-f6e15649abad" mac_address="00:1a:4a:16:01:35">\n <ovirt-vm:network>UserDev</ovirt-vm:network>\n <ovirt-vm:custom>\n <ovirt-vm:queues>4</ovirt-vm:queues>\n </ovirt-vm:custom>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="vda">\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:guestName>/dev/vda</ovirt-vm:guestName>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:managed type="bool">False
</ovirt-vm:managed>\n <ovirt-vm:poolID>2948c860-9bdf-11e8-a6b3-00163e0419f0</ovirt-vm:poolID>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n <ovirt-vm:specParams>\n <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread>\n </ovirt-vm:specParams>\n <ovirt-vm:volumeChain>\n <ovirt-vm:volumeChainNode>\n <ovirt-vm:domainID>e8ec5645-fc1b-4d64-a145-44aa8ac5ef48</ovirt-vm:domainID>\n <ovirt-vm:imageID>eb15970b-7b94-4cce-ab44-50f57850aa7f</ovirt-vm:imageID>\n <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>\n <ovirt-vm:leasePath>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c.lease</ovirt-vm:leasePath>\n <ovirt-vm:path>/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.77:_vmstore/e8ec5645-fc
1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:path>\n <ovirt-vm:volumeID>6f82b02d-8c22-4d50-a30e-53511776354c</ovirt-vm:volumeID>\n </ovirt-vm:volumeChainNode>\n </ovirt-vm:volumeChain>\n </ovirt-vm:device>\n <ovirt-vm:device devtype="disk" name="sdc">\n <ovirt-vm:managed type="bool">False</ovirt-vm:managed>\n </ovirt-vm:device>\n</ovirt-vm:vm>\n </metadata>\n <maxMemory slots=\'16\' unit=\'KiB\'>16777216</maxMemory>\n <memory unit=\'KiB\'>8388608</memory>\n <currentMemory unit=\'KiB\'>8388608</currentMemory>\n <vcpu placement=\'static\' current=\'4\'>16</vcpu>\n <iothreads>1</iothreads>\n <resource>\n <partition>/machine</partition>\n </resource>\n <sysinfo type=\'smbios\'>\n <system>\n <entry name=\'manufacturer\'>oVirt</entry>\n <entry name=\'product\'>RHEL</entry>\n <entry name=\'version\'>8.4-1.2105.el8</entry>\n <entry name=
\'serial\'>ce917200-c887-11ea-8000-3cecef30037a</entry>\n <entry name=\'uuid\'>1fd47e75-d708-43e4-ac0f-67bd28dceefd</entry>\n <entry name=\'family\'>oVirt</entry>\n </system>\n </sysinfo>\n <os>\n <type arch=\'x86_64\' machine=\'pc-q35-rhel8.4.0\'>hvm</type>\n <smbios mode=\'sysinfo\'/>\n </os>\n <features>\n <acpi/>\n </features>\n <cpu mode=\'custom\' match=\'exact\' check=\'full\'>\n <model fallback=\'forbid\'>Skylake-Server-noTSX-IBRS</model>\n <topology sockets=\'16\' dies=\'1\' cores=\'1\' threads=\'1\'/>\n <feature policy=\'require\' name=\'ssbd\'/>\n <feature policy=\'require\' name=\'md-clear\'/>\n <feature policy=\'disable\' name=\'mpx\'/>\n <feature policy=\'require\' name=\'hypervisor\'/>\n <feature policy=\'require\' name=\'pku\'/>\n <numa>\n <cell id=\'0\' cpus=\'0-15\' memory=\'8388608\' unit=\'KiB\'/>\n </numa>\n </cpu>\n <clock offset=\'variable\' adjustment=\'0\' basis=\'utc\'>\n <timer name=\'rtc\' t
ickpolicy=\'catchup\'/>\n <timer name=\'pit\' tickpolicy=\'delay\'/>\n <timer name=\'hpet\' present=\'no\'/>\n </clock>\n <on_poweroff>destroy</on_poweroff>\n <on_reboot>restart</on_reboot>\n <on_crash>destroy</on_crash>\n <pm>\n <suspend-to-mem enabled=\'no\'/>\n <suspend-to-disk enabled=\'no\'/>\n </pm>\n <devices>\n <emulator>/usr/libexec/qemu-kvm</emulator>\n <disk type=\'file\' device=\'cdrom\'>\n <driver name=\'qemu\' error_policy=\'report\'/>\n <source startupPolicy=\'optional\'/>\n <target dev=\'sdc\' bus=\'sata\'/>\n <readonly/>\n <alias name=\'ua-bc49b2e3-3b67-4ceb-817d-5cbc2e433ad4\'/>\n <address type=\'drive\' controller=\'0\' bus=\'0\' target=\'0\' unit=\'2\'/>\n </disk>\n <disk type=\'file\' device=\'disk\' snapshot=\'no\'>\n <driver name=\'qemu\' type=\'qcow2\' cache=\'none\' error_policy=\'stop\' io=\'threads\' iothread=\'1\'/>\n <source file=\'/rhev/data-center/mnt/10.1.88.75,10.1.88.76,10.1.88.7
7:_vmstore/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/images/eb15970b-7b94-4cce-ab44-50f57850aa7f/6f82b02d-8c22-4d50-a30e-53511776354c\' index=\'5\'>\n <seclabel model=\'dac\' relabel=\'no\'/>\n </source>\n <backingStore/>\n <target dev=\'vda\' bus=\'virtio\'/>\n <serial>eb15970b-7b94-4cce-ab44-50f57850aa7f</serial>\n <boot order=\'1\'/>\n <alias name=\'ua-eb15970b-7b94-4cce-ab44-50f57850aa7f\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x05\' slot=\'0x00\' function=\'0x0\'/>\n </disk>\n <controller type=\'virtio-serial\' index=\'0\' ports=\'16\'>\n <alias name=\'ua-02742c22-2ab3-475a-8db9-26c05cb93195\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x02\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'pci\' index=\'0\' model=\'pcie-root\'>\n <alias name=\'pcie.0\'/>\n </controller>\n <controller type=\'pci\' index=\'1\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port
\'/>\n <target chassis=\'1\' port=\'0x10\'/>\n <alias name=\'pci.1\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'2\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'2\' port=\'0x11\'/>\n <alias name=\'pci.2\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'3\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'3\' port=\'0x12\'/>\n <alias name=\'pci.3\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'4\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'4\' port=\'0x13\'/>\n <alias name=\'pci.4\'/>\n
<address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'5\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'5\' port=\'0x14\'/>\n <alias name=\'pci.5\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'6\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'6\' port=\'0x15\'/>\n <alias name=\'pci.6\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'7\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'7\' port=\'0x16\'/>\n <alias name=\'pci.7\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x6\'/>\n </controller>\n
<controller type=\'pci\' index=\'8\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'8\' port=\'0x17\'/>\n <alias name=\'pci.8\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x02\' function=\'0x7\'/>\n </controller>\n <controller type=\'pci\' index=\'9\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'9\' port=\'0x18\'/>\n <alias name=\'pci.9\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x0\' multifunction=\'on\'/>\n </controller>\n <controller type=\'pci\' index=\'10\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'10\' port=\'0x19\'/>\n <alias name=\'pci.10\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x1\'/>\n </controller>\n <controller type=\'pci\' index=\'11\' model=\'pcie-root-port\'>\n <model name=\
'pcie-root-port\'/>\n <target chassis=\'11\' port=\'0x1a\'/>\n <alias name=\'pci.11\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x2\'/>\n </controller>\n <controller type=\'pci\' index=\'12\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'12\' port=\'0x1b\'/>\n <alias name=\'pci.12\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x3\'/>\n </controller>\n <controller type=\'pci\' index=\'13\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'13\' port=\'0x1c\'/>\n <alias name=\'pci.13\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x4\'/>\n </controller>\n <controller type=\'pci\' index=\'14\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'14\' port=\'0x1d\'/>\n <alias name=\'pci.14\'/>\n
<address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x5\'/>\n </controller>\n <controller type=\'pci\' index=\'15\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'15\' port=\'0x1e\'/>\n <alias name=\'pci.15\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x6\'/>\n </controller>\n <controller type=\'pci\' index=\'16\' model=\'pcie-root-port\'>\n <model name=\'pcie-root-port\'/>\n <target chassis=\'16\' port=\'0x1f\'/>\n <alias name=\'pci.16\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x03\' function=\'0x7\'/>\n </controller>\n <controller type=\'scsi\' index=\'0\' model=\'virtio-scsi\'>\n <driver iothread=\'1\'/>\n <alias name=\'ua-8f6fa789-f148-4383-b377-4111ce6f4cfe\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x04\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controll
er type=\'usb\' index=\'0\' model=\'qemu-xhci\' ports=\'8\'>\n <alias name=\'ua-b67dee00-f4ea-40ad-9a82-df9e7506c247\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x03\' slot=\'0x00\' function=\'0x0\'/>\n </controller>\n <controller type=\'sata\' index=\'0\'>\n <alias name=\'ide\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x1f\' function=\'0x2\'/>\n </controller>\n <interface type=\'bridge\'>\n <mac address=\'00:1a:4a:16:01:35\'/>\n <source bridge=\'UserDev\'/>\n <target dev=\'vnet36\'/>\n <model type=\'virtio\'/>\n <driver name=\'vhost\' queues=\'4\'/>\n <filterref filter=\'vdsm-no-mac-spoofing\'/>\n <link state=\'up\'/>\n <mtu size=\'1500\'/>\n <alias name=\'ua-827b6719-b206-46fe-b214-f6e15649abad\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x01\' slot=\'0x00\' function=\'0x0\'/>\n </interface>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/va
r/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.ovirt-guest-agent.0\'/>\n <target type=\'virtio\' name=\'ovirt-guest-agent.0\' state=\'connected\'/>\n <alias name=\'channel0\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'1\'/>\n </channel>\n <channel type=\'unix\'>\n <source mode=\'bind\' path=\'/var/lib/libvirt/qemu/channels/1fd47e75-d708-43e4-ac0f-67bd28dceefd.org.qemu.guest_agent.0\'/>\n <target type=\'virtio\' name=\'org.qemu.guest_agent.0\' state=\'connected\'/>\n <alias name=\'channel1\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'2\'/>\n </channel>\n <channel type=\'spicevmc\'>\n <target type=\'virtio\' name=\'com.redhat.spice.0\' state=\'disconnected\'/>\n <alias name=\'channel2\'/>\n <address type=\'virtio-serial\' controller=\'0\' bus=\'0\' port=\'3\'/>\n </channel>\n <input type=\'tablet\' bus=\'usb\'>\n <alias name=\'input0\'/>\n
<address type=\'usb\' bus=\'0\' port=\'1\'/>\n </input>\n <input type=\'mouse\' bus=\'ps2\'>\n <alias name=\'input1\'/>\n </input>\n <input type=\'keyboard\' bus=\'ps2\'>\n <alias name=\'input2\'/>\n </input>\n <graphics type=\'spice\' port=\'5964\' tlsPort=\'5968\' autoport=\'yes\' listen=\'10.1.2.199\' passwdValidTo=\'1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n <channel name=\'main\' mode=\'secure\'/>\n <channel name=\'display\' mode=\'secure\'/>\n <channel name=\'inputs\' mode=\'secure\'/>\n <channel name=\'cursor\' mode=\'secure\'/>\n <channel name=\'playback\' mode=\'secure\'/>\n <channel name=\'record\' mode=\'secure\'/>\n <channel name=\'smartcard\' mode=\'secure\'/>\n <channel name=\'usbredir\' mode=\'secure\'/>\n </graphics>\n <graphics type=\'vnc\' port=\'5969\' autoport=\'yes\' listen=\'10.1.2.199\' keymap=\'en-us\' passwdValidTo=\'
1970-01-01T00:00:01\'>\n <listen type=\'network\' address=\'10.1.2.199\' network=\'vdsm-ovirtmgmt\'/>\n </graphics>\n <video>\n <model type=\'qxl\' ram=\'65536\' vram=\'32768\' vgamem=\'16384\' heads=\'1\' primary=\'yes\'/>\n <alias name=\'ua-703e6733-2d30-4b7d-b51a-173ba8e8348b\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x00\' slot=\'0x01\' function=\'0x0\'/>\n </video>\n <memballoon model=\'virtio\'>\n <stats period=\'5\'/>\n <alias name=\'ua-24df8012-5e0f-4b31-bc6a-0783da1963ee\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x06\' slot=\'0x00\' function=\'0x0\'/>\n </memballoon>\n <rng model=\'virtio\'>\n <backend model=\'random\'>/dev/urandom</backend>\n <alias name=\'ua-2f495af7-c0c2-49e4-b34f-bd50306f6871\'/>\n <address type=\'pci\' domain=\'0x0000\' bus=\'0x07\' slot=\'0x00\' function=\'0x0\'/>\n </rng>\n </devices>\n <seclabel type=\'dynamic\' model=\'selinux\' relabel=\'yes\'>\n <label
>system_u:system_r:svirt_t:s0:c229,c431</label>\n <imagelabel>system_u:object_r:svirt_image_t:s0:c229,c431</imagelabel>\n </seclabel>\n <seclabel type=\'dynamic\' model=\'dac\' relabel=\'yes\'>\n <label>+107:+107</label>\n <imagelabel>+107:+107</imagelabel>\n </seclabel>\n <qemu:capabilities>\n <qemu:add capability=\'blockdev\'/>\n <qemu:add capability=\'incremental-backup\'/>\n </qemu:capabilities>\n</domain>\n'}} from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:54)
2021-08-03 07:11:53,656-0700 INFO (libvirt/events) [vds] Channel state for vm_id=1fd47e75-d708-43e4-ac0f-67bd28dceefd changed from=UNKNOWN(-1) to=disconnected(2) (qemuguestagent:289)
2021-08-03 07:11:55,724-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') underlying process disconnected (vm:1134)
2021-08-03 07:11:55,724-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Release VM resources (vm:5313)
2021-08-03 07:11:55,724-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Stopping connection (guestagent:438)
2021-08-03 07:11:55,725-0700 INFO (libvirt/events) [vdsm.api] START teardownImage(sdUUID='e8ec5645-fc1b-4d64-a145-44aa8ac5ef48', spUUID='2948c860-9bdf-11e8-a6b3-00163e0419f0', imgUUID='eb15970b-7b94-4cce-ab44-50f57850aa7f', volUUID=None) from=internal, task_id=546c50e4-8889-47f3-b9ea-4c4bd8a71148 (api:48)
2021-08-03 07:11:55,725-0700 INFO (libvirt/events) [storage.StorageDomain] Removing image rundir link '/run/vdsm/storage/e8ec5645-fc1b-4d64-a145-44aa8ac5ef48/eb15970b-7b94-4cce-ab44-50f57850aa7f' (fileSD:601)
2021-08-03 07:11:55,725-0700 INFO (libvirt/events) [vdsm.api] FINISH teardownImage return=None from=internal, task_id=546c50e4-8889-47f3-b9ea-4c4bd8a71148 (api:54)
2021-08-03 07:11:55,725-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Stopping connection (guestagent:438)
2021-08-03 07:11:55,726-0700 WARN (libvirt/events) [root] Attempting to remove a non existing net user: ovirtmgmt/1fd47e75-d708-43e4-ac0f-67bd28dceefd (libvirtnetwork:207)
2021-08-03 07:11:55,726-0700 WARN (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') timestamp already removed from stats cache (vm:2539)
2021-08-03 07:11:55,726-0700 INFO (libvirt/events) [vdsm.api] START inappropriateDevices(thiefId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') from=internal, task_id=5c856e64-6b29-41aa-bc29-704ea86f9c64 (api:48)
2021-08-03 07:11:55,727-0700 INFO (libvirt/events) [vdsm.api] FINISH inappropriateDevices return=None from=internal, task_id=5c856e64-6b29-41aa-bc29-704ea86f9c64 (api:54)
2021-08-03 07:11:55,731-0700 WARN (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Couldn't destroy incoming VM: Domain not found: no domain with matching uuid '1fd47e75-d708-43e4-ac0f-67bd28dceefd' (vm:4046)
2021-08-03 07:11:55,732-0700 INFO (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Changed state to Down: VM destroyed during the startup (code=10) (vm:1895)
2021-08-03 07:11:55,733-0700 INFO (vm/1fd47e75) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Stopping connection (guestagent:438)
2021-08-03 07:11:55,737-0700 INFO (jsonrpc/1) [api.virt] START destroy(gracefulAttempts=1) from=::ffff:10.1.2.37,39674, flow_id=28d98b26, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:48)
2021-08-03 07:11:55,811-0700 INFO (jsonrpc/2) [api.virt] START destroy(gracefulAttempts=1) from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:48)
2021-08-03 07:11:55,833-0700 INFO (libvirt/events) [root] /usr/libexec/vdsm/hooks/after_vm_destroy/50_vhostmd: rc=0 err=b'' (hooks:122)
2021-08-03 07:11:55,909-0700 INFO (libvirt/events) [root] /usr/libexec/vdsm/hooks/after_vm_destroy/delete_vhostuserclient_hook: rc=0 err=b'' (hooks:122)
2021-08-03 07:11:55,909-0700 WARN (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') trying to set state to Down when already Down (vm:701)
2021-08-03 07:11:55,910-0700 INFO (libvirt/events) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Stopping connection (guestagent:438)
2021-08-03 07:11:55,910-0700 INFO (jsonrpc/1) [virt.vm] (vmId='1fd47e75-d708-43e4-ac0f-67bd28dceefd') Can't undefine disconnected VM '1fd47e75-d708-43e4-ac0f-67bd28dceefd' (vm:2533)
2021-08-03 07:11:55,910-0700 INFO (jsonrpc/1) [api.virt] FINISH destroy return={'status': {'code': 0, 'message': 'Machine destroyed'}} from=::ffff:10.1.2.37,39674, flow_id=28d98b26, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:54)
2021-08-03 07:11:55,910-0700 INFO (jsonrpc/2) [api] FINISH destroy error=Virtual machine does not exist: {'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd'} (api:129)
2021-08-03 07:11:55,910-0700 INFO (jsonrpc/2) [api.virt] FINISH destroy return={'status': {'code': 1, 'message': "Virtual machine does not exist: {'vmId': '1fd47e75-d708-43e4-ac0f-67bd28dceefd'}"}} from=::ffff:10.1.2.199,57742, vmId=1fd47e75-d708-43e4-ac0f-67bd28dceefd (api:54)
2021-08-03 07:11:55,911-0700 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call VM.destroy failed (error 1) in 0.10 seconds (__init__:312)
hello
I have a problem logging in to the ovirt-engine manager in my browser.
The warning message in the browser displays this text:
PKIX path validation failed: java.security.cert.CertPathValidatorException:
validity check failed
To solve this problem I was advised to run engine-setup,
and here is my question: will engine-setup have any impact on the
working hosts (hypervisors)?
ovirt version 4.4.4.7-1.el8
thanks
Hi folks,
today I got a problem with vdsm and selinux after updating a host:
[root@host04 ~]# nodectl check
Status: WARN
Bootloader ... OK
Layer boot entries ... OK
Valid boot entries ... OK
Mount points ... OK
Separate /var ... OK
Discard is used ... OK
Basic storage ... OK
Initialized VG ... OK
Initialized Thin Pool ... OK
Initialized LVs ... OK
Thin storage ... OK
Checking available space in thinpool ... OK
Checking thinpool auto-extend ... OK
vdsmd ... BAD
So I run:
[root@host04 ~]# /usr/libexec/vdsm/vdsmd_init_common.sh --pre-start
vdsm: Running mkdirs
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running check_is_configured
lvm is configured for vdsm
Current revision of multipath.conf detected, preserving
Managed volume database is already configured
abrt is already configured for vdsm
libvirt is already configured for vdsm
sanlock is configured for vdsm
Modules sebool are not configured
Error:
One of the modules is not configured to work with VDSM.
To configure the module use the following:
'vdsm-tool configure [--module module-name]'.
If all modules are not configured try to use:
'vdsm-tool configure --force'
(The force flag will stop the module's service and start it
afterwards automatically to load the new configuration.)
vdsm: stopped during execute check_is_configured task (task returned
with error code 1).
But running this also gave me an error:
[root@host04 ~]# vdsm-tool configure --module sebool
Checking configuration status...
Running configure...
libsepol.context_from_record: type cloud_what_var_cache_t is not defined
libsepol.context_from_record: could not create context structure
libsepol.context_from_string: could not create context structure
libsepol.sepol_context_to_sid: could not convert
system_u:object_r:cloud_what_var_cache_t:s0 to sid
invalid context system_u:object_r:cloud_what_var_cache_t:s0
libsemanage.semanage_validate_and_compile_fcontexts: setfiles returned
error code 255.
Traceback (most recent call last):
File "/usr/bin/vdsm-tool", line 209, in main
return tool_command[cmd]["command"](*args)
File "/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py", line
40, in wrapper
func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py",
line 145, in configure
_configure(c)
File "/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py",
line 92, in _configure
getattr(module, 'configure', lambda: None)()
File
"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py",
line 88, in configure
_setup_booleans(True)
File
"/usr/lib/python3.6/site-packages/vdsm/tool/configurators/sebool.py",
line 60, in _setup_booleans
sebool_obj.finish()
File "/usr/lib/python3.6/site-packages/seobject.py", line 340, in finish
self.commit()
File "/usr/lib/python3.6/site-packages/seobject.py", line 330, in commit
rc = semanage_commit(self.sh)
OSError: [Errno 0] Error
I managed to solve this by running:
[root@host04 ~]# semodule -i
/usr/share/selinux/packages/ovirt-vmconsole/ovirt_vmconsole.pp
[root@host04 ~]# vdsm-tool configure --module sebool
Checking configuration status...
Running configure...
Done configuring modules to VDSM.
Regards
--
gb
PGP Key: http://pgp.mit.edu/
Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34
So I'm running a fresh install of oVirt on a new CentOS Stream node.
I installed the OS with bonded interfaces. I bonded them during the install via anaconda.
I followed the doc here: https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_us…
When I got to the hosted-engine --deploy step, it errored out saying, "Only Team devices are present. Teaming is unsupported."
However, I'm not teaming my network adapters at all. I'm bonding them:
[root@mustafar ~]# cat /etc/sysconfig/network-scripts/ifcfg-Bond_connection_1
BONDING_OPTS="mode=balance-rr downdelay=0 miimon=1 updelay=0"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Bond connection 1"
UUID=[redacted]
DEVICE=bond0
ONBOOT=yes
IPADDR=192.168.5.83
PREFIX=24
GATEWAY=192.168.5.1
DNS1=192.168.5.2
DNS2=192.168.5.3
DNS3=192.168.5.4
DOMAIN=[redacted]
[root@mustafar ~]#
What gives with this?
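The deploy check presumably decides based on what the host's interface configs report as the device type, so one quick audit is to scan the ifcfg files for their TYPE= lines and confirm nothing is actually defined as a Team device. A minimal sketch (it parses inline samples here; on a real host you would read /etc/sysconfig/network-scripts/ifcfg-* instead):

```python
# Sketch: report the TYPE= of each interface config, to confirm whether
# anything on the host is defined as a Team device (which hosted-engine
# deployment rejects). Inline samples stand in for real ifcfg files.
def iface_type(ifcfg_text):
    for line in ifcfg_text.splitlines():
        if line.startswith("TYPE="):
            return line.split("=", 1)[1].strip('"')
    return "unknown"

bond_cfg = 'TYPE=Bond\nBONDING_MASTER=yes\nDEVICE=bond0'
team_cfg = 'TYPE=Team\nDEVICE=team0'

print(iface_type(bond_cfg))  # Bond
print(iface_type(team_cfg))  # Team
```

If everything really reports Bond, the error would point at how the deploy tooling enumerates devices rather than at the config itself.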
Hi,
I have a fresh Ovirt installation (4.4.10.7-1.el8 engine and oVirt Node
4.4.10) on a Dell VRTX chassis. There are 3 blades, two of them are
identical hardware (PowerEdge M630) and the third is a little newer
(PowerEdge M640). The third has different CPUs, more RAM, and slower
NICs. I also have a bunch of data domains some on the shared PERC
internal storage and others on an external iSCSI storage, all seems
configured correctly and all the hosts are operational.
I can migrate a VM back and forth between the first two blades without any
problem, and I can migrate a VM to the third blade. But when I migrate a VM
from the third blade to either of the other two, the task terminates
successfully and the VM is marked as up on the target host, but the VM
hangs: the console is frozen and the VM stops responding to ping.
I have no clues about why this is happening and I'm looking for
suggestions about how to debug and hopefully fix this issue.
Thanks in advance
--
gb
PGP Key: http://pgp.mit.edu/
Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34
Hello, how can I increase (extend/resize) the disk of a VM?
I can work with Ansible or the REST API.
Ansible works for me, but I have not found a manual for updating the size: https://docs.ansible.com/ansible/2.3/ovirt_disks_module.html
In the official oVirt documentation I can't find how to update it. I found only an old manual on a KB page, but it does not work on the latest version: https://www.ovirt.org/develop/release-management/features/storage/online-vi…
PUT /api/vms/{VM_ID}/disks/{DISK_ID} HTTP/1.1
Accept: application/xml
Content-type: application/xml
<disk>
<size>{NEW_SIZE_IN_BYTES}</size>
</disk>
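That PUT is from the old v3 API. On 4.x, as far as I can tell, the extend goes through the disk-attachments collection with a new provisioned_size in bytes; a sketch of building that request (the endpoint path and the IDs are assumptions to verify against your version):

```python
# Sketch: build the body for extending a disk to 50 GiB via the v4 REST
# API. The endpoint (an assumption, check your version's docs) is:
#   PUT /ovirt-engine/api/vms/{vm_id}/diskattachments/{attachment_id}
NEW_SIZE_GIB = 50
new_size_bytes = NEW_SIZE_GIB * 1024 ** 3  # provisioned_size is in bytes

body = (
    "<disk_attachment><disk>"
    f"<provisioned_size>{new_size_bytes}</provisioned_size>"
    "</disk></disk_attachment>"
)
print(new_size_bytes)  # 53687091200
```

You would send that body with Content-type: application/xml, the same as the example above. Newer versions of the Ansible ovirt_disk module also accept a larger size to extend an existing disk, but I would verify that against the module version you have.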
Thanks for advice.
Hello, I have an issue probably related to my particular implementation but I think some controls are missing
Here is the story.
I have a cluster of two nodes on 4.4.10.3 with an upgraded kernel, as the CPU (Ryzen 5) suffers from an incompatibility issue with the kernel provided by the 4.4.10.x series.
On each node there are three glusterfs "partitions" in replica mode, one for the hosted_engine, the other two are for user usage.
The third node was an old i3 workstation only used to provide the arbiter partition to the glusterfs cluster.
I installed a new server (Ryzen processor) with 4.5.0, successfully installed glusterfs 10.1 and, after removing the bricks provided by the old i3, inserted arbiter bricks implemented on glusterfs 10.1, while the replica bricks are on 8.6.
I successfully imported the new node in the ovirt engine (after updating the engine to 4.5)
The problem is that ovirt-ha-broker doesn't start, complaining that it is not possible to connect to the storage (I suppose the hosted_engine storage), so I did some digging that I'm going to show here:
####
1. The node seems to be correctly configured:
[root@ovirt-node3 devices]# vdsm-tool validate-config
SUCCESS: ssl configured to true. No conflicts
[root@ovirt-node3 devices]# vdsm-tool configure
Checking configuration status...
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts
sanlock is configured for vdsm
Managed volume database is already configured
lvm is configured for vdsm
Current revision of multipath.conf detected, preserving
Running configure...
Done configuring modules to VDSM.
[root@ovirt-node3 devices]# vdsm-tool validate-config
SUCCESS: ssl configured to true. No conflicts
####
2. The node refuses to mount via hosted-engine (same error in broker.log)
[root@ovirt-node3 devices]# hosted-engine --connect-storage
Traceback (most recent call last):
File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/connect_storage_server.py", line 30, in <module>
timeout=ohostedcons.Const.STORAGE_SERVER_TIMEOUT,
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 312, in connect_storage_server
sserver.connect_storage_server(timeout=timeout)
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py", line 451, in connect_storage_server
'Connection to storage server failed'
RuntimeError: Connection to storage server failed
#####
3. A manual mount of glusterfs works correctly
[root@ovirt-node3 devices]# grep storage /etc/ovirt-hosted-engine/hosted-engine.conf
storage=ovirt-node2.ovirt:/gveng
# The following are used only for iSCSI storage
[root@ovirt-node3 devices]#
[root@ovirt-node3 devices]# mount -t glusterfs ovirt-node2.ovirt:/gveng /mnt/tmp/
[root@ovirt-node3 devices]# ls -l /mnt/tmp
total 0
drwxr-xr-x. 6 vdsm kvm 64 Dec 15 19:04 7b8f1cc9-e3de-401f-b97f-8c281ca30482
What else should I check? Thank you, and sorry for the long message
Diego
After attaching the Storage domain, the VMs are disappeared from the VM import
by aminur.rahman@iongroup.com 20 Jun '22
Hi,
We're noticing a weird issue while re-attaching the storage domain. After re-attaching the storage domain, some VMs are completely missing from the VM Import. Before detaching the storage domain, all the VMs were shut down gracefully.
I also noticed some disks exist with no Alias under the disk import on the storage domain, and I can't import those disks. It fails to register the disk with an <UNKNOWN> error.
We're using oVirt 4.2 with multiple Dell hosts in the cluster and a Compellent SAN with iSCSI volumes.
Please kindly advise if I am missing anything before detaching the storage domain.
Thanks
I cannot log into oVirt Manager. My browser gave me a warning that the site's certificate has expired. Then when I try to log in, I receive the following error message:
"PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed"
How can I fix this problem? In advance, thank you for your help.
hosted-engine: v4.4.8.6
hosts: oVirt Node v4.4.8.3
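As a first step you can confirm which certificate actually expired by checking its notAfter date. A minimal stdlib sketch (the date string below is a stand-in for whatever `openssl x509 -noout -enddate -in <cert>` reports on your engine):

```python
import ssl
import time

# Sketch: decide whether a certificate's notAfter date is in the past.
# The string is a placeholder; substitute the value openssl reports.
not_after = "Aug  3 07:11:53 2021 GMT"
expires_at = ssl.cert_time_to_seconds(not_after)  # epoch seconds
print(expires_at < time.time())  # True -> already expired
```

When engine-setup detects expiring PKI it offers to renew the certificates, which is the usual fix for this error, though I'd confirm the exact behavior for your version before running it.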
Hi,
I used to use the vmconsole proxy, but for a while now I've been getting
this issue (currently 4.4.5):
# ssh -t -p 2222 ovirt-vmconsole(a)air.v100.abes.fr connect
ovirt-vmconsole(a)air.v100.abes.fr: Permission denied (publickey).
I found following in the engine.log
2021-04-15 17:55:43,094+02 ERROR
[org.ovirt.engine.core.services.VMConsoleProxyServlet] (default task-4)
[] Error validating ticket: :
sun.security.provider.certpath.SunCertPathBuilderException: unable to
find valid certification path to requested target
at
java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at
java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at
java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297)
at
org.ovirt.engine.core.uutils//org.ovirt.engine.core.uutils.crypto.CertificateChain.buildCertPath(CertificateChain.java:128)
at
org.ovirt.engine.core.uutils//org.ovirt.engine.core.uutils.crypto.ticket.TicketDecoder.decode(TicketDecoder.java:89)
at
deployment.engine.ear.services.war//org.ovirt.engine.core.services.VMConsoleProxyServlet.validateTicket(VMConsoleProxyServlet.java:175)
at
deployment.engine.ear.services.war//org.ovirt.engine.core.services.VMConsoleProxyServlet.doPost(VMConsoleProxyServlet.java:225)
The user key is the correct one; I use the same one with my other engines
and can successfully connect to VM consoles.
Thank you for helping
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
looking for some ISV backup software which integrates the backup API
by Nathanaël Blanchet 08 Jun '22
Hello,
We are about to change our backup provider, and I find it is a great
chance to choose a fully supported oVirt backup solution.
I currently use this python script vm-backup-scheduler
(https://github.com/wefixit-AT/oVirtBackup) but it is not the workflow
officially suggested by the community
(https://www.ovirt.org/develop/release-management/features/storage/backup-re…)
I've been looking for a long time for an ISV that supports such an API, but
the only one I found is this one :
Acronis Backup Advanced suggested here
https://access.redhat.com/ecosystem/search/#/ecosystem/Red%20Hat%20Enterpri…
I ran the trial version, but it doesn't seem to do better than the
vm-backup-scheduler script, and it doesn't seem to use the backup API
(attach a clone as a disk to an existing vm).
Can you suggest some other ISV solutions, if they exist... or
share your backup experience?
Hello everyone,
I have a replica 2 + arbiter installation, and this morning the Hosted Engine gave the following error on the UI and resumed on a different node (node3) than the one it was originally running on (node1). (The original node has more memory than the one it ended up on, but the latter had a better memory usage percentage at the time.) Also, the only way I discovered that the migration had happened and that there was an Error in Events was because I logged in to the oVirt web interface for a routine inspection. Besides that, everything was working properly and still is.
The error that popped is the following:
VM HostedEngine is down with error. Exit message: internal error: qemu unexpectedly closed the monitor:
2020-09-01T06:49:20.749126Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
2020-09-01T06:49:20.927274Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2,id=ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2,bootindex=1,write-cache=on: Failed to get "write" lock
Is another process using the image?.
From what I could gather, this concerns the following snippet from HostedEngine.xml; it is the virtio disk of the Hosted Engine:
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads' iothread='1'/>
<source file='/var/run/vdsm/storage/80f6e393-9718-4738-a14a-64cf43c3d8c2/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7'>
<seclabel model='dac' relabel='no'/>
</source>
<target dev='vda' bus='virtio'/>
<serial>d5de54b6-9f8e-4fba-819b-ebf6780757d2</serial>
<alias name='ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
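The "Failed to get \"write\" lock" line means qemu could not take its file lock on that source image, i.e. some other process still held it at startup. As a low-level probe (a sketch only: it exercises the idea on a temp file, and on a real host you would point it at the image path from the error, and only while the VM is down), you can try taking a non-blocking POSIX write lock yourself:

```python
import fcntl
import os
import tempfile

# Sketch: attempt a non-blocking POSIX write lock on a file. Failure
# (OSError) means another process holds a conflicting lock, which is
# what qemu's "Failed to get 'write' lock" indicates.
def can_write_lock(path):
    fd = os.open(path, os.O_RDWR)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.lockf(fd, fcntl.LOCK_UN)
        return True
    except OSError:
        return False
    finally:
        os.close(fd)

with tempfile.NamedTemporaryFile() as tmp:
    print(can_write_lock(tmp.name))  # True -> no conflicting lock
```

Running `lsof <image-path>` on each node should also name the process that still has the image open, which helps tell a stale qemu on node1 apart from a gluster-level problem.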
I've tried looking into the logs and the sar command, but I couldn't find anything related to the above errors or determine the reason it happened. Is this a Gluster or a QEMU problem?
The Hosted Engine had been manually migrated to node1 five days earlier.
Is there a standard practice I could follow to determine what happened and secure my system?
Thank you very much for your time,
Maria Souvalioti
Hi all.
Is it possible to configure oVirt to work with two NICs in bond/LACP
across two switches, according to the image below?
[image: LACP_Across_Two_Switchs.png]
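It can work, with one caveat worth stressing: with mode 802.3ad (LACP), the two switches must present themselves as a single LACP partner (MLAG / VLT / stacking, depending on the vendor); across two fully independent switches, only active-backup is safe. On the host side the bond itself is ordinary; a sketch of the ifcfg fragment (names and option values are assumptions to adapt):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3"
TYPE=Bond
BONDING_MASTER=yes
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
```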
Thank you all.
You guys do a wonderful job.
--
Att,
Jorge Visentini
+55 55 98432-9868
new host addition to OVN cluster fails with Connectivity check failed, rolling back
by ravi k 19 May '22
Hello,
I have an oVirt 4.3 installation with two clusters. One of the clusters has its switch type set to OVS. I'm trying to add a second host to this cluster. I did a clean install of the OS and configured bond0 and bond0.2306 as the VLAN interface. I was able to add the host to the cluster.
When I go to Setup Networks and drag ovirtmgmt onto bond0, I notice on the host that it was able to create the vdsm and br_int bridges, create the ovirtmgmt interface, and assign the IP on top of it. However, I also notice that the bond0.2306 interface still exists with the IP assigned. Then it rolls back the config, removing the bridges. I checked the supervdsm log and see that it's rolling back because "connectivity::48::root::(check) Connectivity check failed, rolling back"
I'm pasting the relevant lines from supervdsm below
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:53,381::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return network_caps with {'bridges': {}, 'bondings': {'bond0': {'ipv4addrs': [], 'active_slave': '', 'ad_aggregator_id': '1', 'netmask': '', 'ad_partner_mac': '44:38:39:ff:01:33', 'hwaddr': '7c:d3:0a:60:e9:48', 'speed': 20000, 'gateway': '', 'ipv6autoconf': True, 'addr': '', 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'switch': 'legacy', 'ipv4defaultroute': False, 'slaves': ['eno1', 'eno2'], 'ipv6gateway': '::', 'opts': {'mode': '4'}}}, 'nameservers': ['10.222.0.6', '10.333.0.6'], 'nics': {'eno1': {'permhwaddr': '7c:d3:0a:60:e9:48', 'ipv6autoconf': True, 'addr': '', 'speed': 10000, 'dhcpv6': False, 'ipv6addrs': [], 'ad_aggregator_id': '1', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr': '7c:d3:0a:60:e9:48', 'mtu': '9000', 'ipv6gateway': '::', 'gateway': ''}, 'eno2': {'permhwaddr': '7c:d3:0a:60:e9:49', 'ipv6autoco
nf': True, 'addr': '', 'speed': 10000, 'dhcpv6': False, 'ipv6addrs': [], 'ad_aggregator_id': '1', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr': '7c:d3:0a:60:e9:48', 'mtu': '9000', 'ipv6gateway': '::', 'gateway': ''}, 'eno3': {'ipv6autoconf': True, 'addr': '', 'speed': 10000, 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr': '7c:d3:0a:60:e9:4a', 'ipv6gateway': '::', 'gateway': ''}, 'eno4': {'ipv6autoconf': True, 'addr': '', 'speed': 10000, 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr': '7c:d3:0a:60:e9:4b', 'ipv6gateway': '::', 'gateway': ''}, 'enp0s20f0u1u6': {'ipv6autoconf': True, 'addr': '', 'speed': 0, 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr': '7e:d3:0a:60:e9:4f', 'ipv6gateway': '
::', 'gateway': ''}}, 'supportsIPv6': True, 'vlans': {'bond0.2306': {'iface': 'bond0', 'ipv6autoconf': True, 'addr': '10.119.6.237', 'dhcpv6': False, 'ipv6addrs': [], 'vlanid': 2306, 'mtu': '9000', 'dhcpv4': False, 'netmask': '255.255.255.0', 'ipv4defaultroute': True, 'ipv4addrs': ['10.119.6.237/24'], 'ipv6gateway': '::', 'gateway': '10.119.6.1'}}, 'networks': {}}
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,243::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call get_pti with () {}
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,243::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return get_pti with -1
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,244::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call get_retp with () {}
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,244::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return get_retp with -1
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,244::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call get_ibrs with () {}
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,245::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return get_ibrs with 1
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,245::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call get_ssbd with () {}
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,245::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return get_ssbd with -1
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,246::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call check_qemu_conf_contains with ('vnc_tls', '1') {}
MainProcess|jsonrpc/1::DEBUG::2022-04-21 11:26:54,250::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return check_qemu_conf_contains with True
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,304::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call setupNetworks with ({u'ovirtmgmt': {u'ipv6autoconf': True, u'vlan': u'2306', u'ipaddr': u'10.119.6.237', u'switch': u'ovs', u'mtu': 9000, u'bonding': u'bond0', u'dhcpv6': False, u'STP': u'no', u'bridged': u'true', u'netmask': u'255.255.255.0', u'gateway': u'10.119.6.1', u'defaultRoute': True}}, {}, {u'connectivityCheck': u'true', u'connectivityTimeout': 120, u'commitOnSuccess': False}) {}
MainProcess|jsonrpc/2::INFO::2022-04-21 11:26:55,305::api::211::root::(setupNetworks) Setting up network according to configuration: networks:{u'ovirtmgmt': {u'ipv6autoconf': True, u'vlan': u'2306', u'ipaddr': u'10.119.6.237', u'bonding': u'bond0', u'mtu': 9000, u'switch': u'ovs', u'dhcpv6': False, u'STP': u'no', u'bridged': u'true', u'netmask': u'255.255.255.0', u'gateway': u'10.119.6.1', u'defaultRoute': True}}, bondings:{}, options:{u'connectivityCheck': u'true', u'connectivityTimeout': 120, u'commitOnSuccess': False}
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,323::cmdutils::133::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,342::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,408::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,408::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,445::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::INFO::2022-04-21 11:26:55,459::netconfpersistence::69::root::(setBonding) Adding bond0({'nics': ['eno1', 'eno2'], 'switch': 'legacy', 'options': 'mode=4'})
MainProcess|jsonrpc/2::INFO::2022-04-21 11:26:55,460::netconfpersistence::58::root::(setNetwork) Adding network ovirtmgmt({u'ipv6autoconf': True, 'nameservers': ['10.222.0.6', '10.333.0.6'], u'vlan': 2306, u'ipaddr': u'10.119.6.237', u'switch': u'ovs', u'mtu': 9000, u'bonding': u'bond0', u'dhcpv6': False, 'stp': False, u'bridged': True, u'netmask': u'255.255.255.0', u'gateway': u'10.119.6.1', u'defaultRoute': True, 'bootproto': 'none'})
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,462::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-95 /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,909::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::INFO::2022-04-21 11:26:55,909::hooks::114::root::(_runHooksDir) /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe: rc=0 err=
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,911::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,911::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,947::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,953::configurator::265::root::(_remove_networks) Removing networks: []
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,955::setup::41::root::(remove_bonds) Removing bonds: []
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,958::ifacquire::70::root::(acquire) Acquiring ifaces: set([u'eno1', u'eno2', u'bond0'])
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:55,959::cmdutils::133::root::(exec_cmd) /sbin/ifdown eno1 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:56,606::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:56,609::cmdutils::133::root::(exec_cmd) /sbin/ifdown eno2 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:57,268::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:57,271::cmdutils::133::root::(exec_cmd) /sbin/ifdown bond0 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:58,940::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:58,940::setup::61::root::(edit_bonds) Editing bonds: [u'bond0']
MainProcess|jsonrpc/2::INFO::2022-04-21 11:26:58,946::netconfpersistence::69::root::(setBonding) Adding bond0({'nics': [], 'switch': u'ovs', 'options': {u'mode': '4'}})
MainProcess|jsonrpc/2::INFO::2022-04-21 11:26:58,949::sysfs_driver::104::root::(set_options) Bond bond0 options set: {u'mode': u'4'}.
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:58,950::cmdutils::133::root::(exec_cmd) /sbin/ip link set dev eno1 down (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:58,956::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::INFO::2022-04-21 11:26:59,000::sysfs_driver::78::root::(add_slaves) Slave eno1 has been added to bond bond0.
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,001::cmdutils::133::root::(exec_cmd) /sbin/ip link set dev eno2 down (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,009::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::INFO::2022-04-21 11:26:59,053::sysfs_driver::78::root::(add_slaves) Slave eno2 has been added to bond bond0.
MainProcess|jsonrpc/2::INFO::2022-04-21 11:26:59,053::netconfpersistence::69::root::(setBonding) Adding bond0({u'hwaddr': u'7c:d3:0a:60:e9:48', u'nics': [u'eno1', u'eno2'], u'switch': u'ovs', u'options': u'mode=4'})
netlink/events::DEBUG::2022-04-21 11:26:59,055::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140363790243584)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fa8fe64d610>>, args=(), kwargs={})
netlink/events::DEBUG::2022-04-21 11:26:59,057::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140363781850880)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fa8fe5e9550>>, args=(), kwargs={})
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,058::cmdutils::133::root::(exec_cmd) /sbin/ip link set dev bond0 up (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,067::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
netlink/events::DEBUG::2022-04-21 11:26:59,071::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140363781850880)>
netlink/events::DEBUG::2022-04-21 11:26:59,073::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140363781850880)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fa8fe5e91d0>>, args=(), kwargs={})
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,074::cmdutils::133::root::(exec_cmd) /sbin/ip link set dev eno1 up (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,081::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
netlink/events::DEBUG::2022-04-21 11:26:59,083::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140363781850880)>
netlink/events::DEBUG::2022-04-21 11:26:59,084::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140363781850880)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fa8fe5e91d0>>, args=(), kwargs={})
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,085::cmdutils::133::root::(exec_cmd) /sbin/ip link set dev eno2 up (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,090::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
netlink/events::DEBUG::2022-04-21 11:26:59,092::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140363781850880)>
netlink/events::DEBUG::2022-04-21 11:26:59,093::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140363790243584)>
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,219::cmdutils::133::root::(exec_cmd) /sbin/ip addr flush dev eno1 scope global (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,225::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,340::cmdutils::133::root::(exec_cmd) /sbin/ip addr flush dev eno2 scope global (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,346::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,346::setup::92::root::(add_bonds) Creating bonds: []
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,346::configurator::274::root::(_add_networks) Adding networks: [u'ovirtmgmt']
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,347::ifacquire::70::root::(acquire) Acquiring ifaces: set([u'bond0'])
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:26:59,348::cmdutils::133::root::(exec_cmd) /sbin/ifdown eno1 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:00,004::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:00,006::cmdutils::133::root::(exec_cmd) /sbin/ifdown eno2 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:00,657::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:00,660::cmdutils::133::root::(exec_cmd) /sbin/ifdown bond0 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,360::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,361::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- add-br vdsmbr_TtLDFxN1 -- set Bridge vdsmbr_TtLDFxN1 other-config:hwaddr="7a:b2:a6:5b:96:78" -- add-port vdsmbr_TtLDFxN1 bond0 -- set Port bond0 other_config:vdsm_level=southbound -- add-port vdsmbr_TtLDFxN1 ovirtmgmt -- set Port ovirtmgmt other_config:vdsm_level=northbound -- set Interface ovirtmgmt type=internal -- set Port ovirtmgmt tag=2306 -- set Interface ovirtmgmt mtu_request=9000 -- set Interface bond0 mtu_request=9000
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,361::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- add-br vdsmbr_TtLDFxN1 -- set Bridge vdsmbr_TtLDFxN1 'other-config:hwaddr="7a:b2:a6:5b:96:78"' -- add-port vdsmbr_TtLDFxN1 bond0 -- set Port bond0 other_config:vdsm_level=southbound -- add-port vdsmbr_TtLDFxN1 ovirtmgmt -- set Port ovirtmgmt other_config:vdsm_level=northbound -- set Interface ovirtmgmt type=internal -- set Port ovirtmgmt tag=2306 -- set Interface ovirtmgmt mtu_request=9000 -- set Interface bond0 mtu_request=9000 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,704::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::INFO::2022-04-21 11:27:02,705::netconfpersistence::58::root::(setNetwork) Adding network ovirtmgmt({u'ipv6autoconf': True, u'nameservers': [u'10.129.0.60', u'10.229.0.60'], u'vlan': 2306, u'ipaddr': u'10.119.6.237', u'bonding': u'bond0', u'mtu': 9000, u'switch': u'ovs', u'dhcpv6': False, u'stp': False, u'bridged': True, u'netmask': u'255.255.255.0', u'gateway': u'10.119.6.1', u'defaultRoute': True, u'bootproto': u'none'})
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,705::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,706::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,745::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,753::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- set open . external-ids:ovn-bridge-mappings="ovirtmgmt:vdsmbr_TtLDFxN1"
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,753::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- set open . 'external-ids:ovn-bridge-mappings="ovirtmgmt:vdsmbr_TtLDFxN1"' (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,794::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
netlink/events::DEBUG::2022-04-21 11:27:02,796::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140363790243584)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fa8fe64d590>>, args=(), kwargs={})
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,797::cmdutils::133::root::(exec_cmd) /sbin/ip link set dev ovirtmgmt up (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,805::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
netlink/events::DEBUG::2022-04-21 11:27:02,807::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140363790243584)>
netlink/events::DEBUG::2022-04-21 11:27:02,808::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140363790243584)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fa8fe64d590>>, args=(), kwargs={})
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,809::cmdutils::133::root::(exec_cmd) /sbin/ip link set dev bond0 up (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,816::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
netlink/events::DEBUG::2022-04-21 11:27:02,819::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140363790243584)>
netlink/events::DEBUG::2022-04-21 11:27:02,820::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140363790243584)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fa8fe64d590>>, args=(), kwargs={})
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,821::cmdutils::133::root::(exec_cmd) /sbin/ip link set dev eno1 up (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,867::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
netlink/events::DEBUG::2022-04-21 11:27:02,870::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140363790243584)>
netlink/events::DEBUG::2022-04-21 11:27:02,871::concurrent::258::root::(run) START thread <Thread(netlink/events, started daemon 140363790243584)> (func=<bound method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object at 0x7fa8fe64d590>>, args=(), kwargs={})
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,872::cmdutils::133::root::(exec_cmd) /sbin/ip link set dev eno2 up (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,920::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
netlink/events::DEBUG::2022-04-21 11:27:02,922::concurrent::261::root::(run) FINISH thread <Thread(netlink/events, stopped daemon 140363790243584)>
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,923::cmdutils::133::root::(exec_cmd) /sbin/ip addr flush dev ovirtmgmt scope global (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,930::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,931::cmdutils::133::root::(exec_cmd) /sbin/ip -4 addr add dev ovirtmgmt 10.119.6.237/255.255.255.0 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,938::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,938::cmdutils::133::root::(exec_cmd) /sbin/ip -4 route add default via 10.119.6.1 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,945::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:27:02,947::connectivity::46::root::(check) Checking connectivity...
MainProcess|jsonrpc/3::DEBUG::2022-04-21 11:27:10,538::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({u'run': 0, u'merge_across_nodes': 1},) {}
MainProcess|jsonrpc/3::DEBUG::2022-04-21 11:27:10,538::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:04,075::connectivity::48::root::(check) Connectivity check failed, rolling back
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:04,075::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 136 seconds (max pending: 3)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:04,075::ifacquire::76::root::(_rollback) Acquiring transaction failed, reverting ifaces: [u'eno1', u'eno2', u'bond0']
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:04,077::cmdutils::133::root::(exec_cmd) /sbin/ifup eno1 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:04,485::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:04,488::cmdutils::133::root::(exec_cmd) /sbin/ifup eno2 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:04,897::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:04,900::cmdutils::133::root::(exec_cmd) /sbin/ifup bond0 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,045::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = './network-functions: line 119: .: /etc/sysconfig/network-scripts/: is a directory\n'; <rc> = 0
MainProcess|jsonrpc/2::WARNING::2022-04-21 11:29:06,046::netconfpersistence::294::root::(__exit__) Failed setup transaction,reverting to last known good network.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 249, in _setup_ovs
connectivity.check(options)
File "/usr/lib/python2.7/site-packages/vdsm/network/connectivity.py", line 50, in check
'connectivity check failed')
ConfigNetworkError: (10, 'connectivity check failed')
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:06,047::api::211::root::(setupNetworks) Setting up network according to configuration: networks:{u'ovirtmgmt': {'remove': True}}, bondings:{u'bond0': {'remove': True}}, options:{'connectivityCheck': 0, 'inRollback': True}
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,061::cmdutils::133::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,079::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,153::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,153::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,190::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:06,326::netconfpersistence::58::root::(setNetwork) Adding network ovirtmgmt({'bonding': u'bond0', 'ipv6autoconf': True, 'nameservers': [], 'vlan': 2306, 'ipaddr': '10.119.6.237', 'switch': 'ovs', 'mtu': 9000, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': False, 'bridged': True, 'defaultRoute': False, 'bootproto': 'none'})
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:06,327::netconfpersistence::69::root::(setBonding) Adding bond0({'nics': ['eno1', 'eno2'], 'switch': 'legacy', 'options': 'mode=4'})
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:06,327::netconfpersistence::63::root::(removeNetwork) Removing network ovirtmgmt
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,334::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-95 /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,911::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:06,911::hooks::114::root::(_runHooksDir) /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe: rc=0 err=
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,918::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,918::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:06,956::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:07,091::configurator::265::root::(_remove_networks) Removing networks: [u'ovirtmgmt']
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:07,219::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- set Interface bond0 mtu_request=1500
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:07,220::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- set Interface bond0 mtu_request=1500 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:07,425::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:07,426::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- del-port ovirtmgmt -- del-port bond0 -- del-br vdsmbr_TtLDFxN1
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:07,426::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- del-port ovirtmgmt -- del-port bond0 -- del-br vdsmbr_TtLDFxN1 (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:07,949::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:07,950::netconfpersistence::65::root::(removeNetwork) Network ovirtmgmt not found for removal
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:07,951::setup::41::root::(remove_bonds) Removing bonds: [u'bond0']
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:08,165::sysfs_driver::69::root::(destroy) Bond bond0 has been destroyed.
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,166::netconfpersistence::76::root::(removeBonding) bond0 not found for removal
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,172::ifacquire::70::root::(acquire) Acquiring ifaces: set([])
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,172::setup::61::root::(edit_bonds) Editing bonds: []
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,172::setup::92::root::(add_bonds) Creating bonds: []
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,172::configurator::274::root::(_add_networks) Adding networks: []
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,172::ifacquire::70::root::(acquire) Acquiring ifaces: set([])
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,173::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,173::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- list Bridge -- list Port -- list Interface (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,211::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,217::vsctl::68::root::(commit) Executing commands: /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- set open . external-ids:ovn-bridge-mappings=""
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,217::cmdutils::133::root::(exec_cmd) /usr/bin/ovs-vsctl --timeout=5 --oneline --format=json -- set open . 'external-ids:ovn-bridge-mappings=""' (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,247::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:08,249::netconfpersistence::231::root::(_clearDisk) Clearing netconf: /var/lib/vdsm/staging/netconf
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:08,253::netconfpersistence::181::root::(save) Saved new config RunningConfig({}, {}, {}) to [/var/lib/vdsm/staging/netconf/nets,/var/lib/vdsm/staging/netconf/bonds,/var/lib/vdsm/staging/netconf/devices]
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,256::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-95 /usr/libexec/vdsm/hooks/after_network_setup/30_ethtool_options (cwd None)
MainProcess|jsonrpc/2::DEBUG::2022-04-21 11:29:08,434::commands::219::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
MainProcess|jsonrpc/2::INFO::2022-04-21 11:29:08,434::hooks::114::root::(_runHooksDir) /usr/libexec/vdsm/hooks/after_network_setup/30_ethtool_options: rc=0 err=
MainProcess|jsonrpc/2::ERROR::2022-04-21 11:29:08,436::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in setupNetworks
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 228, in setupNetworks
_setup_networks(networks, bondings, options, net_info)
File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 249, in _setup_networks
networks, bondings, options, net_info, in_rollback)
File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 157, in _rollback
six.reraise(excType, value, tb)
File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 140, in _rollback
yield
File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 249, in _setup_networks
networks, bondings, options, net_info, in_rollback)
File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 139, in setup
_setup(networks, bondings, options, in_rollback, net_info)
File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 159, in _setup
_setup_ovs(ovs_nets, ovs_bonds, options, net_info, in_rollback)
File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 249, in _setup_ovs
connectivity.check(options)
File "/usr/lib/python2.7/site-packages/vdsm/network/netconfpersistence.py", line 295, in __exit__
raise ne.RollbackIncomplete(config_diff, ex_type, ex_value)
ConfigNetworkError: (10, 'connectivity check failed')
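The traceback above shows VDSM running network setup inside a rollback context manager: when the connectivity check fails, the last known-good configuration is restored and the original error is re-raised. A simplified sketch of that pattern (hypothetical names, not VDSM's actual code):

```python
from contextlib import contextmanager

class ConfigNetworkError(Exception):
    pass

@contextmanager
def rollback_on_failure(restore):
    # Hypothetical helper mirroring VDSM's _rollback context manager:
    # if the body raises, restore the last known-good config,
    # then re-raise the original error.
    try:
        yield
    except ConfigNetworkError:
        restore()
        raise

applied = []

def restore_last_good():
    applied.clear()

def setup_network(connectivity_ok):
    with rollback_on_failure(restore_last_good):
        applied.append("ovirtmgmt")
        if not connectivity_ok:
            raise ConfigNetworkError(10, "connectivity check failed")

try:
    setup_network(connectivity_ok=False)
except ConfigNetworkError:
    pass
print(applied)  # [] (the change was rolled back)
```

This is why the host ends up back on its previous network configuration even though the new ovirtmgmt setup was applied mid-transaction.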
MainProcess|jsonrpc/5::DEBUG::2022-04-21 12:01:20,062::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {}
MainProcess|jsonrpc/5::DEBUG::2022-04-21 12:01:20,062::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 1935 seconds (max pending: 3)
MainProcess|jsonrpc/5::DEBUG::2022-04-21 12:01:20,076::cmdutils::133::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
MainProcess|jsonrpc/5::DEBUG::2022-04-21 12:01:20,098::cmdutils::141::root::(exec_cmd) SUCCESS: <err> = ''; <rc> = 0
Regards,
Ravi
Hi,
After the new update, my deployment fails at the engine health check.
What can I do to debug?
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Check if Engine health page is up]
[ ERROR ] fatal: [localhost -> 192.168.222.12]: FAILED! => {"attempts": 30, "changed": false, "connection": "close", "content": "<html><head><title>Error</title></head><body>500 - Internal Server Error</body></html>", "content_encoding": "identity", "content_length": "86", "content_type": "text/html; charset=UTF-8", "date": "Fri, 22 Apr 2022 16:02:04 GMT", "elapsed": 0, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k mod_auth_gssapi/1.6.1 mod_wsgi/4.6.4 Python/3.6", "status": 500, "url": "http://localhost/ovirt-engine/services/health"}
[ INFO ] TASK [ovirt.ovirt.engine_setup : Clean temporary files]
[ INFO ] changed: [localhost -> 192.168.222.12]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost -> 192.168.222.12]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a failure deploying the engine on the local engine VM. The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
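To answer the debugging question: the 500 comes from the engine's health servlet (the URL is in the failure message above), so the next step is to query it directly from the engine VM and read the engine logs. A diagnostic sketch, assuming the standard oVirt log locations:

```shell
# Run on the engine VM (192.168.222.12 in this log).
# The '|| true' keeps the script going even if the engine is unreachable.
code=$(curl -sS -o /dev/null -w '%{http_code}' \
    http://localhost/ovirt-engine/services/health 2>/dev/null) || true
code=${code:-000}
echo "health endpoint returned HTTP $code"
# Standard oVirt log locations (assumption):
tail -n 50 /var/log/ovirt-engine/engine.log 2>/dev/null || true
tail -n 50 /var/log/ovirt-engine/server.log 2>/dev/null || true
```

A 500 from the health page usually means the engine service started but hit an internal failure, so engine.log on the engine VM is the place the actual exception will be recorded.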
[IMPORTANT] Upgrade to postgresql-jdbc-42.2.14-1 breaks oVirt Engine 4.4/4.5
by Martin Perina 13 May '22
Hi,
Unfortunately, we have just found that the latest release of
postgresql-jdbc (42.2.14-1) breaks existing oVirt Engine 4.4 and 4.5
installations running on CentOS Stream.
The workaround is to downgrade to the previous version; for example,
postgresql-jdbc-42.2.3-3 should work fine.
Here are detailed instructions:
1. If you have already upgraded to postgresql-jdbc-42.2.14-1, please
downgrade to the previous version:
$ dnf downgrade postgresql-jdbc
$ systemctl restart ovirt-engine
2. If you are going to upgrade your oVirt Engine machine, please exclude
postgresql-jdbc package from upgrades:
$ dnf update -x postgresql-jdbc
We have created https://bugzilla.redhat.com/2077794 to track this issue,
but unfortunately we don't have a fix yet.
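Note that the `-x` in step 2 only excludes the package for that single dnf invocation. To keep the exclusion in place until a fix is released, the package can be excluded in dnf's configuration (a sketch of one alternative; `dnf versionlock` from python3-dnf-plugin-versionlock would also work):

```ini
# /etc/dnf/dnf.conf
[main]
exclude=postgresql-jdbc
```

Remember to remove the exclude line once a fixed postgresql-jdbc build is available.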
Regards,
Martin
--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
5
8
After upgrading to 4.5, the host cannot be activated because it cannot connect to a data domain.
I have an NFS data domain (the master) and a GlusterFS one. It complains about the Gluster domain:
The error message for connection node1-teste.acloud.pt:/data1 returned by VDSM was: XML error
# rpm -qa|grep glusterfs*
glusterfs-10.1-1.el8s.x86_64
glusterfs-selinux-2.0.1-1.el8s.noarch
glusterfs-client-xlators-10.1-1.el8s.x86_64
glusterfs-events-10.1-1.el8s.x86_64
libglusterfs0-10.1-1.el8s.x86_64
glusterfs-fuse-10.1-1.el8s.x86_64
glusterfs-server-10.1-1.el8s.x86_64
glusterfs-cli-10.1-1.el8s.x86_64
glusterfs-geo-replication-10.1-1.el8s.x86_64
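The "XML error" above means VDSM could not parse the XML that the gluster CLI returned for the volume-info query. A minimal sketch of that parsing step (hypothetical code, not VDSM's actual parser), showing how malformed CLI output surfaces as an XML error:

```python
import xml.etree.ElementTree as ET

# Well-formed <cliOutput> as 'gluster volume info --xml' would emit it
# (trimmed to a few fields from the log below):
cli_output = """<cliOutput>
  <opRet>0</opRet>
  <volInfo><volumes><volume>
    <name>data1</name>
    <statusStr>Started</statusStr>
  </volume></volumes></volInfo>
</cliOutput>"""

def parse_volume_names(xml_text):
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as err:
        # VDSM wraps a failure like this as GlusterXmlErrorException,
        # which the engine then reports as "XML error".
        raise RuntimeError("XML error: %s" % err)
    return [vol.findtext("name") for vol in root.iter("volume")]

print(parse_volume_names(cli_output))  # ['data1']
```

If a new glusterfs release changes or corrupts the `--xml` output format, this parse step is where the failure shows up, even though the volume itself is healthy.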
engine log:
2022-04-27 13:35:16,118+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [ebe79c6] EVENT_ID: VDS_STORAGES_CONNECTION_FAILED(188), Failed to connect Host NODE1 to the Storage Domains DATA1.
2022-04-27 13:35:16,169+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [ebe79c6] EVENT_ID: STORAGE_DOMAIN_ERROR(996), The error message for connection node1-teste.acloud.pt:/data1 returned by VDSM was: XML error
2022-04-27 13:35:16,170+01 ERROR [org.ovirt.engine.core.bll.storage.connection.FileStorageHelper] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [ebe79c6] The connection with details 'node1-teste.acloud.pt:/data1' failed because of error code '4106' and error message is: xml error
vdsm log:
2022-04-27 13:40:07,125+0100 ERROR (jsonrpc/4) [storage.storageServer] Could not connect to storage server (storageServer:92)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 90, in connect_all
con.connect()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 233, in connect
self.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 365, in validate
if not self.volinfo:
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 352, in volinfo
self._volinfo = self._get_gluster_volinfo()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 405, in _get_gluster_volinfo
self._volfileserver)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeInfo
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=[b'<cliOutput>\n <opRet>0</opRet>\n <opErrno>0</opErrno>\n <opErrstr />\n <volInfo>\n <volumes>\
n <volume>\n <name>data1</name>\n <id>d7eb2c38-2707-4774-9873-a7303d024669</id>\n <status>1</status>\n <statusStr>Started</statusStr>\n <sn
apshotCount>0</snapshotCount>\n <brickCount>2</brickCount>\n <distCount>2</distCount>\n <replicaCount>1</replicaCount>\n <arbiterCount>0</arbiterCount>
\n <disperseCount>0</disperseCount>\n <redundancyCount>0</redundancyCount>\n <type>0</type>\n <typeStr>Distribute</typeStr>\n <transport>0</tran
sport>\n <bricks>\n <brick uuid="08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b">node1-teste.acloud.pt:/home/brick1<name>node1-teste.acloud.pt:/home/brick1</name><hostUuid>0
8c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b</hostUuid><isArbiter>0</isArbiter></brick>\n <brick uuid="08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b">node1-teste.acloud.pt:/brick2<name>nod
e1-teste.acloud.pt:/brick2</name><hostUuid>08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b</hostUuid><isArbiter>0</isArbiter></brick>\n </bricks>\n <optCount>23</optCount>\n
<options>\n <option>\n <name>nfs.disable</name>\n <value>on</value>\n </option>\n <option>\n <name>transport.addre
ss-family</name>\n <value>inet</value>\n </option>\n <option>\n <name>storage.fips-mode-rchecksum</name>\n <value>on</value>\n
</option>\n <option>\n <name>storage.owner-uid</name>\n <value>36</value>\n </option>\n <option>\n <name>storag
e.owner-gid</name>\n <value>36</value>\n </option>\n <option>\n <name>cluster.min-free-disk</name>\n <value>5%</value>\n
</option>\n <option>\n <name>performance.quick-read</name>\n <value>off</value>\n </option>\n <option>\n <name>perfor
mance.read-ahead</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.io-cache</name>\n <value>off</value>\n
</option>\n <option>\n <name>performance.low-prio-threads</name>\n <value>32</value>\n </option>\n <option>\n <
name>network.remote-dio</name>\n <value>enable</value>\n </option>\n <option>\n <name>cluster.eager-lock</name>\n <value>enable<
/value>\n </option>\n <option>\n <name>cluster.quorum-type</name>\n <value>auto</value>\n </option>\n <option>\n
<name>cluster.server-quorum-type</name>\n <value>server</value>\n </option>\n <option>\n <name>cluster.data-self-heal-algorithm</name>\n
<value>full</value>\n </option>\n <option>\n <name>cluster.locking-scheme</name>\n <value>granular</value>\n </option>
\n <option>\n <name>cluster.shd-wait-qlength</name>\n <value>10000</value>\n </option>\n <option>\n <name>features.shar
d</name>\n <value>off</value>\n </option>\n <option>\n <name>user.cifs</name>\n <value>off</value>\n </option>\n
<option>\n <name>cluster.choose-local</name>\n <value>off</value>\n </option>\n <option>\n <name>client.event-threads</name>\
n <value>4</value>\n </option>\n <option>\n <name>server.event-threads</name>\n <value>4</value>\n </option>\n
<option>\n <name>performance.client-io-threads</name>\n <value>on</value>\n </option>\n </options>\n </volume>\n <count>1</count>\
n </volumes>\n </volInfo>\n</cliOutput>']
2022-04-27 13:40:07,125+0100 INFO (jsonrpc/4) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-27 13:40:07,125+0100 INFO (jsonrpc/4) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': 'dede3145-651a-4b01-b8d2-82bff8670696', 'status': 4106}]} from=
::ffff:192.168.5.165,42132, flow_id=4c170005, task_id=cec6f36f-46a4-462c-9d0a-feb8d814b465 (api:54)
2022-04-27 13:40:07,410+0100 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:07,411+0100 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:192.168.5.1
65,42132 (api:54)
2022-04-27 13:40:07,785+0100 INFO (jsonrpc/7) [api.host] START getStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.5.165,42132, task_id=4fa4e8c4-7c65-499a-827e-8ae153aa875e (api:48)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] FINISH repoStats return={} from=::ffff:192.168.5.165,42132, task_id=4fa4e8c4-7c65-499a-827e-8ae153aa875e (api:54)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] START multipath_health() from=::ffff:192.168.5.165,42132, task_id=c6390f2a-845b-420b-a833-475605a24078 (api:48)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.5.165,42132, task_id=c6390f2a-845b-420b-a833-475605a24078 (api:54)
2022-04-27 13:40:07,802+0100 INFO (jsonrpc/7) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:192.168.5.165,42132 (
api:54)
2022-04-27 13:40:11,980+0100 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::1,37040 (api:48)
2022-04-27 13:40:11,980+0100 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::1,37040 (api:54)
2022-04-27 13:40:12,365+0100 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=f5084096-e5c5-4ca8-9c47-a92fa5790484 (api:48)
2022-04-27 13:40:12,365+0100 INFO (periodic/2) [vdsm.api] FINISH repoStats return={} from=internal, task_id=f5084096-e5c5-4ca8-9c47-a92fa5790484 (api:54)
2022-04-27 13:40:22,417+0100 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:22,417+0100 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:192.168.5.1
65,42132 (api:54)
2022-04-27 13:40:22,805+0100 INFO (jsonrpc/1) [api.host] START getStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.5.165,42132, task_id=a9fb939c-ea1a-4116-a22f-d14a99e6eada (api:48)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] FINISH repoStats return={} from=::ffff:192.168.5.165,42132, task_id=a9fb939c-ea1a-4116-a22f-d14a99e6eada (api:54)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] START multipath_health() from=::ffff:192.168.5.165,42132, task_id=5eee2f63-2631-446a-98dd-4947f9499f8f (api:48)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.5.165,42132, task_id=5eee2f63-2631-446a-98dd-4947f9499f8f (api:54)
2022-04-27 13:40:22,822+0100 INFO (jsonrpc/1) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:192.168.5.165,42132 (
api:54)
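Notably, the <cliOutput> payload embedded in the GlusterXmlErrorException above appears to be well-formed XML — a standard parser accepts it — which suggests vdsm's gluster XML handling is tripping on the shape of the output (for example the mixed-content <brick> elements, which carry both text and child elements) rather than on malformed XML. A minimal check, using a trimmed copy of the volume info from the log:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the <cliOutput> from the vdsm log above. Note the <brick>
# element mixes text content with child elements, which stricter consumers
# may not expect even though it is valid XML.
cli_output = """<cliOutput>
 <opRet>0</opRet>
 <opErrno>0</opErrno>
 <volInfo>
  <volumes>
   <volume>
    <name>data1</name>
    <status>1</status>
    <statusStr>Started</statusStr>
    <bricks>
     <brick uuid="08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b">node1-teste.acloud.pt:/home/brick1<name>node1-teste.acloud.pt:/home/brick1</name></brick>
    </bricks>
   </volume>
   <count>1</count>
  </volumes>
 </volInfo>
</cliOutput>"""

root = ET.fromstring(cli_output)           # parses without raising
volume = root.find("./volInfo/volumes/volume")
print(volume.findtext("name"))             # data1
print(volume.findtext("statusStr"))        # Started
```

If the same payload parses cleanly on the host, the mismatch is between the glusterfs 10 CLI output format and what this vdsm version expects.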
--
Jose Ferradeira
http://www.logicworks.pt

HELP ME! Failed to validate the SSL certificate for localhost:443
by natchawi28@gmail.com 06 May '22
Hi,
I'm having an issue with Failed to validate the SSL certificate for localhost:443 during Deploy "hosted-engine."
Checking the log files, I found this error:
"
[TASK] [ovirt.engine-setup: Check if Engine health page is up]
[ERROR] fatal:[localhost -> my_fqdn.domain]:FAILED!=>{"attempts":12, "changed":false,"msg":"Failed to validate the SSL certificate for localhost:443.
Make sure your managed systems have a valid CA certificate installed.
You can use validate_certs=False if you do not need to confirm the server's identity but this is unsafe and not recommended.
Paths checked for this platform: /etc/ssl/certs, /etc/pki/ca-trust/extracted/pem, /etc/pki/tls/certs, /usr/share/ca-certificates/cacert.org, /etc/ansible.
The exception msg was: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:618).", "status": -1, "url": "http://localhost/ovirt-engine/services/health"}
"
NOTE: ovirt engine VERSION "ovirt-engine-appliance-4.3-20200603.1.0.2.el7.x86_64"
We tried googling for a solution, but unfortunately without success.
Can someone help us to solve our critical issue?
Best regards,
Bunnatee
Hello,
I downloaded oVirt Node and installed it on my server, and configured DNS and the other settings. I'm getting an error at the hosted-engine installation step.
The error message I got is as follows:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Run engine-config]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Restart engine after engine-config]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Check if Engine health page is up]
[ ERROR ] fatal: [localhost -> 192.168.2.94]: FAILED! => {"attempts": 30, "changed": false, "connection": "close", "content_encoding": "identity", "content_length": "86", "content_type": "text/html; charset=UTF-8", "date": "Fri, 29 Apr 2022 02:52:21 GMT", "elapsed": 0, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k mod_auth_gssapi/1.6.1", "status": 500, "url": "http://localhost/ovirt-engine/services/health"}
[ INFO ] TASK [ovirt.ovirt.engine_setup : Clean temporary files]
[ INFO ] changed: [localhost -> 192.168.2.94]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost -> 192.168.2.94]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Change ownership of copied engine logs]
[ INFO ] changed: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a failure deploying the engine on the local engine VM. The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
How can I fix this? Thanks.
Is it possible to setup a hosted engine using the OVS switch type instead of Legacy? If it's not possible to start out as OVS, instructions for switching from Legacy to OVS after the fact would be greatly appreciated.
Many thanks,
Devin
Recently our host and ovirt engine certificates expired and with some ideas from Strahil we were able to get 2 of the 3 ovirt hosts updated with usable certificates and move all of our VMs to those two nodes.
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/QCFPKQ3OKPOUV2…
Not having any luck with the last host, we figured we'd just try to remove it from the ovirt engine and re-add it. While `hosted-engine --vm-status` on one node no longer shows the removed host, the other good host and the web interface still show ovirt-1 in the mix. What is the best way to remove a NonResponsive host from oVirt and re-add it?
[root@ovirt-1 ~]# hosted-engine --vm-status
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
[root@ovirt-2 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host ovirt-3.xxxxx.com (id: 2) status ==--
Host ID : 2
Host timestamp : 12515451
Score : 3274
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : ovirt-3.xxxxx.com
Local maintenance : False
stopped : False
crc32 : 9cf92792
conf_on_shared_storage : True
local_conf_timestamp : 12515451
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12515451 (Mon Apr 25 14:08:51 2022)
host-id=2
score=3274
vm_conf_refresh_time=12515451 (Mon Apr 25 14:08:51 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host ovirt-2.xxxxx.com (id: 3) status ==--
Host ID : 3
Host timestamp : 12513269
Score : 3400
Engine status : {"vm": "up", "health": "good", "detail": "Up"}
Hostname : ovirt-2.xxxxx.com
Local maintenance : False
stopped : False
crc32 : 4a89d706
conf_on_shared_storage : True
local_conf_timestamp : 12513269
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12513269 (Mon Apr 25 14:09:00 2022)
host-id=3
score=3400
vm_conf_refresh_time=12513269 (Mon Apr 25 14:09:00 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@ovirt-3 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host ovirt-1.xxxxx.com (id: 1) status ==--
Host ID : 1
Host timestamp : 6750990
Score : 0
Engine status : unknown stale-data
Hostname : ovirt-1.xxxxx.com
Local maintenance : False
stopped : True
crc32 : 5290657b
conf_on_shared_storage : True
local_conf_timestamp : 6750950
Status up-to-date : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=6750990 (Thu Feb 17 22:17:53 2022)
host-id=1
score=0
vm_conf_refresh_time=6750950 (Thu Feb 17 22:17:12 2022)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True
--== Host ovirt-3.xxxxx.com (id: 2) status ==--
Host ID : 2
Host timestamp : 12515501
Score : 3279
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : ovirt-3.xxxxx.com
Local maintenance : False
stopped : False
crc32 : 0845cd93
conf_on_shared_storage : True
local_conf_timestamp : 12515501
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12515501 (Mon Apr 25 14:09:42 2022)
host-id=2
score=3279
vm_conf_refresh_time=12515501 (Mon Apr 25 14:09:42 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host ovirt-2.xxxxx.com (id: 3) status ==--
Host ID : 3
Host timestamp : 12513309
Score : 3400
Engine status : {"vm": "up", "health": "good", "detail": "Up"}
Hostname : ovirt-2.xxxxx.com
Local maintenance : False
stopped : False
crc32 : 237726e0
conf_on_shared_storage : True
local_conf_timestamp : 12513309
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12513309 (Mon Apr 25 14:09:39 2022)
host-id=3
score=3400
vm_conf_refresh_time=12513309 (Mon Apr 25 14:09:39 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
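The ovirt-1 entry is recognisable in the output above by "Status up-to-date : False" and "state=AgentStopped". A quick sketch (a hypothetical helper, not an oVirt tool) that picks out hosts with stale metadata from `hosted-engine --vm-status` output like the above:

```python
import re

# Excerpt of the --vm-status output shown above
status_output = """--== Host ovirt-1.xxxxx.com (id: 1) status ==--
Status up-to-date : False
--== Host ovirt-3.xxxxx.com (id: 2) status ==--
Status up-to-date : True
--== Host ovirt-2.xxxxx.com (id: 3) status ==--
Status up-to-date : True"""

def stale_hosts(text: str) -> list:
    """Return hostnames whose 'Status up-to-date' flag is False."""
    stale = []
    current = None
    for line in text.splitlines():
        m = re.match(r"--== Host (\S+) \(id: \d+\) status ==--", line)
        if m:
            current = m.group(1)
        elif "Status up-to-date" in line and line.split(":")[-1].strip() == "False":
            stale.append(current)
    return stale

print(stale_hosts(status_output))  # ['ovirt-1.xxxxx.com']
```

Once the host has been removed from the engine, the stale shared-storage entry can reportedly be cleared with `hosted-engine --clean-metadata --host-id=1` from a healthy node (check the hosted-engine documentation before running it).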
Hey All,
I am having an issue upgrading from 4.4 to 4.5.
My setup
3 Node Gluster (Cluster 1) + 3 Node Cluster (Cluster 2)
If I recall correctly, the process I followed last week was:
On all my Nodes:
dnf install -y centos-release-ovirt45 --enablerepo=extras
On Ovirt Engine:
dnf install -y centos-release-ovirt45
dnf update -y --nobest
engine-setup
Once the engine was upgraded successfully, I ran the upgrade from the GUI on the Cluster 2 nodes one by one, but when they came back they complained: "Host failed to attach one of the Storage Domains attached to it." — the domains being "hosted_storage" and "data" (gluster).
I thought maybe it's because 4.5 brings an update to the GlusterFS version, so I decided to upgrade Node 3 in my Gluster cluster, and it booted into emergency mode even though the install "succeeded".
I feel like I did something wrong, aside from my bravery in upgrading so much before realizing something wasn't right.
My VDSM logs from one of the nodes that fails to connect to storage (FYI, I have 2 networks, one for management and one for storage, both up):
[root@ovirt-4 ~]# tail -f /var/log/vdsm/vdsm.log
2022-04-25 22:41:31,584-0600 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, task_id=8370855e-dea6-4168-870a-d6235d9044e9 (api:54)
2022-04-25 22:41:31,584-0600 INFO (jsonrpc/3) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:48)
2022-04-25 22:41:31,584-0600 INFO (jsonrpc/3) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:54)
2022-04-25 22:41:31,602-0600 INFO (periodic/1) [vdsm.api] START repoStats(domains=()) from=internal, task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 (api:48)
2022-04-25 22:41:31,603-0600 INFO (periodic/1) [vdsm.api] FINISH repoStats return={} from=internal, task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 (api:54)
2022-04-25 22:41:31,606-0600 INFO (jsonrpc/3) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:35,393-0600 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:35,393-0600 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:39,366-0600 INFO (jsonrpc/2) [api.host] START getAllVmStats() from=::1,53634 (api:48)
2022-04-25 22:41:39,366-0600 INFO (jsonrpc/2) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::1,53634 (api:54)
2022-04-25 22:41:46,530-0600 INFO (jsonrpc/1) [api.host] START getStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:46,568-0600 INFO (jsonrpc/1) [vdsm.api] START repoStats(domains=()) from=::ffff:172.17.117.80,38712, task_id=30404767-9761-4f8c-884a-5561dd0d82fe (api:48)
2022-04-25 22:41:46,568-0600 INFO (jsonrpc/1) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, task_id=30404767-9761-4f8c-884a-5561dd0d82fe (api:54)
2022-04-25 22:41:46,569-0600 INFO (jsonrpc/1) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, task_id=8dbfa47f-e1b7-408c-a060-8d45012f0b90 (api:48)
2022-04-25 22:41:46,569-0600 INFO (jsonrpc/1) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, task_id=8dbfa47f-e1b7-408c-a060-8d45012f0b90 (api:54)
2022-04-25 22:41:46,574-0600 INFO (jsonrpc/1) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:46,651-0600 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=92c69020-d0b1-4813-8610-3f3e1892c20b (api:48)
2022-04-25 22:41:46,652-0600 INFO (periodic/0) [vdsm.api] FINISH repoStats return={} from=internal, task_id=92c69020-d0b1-4813-8610-3f3e1892c20b (api:54)
2022-04-25 22:41:50,397-0600 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:50,397-0600 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:52,533-0600 INFO (jsonrpc/4) [api.host] START getCapabilities() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:54,382-0600 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::1,53634 (api:48)
2022-04-25 22:41:54,382-0600 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::1,53634 (api:54)
2022-04-25 22:41:55,037-0600 INFO (jsonrpc/4) [root] /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=b'' (hooks:123)
2022-04-25 22:41:55,039-0600 INFO (jsonrpc/4) [api.host] FINISH getCapabilities return={'status': {'code': 0, 'message': 'Done'}, 'info': {'kvmEnabled': 'true', 'cpuCores': '6', 'cpuThreads': '12', 'cpuSockets': '1', 'onlineCpus': '0,1,2,3,4,5,6,7,8,9,10,11', 'cpuTopology': [{'cpu_id': 0, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 0}, {'cpu_id': 1, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 1}, {'cpu_id': 2, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 2}, {'cpu_id': 3, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 3}, {'cpu_id': 4, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 4}, {'cpu_id': 5, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 5}, {'cpu_id': 6, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 0}, {'cpu_id': 7, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 1}, {'cpu_id': 8, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 2}, {'cpu_id': 9, 'numa_cell_id'
: 0, 'socket_id': 0, 'die_id': 0, 'core_id': 3}, {'cpu_id': 10, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 4}, {'cpu_id': 11, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 5}], 'cpuSpeed': '2500.000', 'cpuModel': 'Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz', 'cpuFlags': 'pdcm,xsaveopt,dtes64,xtpr,clflush,de,ibpb,popcnt,cpuid,ida,monitor,amd-stibp,x2apic,lm,arat,pse36,tsc_deadline_timer,fxsr,ht,skip-l1dfl-vmentry,est,pcid,aperfmperf,nopl,apic,mce,xsave,ibrs,flush_l1d,dtherm,dts,flexpriority,pse,pdpe1gb,pni,sse2,pge,cx16,pschange-mc-no,bts,rdtscp,dca,avx,hypervisor,tsc,tsc_adjust,nx,mmx,pebs,ss,umip,xtopology,vnmi,arch-capabilities,pae,pclmulqdq,tm,aes,invtsc,md_clear,ssse3,amd-ssbd,ssbd,sse4_1,smx,rep_good,vmx,cx8,sse,arch_perfmon,msr,stibp,nonstop_tsc,pti,ds_cpl,mca,cmov,md-clear,fpu,lahf_lm,tm2,sep,tpr_shadow,constant_tsc,pbe,pat,syscall,sse4_2,pln,acpi,mtrr,pts,vme,ept,vpid,spec_ctrl,model_pentium,model_Nehalem,model_486,model_SandyBridge,model_pentium2,
model_Opteron_G1,model_Nehalem-IBRS,model_qemu32,model_kvm32,model_coreduo,model_Westmere,model_SandyBridge-IBRS,model_Westmere-IBRS,model_Penryn,model_pentium3,model_qemu64,model_Conroe,model_kvm64,model_core2duo,model_Opteron_G2', 'vdsmToCpusAffinity': [1], 'version_name': 'Snow Man', 'software_version': '4.50.0.13', 'software_revision': '1', 'supportedENGINEs': ['4.2', '4.3', '4.4', '4.5'], 'clusterLevels': ['4.2', '4.3', '4.4', '4.5', '4.6', '4.7'], 'networks': {'ovirtmgmt': {'ports': ['bond0'], 'stp': 'off', 'iface': 'ovirtmgmt', 'bridged': True, 'addr': '172.17.117.74', 'netmask': '255.255.255.0', 'ipv4addrs': ['172.17.117.74/24'], 'ipv6addrs': [], 'ipv6autoconf': True, 'gateway': '172.17.117.1', 'ipv6gateway': '::', 'ipv4defaultroute': True, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0', 'dhcpv4': False, 'dhcpv6': True}, 'LabNet-v106': {'ports': ['bond0.106'], 'stp': 'off', 'iface': 'LabNet-v106', 'bridged': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6ad
drs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.106', 'vlanid': 106, 'dhcpv4': False, 'dhcpv6': False}, 'PIP_V991': {'ports': ['bond0.991'], 'stp': 'off', 'iface': 'PIP_V991', 'bridged': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.991', 'vlanid': 991, 'dhcpv4': False, 'dhcpv6': False}, 'NetEng-V3101': {'ports': ['bond0.3101'], 'stp': 'off', 'iface': 'NetEng-V3101', 'bridged': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.3101', 'vlanid': 3101, 'dhcpv4': False, 'dhcpv6': False}, 'OVIRT-VMs': {'ports': ['bond0.177'], 'stp': 'off', 'iface': 'OVIRT-VMs', 'bridged
': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.177', 'vlanid': 177, 'dhcpv4': False, 'dhcpv6': False}, 'Gluster_Net': {'ports': ['bond1'], 'stp': 'off', 'iface': 'Gluster_Net', 'bridged': True, 'addr': '172.17.181.13', 'netmask': '255.255.255.0', 'ipv4addrs': ['172.17.181.13/24'], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '172.17.181.1', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond1', 'dhcpv4': False, 'dhcpv6': False}}, 'bondings': {'bond0': {'hwaddr': 'c8:1f:66:f6:e5:48', 'slaves': ['eno1', 'eno2'], 'active_slave': '', 'opts': {'mode': '4', 'xmit_hash_policy': '2'}, 'ad_aggregator_id': '1', 'ad_partner_mac': '4c:4e:35:26:2c:00', 'switch': 'legacy', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'm
tu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'speed': 2000}, 'bond1': {'hwaddr': 'c8:1f:66:f6:e5:4a', 'slaves': ['eno4', 'eno3'], 'active_slave': '', 'opts': {'mode': '4', 'xmit_hash_policy': '2'}, 'ad_aggregator_id': '1', 'ad_partner_mac': '4c:4e:35:26:2c:00', 'switch': 'legacy', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'speed': 2000}}, 'bridges': {'Gluster_Net': {'ports': ['bond1'], 'stp': 'off', 'addr': '172.17.181.13', 'ipv4addrs': ['172.17.181.13/24'], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '172.17.181.1', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '148', 'hello_time': '200', 'multicast_router': '1', 'nf_call_ip
tables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e54a', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e54a', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_
stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'ovirtmgmt': {'ports': ['bond0'], 'stp': 'off', 'addr': '172.17.117.74', 'ipv4addrs': ['172.17.117.74/24'], 'ipv6addrs': [], 'ipv6autoconf': True, 'gateway': '172.17.117.1', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4defaultroute': True, 'dhcpv4': False, 'dhcpv6': True, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '148', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_r
esponse_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'LabNet-v106': {'ports': ['bond0.106'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '1
8247', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_i
nterval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'NetEng-V3101': {'ports': ['bond0.3101'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '6208', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0'
, 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'PIP_V991': {'ports': ['bond0.991'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0
', 'gc_timer': '15686', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multi
cast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'OVIRT-VMs': {'ports': ['bond0.177'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '20761', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats
_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}}, 'nics': {'eno3': {'hwaddr': 'c8:1f:66:f6:e5:4a', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:4a', 'dhcpv4': False, 'dhcpv6':
False, 'speed': 1000}, 'eno4': {'hwaddr': 'c8:1f:66:f6:e5:4a', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:4b', 'dhcpv4': False, 'dhcpv6': False, 'speed': 1000}, 'eno1': {'hwaddr': 'c8:1f:66:f6:e5:48', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:48', 'dhcpv4': False, 'dhcpv6': False, 'speed': 1000}, 'eno2': {'hwaddr': 'c8:1f:66:f6:e5:48', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:49', 'dhcpv4': False, 'dhcpv6': False, 'speed': 1000}}, 'vlans': {'bond0.106': {'iface': 'bo
nd0', 'vlanid': 106, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}, 'bond0.991': {'iface': 'bond0', 'vlanid': 991, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}, 'bond0.177': {'iface': 'bond0', 'vlanid': 177, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}, 'bond0.3101': {'iface': 'bond0', 'vlanid': 3101, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}}, 'nameservers': ['8.8.8.8'], 'supportsIPv6':
True, 'ovnConfigured': False, 'hooks': {'before_vm_start': {'50_hostedengine': {'checksum': 'e5f5262cf22e06cd34e227afb27647e479351266876019a64210dbcbd2a43830'}}, 'after_get_caps': {'ovirt_provider_ovn_hook': {'checksum': 'a2bdefca38b96c8ddab39822cc8282bf3f67d875c4879003ffc9661826c92421'}}, 'before_device_create': {'10_ovirt_provider_ovn_hook': {'checksum': 'b9d67afb41bd86a521ada2121e3505297b3b6dfd478275ce0bd9501fcda6dafc'}}, 'before_nic_hotplug': {'10_ovirt_provider_ovn_hook': {'checksum': 'b9d67afb41bd86a521ada2121e3505297b3b6dfd478275ce0bd9501fcda6dafc'}}}, 'operatingSystem': {'release': '1.el8', 'version': '8.6.2203.0', 'name': 'RHEL', 'pretty_name': 'oVirt Node 4.5.0'}, 'uuid': '4c4c4544-0053-5810-8052-c8c04f563132', 'packages2': {'kernel': {'version': '4.18.0', 'release': '373.el8.x86_64'}, 'glusterfs-cli': {'version': '10.1', 'release': '1.el8s'}, 'librbd1': {'version': '16.2.7', 'release': '1.el8s'}, 'libvirt': {'version': '8.0.0', 'release': '2.module_el8.6.0+1087+b42c8331'}
, 'mom': {'version': '0.6.2', 'release': '1.el8'}, 'ovirt-hosted-engine-ha': {'version': '2.5.0', 'release': '1.el8'}, 'openvswitch': {'version': '2.15', 'release': '3.el8'}, 'nmstate': {'version': '1.2.1', 'release': '1.el8'}, 'qemu-img': {'version': '6.2.0', 'release': '5.module_el8.6.0+1087+b42c8331'}, 'qemu-kvm': {'version': '6.2.0', 'release': '5.module_el8.6.0+1087+b42c8331'}, 'spice-server': {'version': '0.14.3', 'release': '4.el8'}, 'vdsm': {'version': '4.50.0.13', 'release': '1.el8'}, 'glusterfs': {'version': '10.1', 'release': '1.el8s'}, 'glusterfs-fuse': {'version': '10.1', 'release': '1.el8s'}, 'glusterfs-geo-replication': {'version': '10.1', 'release': '1.el8s'}, 'glusterfs-server': {'version': '10.1', 'release': '1.el8s'}}, 'realtimeKernel': False, 'kernelArgs': 'BOOT_IMAGE=(hd0,msdos1)//ovirt-node-ng-4.5.0-0.20220420.0+1/vmlinuz-4.18.0-373.el8.x86_64 crashkernel=auto resume=/dev/mapper/onn-swap rd.lvm.lv=onn/ovirt-node-ng-4.5.0-0.20220420.0+1 rd.lvm.lv=onn/swap rhgb q
uiet kvm-intel.nested=1 root=/dev/onn/ovirt-node-ng-4.5.0-0.20220420.0+1 boot=UUID=adb2035d-5047-471d-8b51-206e0afb39f4 rootflags=discard img.bootid=ovirt-node-ng-4.5.0-0.20220420.0+1', 'nestedVirtualization': True, 'emulatedMachines': ['pc-q35-rhel8.6.0', 'pc-i440fx-rhel7.1.0', 'pc-q35-rhel8.2.0', 'pc-q35-rhel7.6.0', 'pc-i440fx-rhel7.3.0', 'pc-i440fx-rhel7.6.0', 'pc-q35-rhel8.5.0', 'pc-q35-rhel8.0.0', 'pc-i440fx-rhel7.2.0', 'pc', 'pc-q35-rhel7.3.0', 'pc-i440fx-rhel7.4.0', 'q35', 'pc-i440fx-2.11', 'pc-q35-rhel7.4.0', 'pc-i440fx-rhel7.5.0', 'pc-i440fx-rhel7.0.0', 'pc-q35-rhel7.5.0', 'pc-i440fx-4.2', 'pc-q35-rhel8.3.0', 'pc-q35-rhel8.1.0', 'pc-q35-rhel8.4.0'], 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:ea2a5da196cb', 'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:ea2a5da196cb'}], 'FC': []}, 'vmTypes': ['kvm'], 'memSize': '43996', 'reservedMem': '321', 'guestOverhead': '65', 'rngSources': ['random', 'hwrng'], 'numaNodes': {'0': {'totalMemory': '43996', 'hugepag
es': {'4': {'totalPages': '11263209'}, '2048': {'totalPages': '0'}, '1048576': {'totalPages': '0'}}, 'cpus': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]}}, 'numaNodeDistance': {'0': [10]}, 'autoNumaBalancing': 2, 'selinux': {'mode': '1'}, 'liveSnapshot': 'true', 'liveMerge': 'true', 'kdumpStatus': 0, 'deferred_preallocation': True, 'hostdevPassthrough': 'false', 'additionalFeatures': ['libgfapi_supported', 'GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'hostedEngineDeployed': False, 'hugepages': [2048, 1048576], 'kernelFeatures': {'SPECTRE_V2': '(Mitigation: Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling)', 'ITLB_MULTIHIT': '(KVM: Mitigation: VMX disabled)', 'MDS': '(Mitigation: Clear CPU buffers; SMT vulnerable)', 'L1TF': '(Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable)', 'SPEC_STORE_BYPASS': '(Mitigation: Speculative Store Bypass disabled via prctl and seccomp)', 'TSX_ASYNC_ABORT': '(Not affec
ted)', 'SPECTRE_V1': '(Mitigation: usercopy/swapgs barriers and __user pointer sanitization)', 'SRBDS': '(Not affected)', 'MELTDOWN': '(Mitigation: PTI)'}, 'vncEncrypted': True, 'backupEnabled': True, 'coldBackupEnabled': True, 'clearBitmapsEnabled': True, 'fipsEnabled': False, 'boot_uuid': 'adb2035d-5047-471d-8b51-206e0afb39f4', 'tscFrequency': '1999999000', 'tscScaling': False, 'connector_info': {'platform': 'x86_64', 'os_type': 'linux', 'ip': None, 'host': 'ovirt-4.[removed].com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ea2a5da196cb', 'do_local_attach': False, 'uuid': '215601b1-e536-4258-ad35-d1f869afa0f8', 'system uuid': '4c4c4544-0053-5810-8052-c8c04f563132', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000', 'found_dsc': ''}, 'domain_versions': [0, 2, 3, 4, 5], 'supported_block_size': {'FCP': [512], 'GLUSTERFS': [0, 512, 4096], 'ISCSI': [512], 'LOCALFS': [0, 512, 4096], 'NFS': [512], 'POSIXFS': [512]}, 'cd_change_pdiv': True, 'refres
h_disk_supported': True, 'replicate_extend': True, 'measure_subchain': True, 'measure_active': True, 'mailbox_events': True, 'netConfigDirty': 'False', 'openstack_binding_host_ids': {'OVIRT_PROVIDER_OVN': 'eaa82268-bd08-453f-9953-b4aad4c4c307'}, 'lastClientIface': 'ovirtmgmt'}} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:55,046-0600 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities took more than 1.00 seconds to succeed: 2.51 (__init__:316)
2022-04-25 22:41:55,058-0600 INFO (jsonrpc/7) [api.host] START getHardwareInfo() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:55,058-0600 INFO (jsonrpc/7) [api.host] FINISH getHardwareInfo return={'status': {'code': 0, 'message': 'Done'}, 'info': {'systemManufacturer': ' ', 'systemProductName': ' ', 'systemVersion': '', 'systemSerialNumber': 'HSXRV12', 'systemUUID': '4C4C4544-0053-5810-8052-C8C04F563132', 'systemFamily': ''}} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:55,121-0600 INFO (jsonrpc/3) [api.host] START getStats() from=::ffff:172.17.117.80,38712, flow_id=610b2f2d (api:48)
2022-04-25 22:41:55,160-0600 INFO (jsonrpc/3) [vdsm.api] START repoStats(domains=()) from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=de30e54a-21c6-46ff-a669-230145f93ae2 (api:48)
2022-04-25 22:41:55,160-0600 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=de30e54a-21c6-46ff-a669-230145f93ae2 (api:54)
2022-04-25 22:41:55,161-0600 INFO (jsonrpc/3) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=b751b6d0-bc99-457b-9620-4ffaaaa37c91 (api:48)
2022-04-25 22:41:55,161-0600 INFO (jsonrpc/3) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=b751b6d0-bc99-457b-9620-4ffaaaa37c91 (api:54)
2022-04-25 22:41:55,166-0600 INFO (jsonrpc/3) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712, flow_id=610b2f2d (api:54)
2022-04-25 22:41:55,346-0600 INFO (jsonrpc/5) [vdsm.api] START connectStorageServer(domType=7, spUUID='3115b136-9981-11ec-90ae-00163e6f31f1', conList=[{'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 'backup-volfile-servers=gluster-2.[removed].com:gluster-3.[removed].com', 'iqn': '', 'connection': 'gluster-1.[removed].com:/engine', 'ipv6_enabled': 'false', 'id': '98ea63c0-e8c4-4857-8ef7-3cb256d45997', 'user': '', 'tpgt': '1'}]) from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=6a175be8-fdd0-4d2a-8b79-1866aff21159 (api:48)
2022-04-25 22:41:55,478-0600 ERROR (jsonrpc/5) [storage.storageServer] Could not connect to storage server (storageServer:92)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 90, in connect_all
con.connect()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 233, in connect
self.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 365, in validate
if not self.volinfo:
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 352, in volinfo
self._volinfo = self._get_gluster_volinfo()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 405, in _get_gluster_volinfo
self._volfileserver)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeInfo
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=[b'<cliOutput>\n <opRet>0</opRet>\n <opErrno>0</opErrno>\n <opErrstr />\n <volInfo>\n <volumes>\n <volume>\n <name>engine</name>\n <id>51bb4ddb-dfbc-4376-85cd-d7070e287946</id>\n <status>1</status>\n <statusStr>Started</statusStr>\n <snapshotCount>0</snapshotCount>\n <brickCount>3</brickCount>\n <distCount>1</distCount>\n <replicaCount>3</replicaCount>\n <arbiterCount>0</arbiterCount>\n <disperseCount>0</disperseCount>\n <redundancyCount>0</redundancyCount>\n <type>2</type>\n <typeStr>Replicate</typeStr>\n <transport>0</transport>\n <bricks>\n <brick uuid="1679e76e-938c-4da4-b7f7-5161c5badcd3">gluster-1.[removed].com:/gluster_bricks/engine/engine<name>gluster-1.[removed].com:/gluster_bricks/engine/engine</name><hostUuid>1679e76e-938c-4da4-b7f7-5161c5badcd3</hostUuid><isArbiter>0</isArb
iter></brick>\n <brick uuid="530faa98-b564-45d5-8495-8e4006a7628c">gluster-2.[removed].com:/gluster_bricks/engine/engine<name>gluster-2.[removed].com:/gluster_bricks/engine/engine</name><hostUuid>530faa98-b564-45d5-8495-8e4006a7628c</hostUuid><isArbiter>0</isArbiter></brick>\n <brick uuid="06c45945-8a79-476b-9a02-483113191e69">gluster-3.[removed].com:/gluster_bricks/engine/engine<name>gluster-3.[removed].com:/gluster_bricks/engine/engine</name><hostUuid>06c45945-8a79-476b-9a02-483113191e69</hostUuid><isArbiter>0</isArbiter></brick>\n </bricks>\n <optCount>31</optCount>\n <options>\n <option>\n <name>cluster.granular-entry-heal</name>\n <value>enable</value>\n </option>\n <option>\n <name>storage.owner-gid</name>\n <value>36</value>\n </option>\n <option>\n <name>storage.owner-uid</name>\n <value>36</value>\n </option>\n
<option>\n <name>cluster.lookup-optimize</name>\n <value>off</value>\n </option>\n <option>\n <name>server.keepalive-count</name>\n <value>5</value>\n </option>\n <option>\n <name>server.keepalive-interval</name>\n <value>2</value>\n </option>\n <option>\n <name>server.keepalive-time</name>\n <value>10</value>\n </option>\n <option>\n <name>server.tcp-user-timeout</name>\n <value>20</value>\n </option>\n <option>\n <name>network.ping-timeout</name>\n <value>30</value>\n </option>\n <option>\n <name>server.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>client.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>clu
ster.choose-local</name>\n <value>off</value>\n </option>\n <option>\n <name>user.cifs</name>\n <value>off</value>\n </option>\n <option>\n <name>features.shard</name>\n <value>on</value>\n </option>\n <option>\n <name>cluster.shd-wait-qlength</name>\n <value>10000</value>\n </option>\n <option>\n <name>cluster.shd-max-threads</name>\n <value>8</value>\n </option>\n <option>\n <name>cluster.locking-scheme</name>\n <value>granular</value>\n </option>\n <option>\n <name>cluster.data-self-heal-algorithm</name>\n <value>full</value>\n </option>\n <option>\n <name>cluster.server-quorum-type</name>\n <value>server</value>\n </option>\n <option>\n <name>cluster.quorum-type</n
ame>\n <value>auto</value>\n </option>\n <option>\n <name>cluster.eager-lock</name>\n <value>enable</value>\n </option>\n <option>\n <name>performance.strict-o-direct</name>\n <value>on</value>\n </option>\n <option>\n <name>network.remote-dio</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.low-prio-threads</name>\n <value>32</value>\n </option>\n <option>\n <name>performance.io-cache</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.read-ahead</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.quick-read</name>\n <value>off</value>\n </option>\n <option>\n <name>storage.fips-mode-rchecksum</name>\n
<value>on</value>\n </option>\n <option>\n <name>transport.address-family</name>\n <value>inet</value>\n </option>\n <option>\n <name>nfs.disable</name>\n <value>on</value>\n </option>\n <option>\n <name>performance.client-io-threads</name>\n <value>on</value>\n </option>\n </options>\n </volume>\n <count>1</count>\n </volumes>\n </volInfo>\n</cliOutput>']
2022-04-25 22:41:55,478-0600 INFO (jsonrpc/5) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-25 22:41:55,478-0600 INFO (jsonrpc/5) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '98ea63c0-e8c4-4857-8ef7-3cb256d45997', 'status': 4106}]} from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=6a175be8-fdd0-4d2a-8b79-1866aff21159 (api:54)
2022-04-25 22:41:55,515-0600 INFO (jsonrpc/2) [vdsm.api] START connectStorageServer(domType=7, spUUID='3115b136-9981-11ec-90ae-00163e6f31f1', conList=[{'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 'backup-volfile-servers=gluster-2.[removed].com:gluster-3.[removed].com', 'iqn': '', 'connection': 'gluster-1.[removed].com:/data', 'ipv6_enabled': 'false', 'id': '170e1dda-af02-4ff3-808b-16dc814e750a', 'user': '', 'tpgt': '1'}]) from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=9d0ded6d-e19e-41f8-85b4-52b18bd3380d (api:48)
2022-04-25 22:41:55,647-0600 ERROR (jsonrpc/2) [storage.storageServer] Could not connect to storage server (storageServer:92)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 90, in connect_all
con.connect()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 233, in connect
self.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 365, in validate
if not self.volinfo:
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 352, in volinfo
self._volinfo = self._get_gluster_volinfo()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 405, in _get_gluster_volinfo
self._volfileserver)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeInfo
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=[b'<cliOutput>\n <opRet>0</opRet>\n <opErrno>0</opErrno>\n <opErrstr />\n <volInfo>\n <volumes>\n <volume>\n <name>data</name>\n <id>06ce0d34-b4b4-472c-9cec-24ffe934ed05</id>\n <status>1</status>\n <statusStr>Started</statusStr>\n <snapshotCount>0</snapshotCount>\n <brickCount>3</brickCount>\n <distCount>1</distCount>\n <replicaCount>3</replicaCount>\n <arbiterCount>0</arbiterCount>\n <disperseCount>0</disperseCount>\n <redundancyCount>0</redundancyCount>\n <type>2</type>\n <typeStr>Replicate</typeStr>\n <transport>0</transport>\n <bricks>\n <brick uuid="1679e76e-938c-4da4-b7f7-5161c5badcd3">gluster-1.[removed].com:/gluster_bricks/data/data<name>gluster-1.[removed].com:/gluster_bricks/data/data</name><hostUuid>1679e76e-938c-4da4-b7f7-5161c5badcd3</hostUuid><isArbiter>0</isArbiter></bri
ck>\n <brick uuid="530faa98-b564-45d5-8495-8e4006a7628c">gluster-2.[removed].com:/gluster_bricks/data/data<name>gluster-2.[removed].com:/gluster_bricks/data/data</name><hostUuid>530faa98-b564-45d5-8495-8e4006a7628c</hostUuid><isArbiter>0</isArbiter></brick>\n <brick uuid="06c45945-8a79-476b-9a02-483113191e69">gluster-3.[removed].com:/gluster_bricks/data/data<name>gluster-3.[removed].com:/gluster_bricks/data/data</name><hostUuid>06c45945-8a79-476b-9a02-483113191e69</hostUuid><isArbiter>0</isArbiter></brick>\n </bricks>\n <optCount>31</optCount>\n <options>\n <option>\n <name>cluster.granular-entry-heal</name>\n <value>enable</value>\n </option>\n <option>\n <name>storage.owner-gid</name>\n <value>36</value>\n </option>\n <option>\n <name>storage.owner-uid</name>\n <value>36</value>\n </option>\n <option>\n <
name>cluster.lookup-optimize</name>\n <value>off</value>\n </option>\n <option>\n <name>server.keepalive-count</name>\n <value>5</value>\n </option>\n <option>\n <name>server.keepalive-interval</name>\n <value>2</value>\n </option>\n <option>\n <name>server.keepalive-time</name>\n <value>10</value>\n </option>\n <option>\n <name>server.tcp-user-timeout</name>\n <value>20</value>\n </option>\n <option>\n <name>network.ping-timeout</name>\n <value>30</value>\n </option>\n <option>\n <name>server.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>client.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>cluster.choose-local</name>\n
<value>off</value>\n </option>\n <option>\n <name>user.cifs</name>\n <value>off</value>\n </option>\n <option>\n <name>features.shard</name>\n <value>on</value>\n </option>\n <option>\n <name>cluster.shd-wait-qlength</name>\n <value>10000</value>\n </option>\n <option>\n <name>cluster.shd-max-threads</name>\n <value>8</value>\n </option>\n <option>\n <name>cluster.locking-scheme</name>\n <value>granular</value>\n </option>\n <option>\n <name>cluster.data-self-heal-algorithm</name>\n <value>full</value>\n </option>\n <option>\n <name>cluster.server-quorum-type</name>\n <value>server</value>\n </option>\n <option>\n <name>cluster.quorum-type</name>\n <value>a
uto</value>\n </option>\n <option>\n <name>cluster.eager-lock</name>\n <value>enable</value>\n </option>\n <option>\n <name>performance.strict-o-direct</name>\n <value>on</value>\n </option>\n <option>\n <name>network.remote-dio</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.low-prio-threads</name>\n <value>32</value>\n </option>\n <option>\n <name>performance.io-cache</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.read-ahead</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.quick-read</name>\n <value>off</value>\n </option>\n <option>\n <name>storage.fips-mode-rchecksum</name>\n <value>on</value>\n
</option>\n <option>\n <name>transport.address-family</name>\n <value>inet</value>\n </option>\n <option>\n <name>nfs.disable</name>\n <value>on</value>\n </option>\n <option>\n <name>performance.client-io-threads</name>\n <value>on</value>\n </option>\n </options>\n </volume>\n <count>1</count>\n </volumes>\n </volInfo>\n</cliOutput>']
2022-04-25 22:41:55,647-0600 INFO (jsonrpc/2) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-25 22:41:55,647-0600 INFO (jsonrpc/2) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '170e1dda-af02-4ff3-808b-16dc814e750a', 'status': 4106}]} from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=9d0ded6d-e19e-41f8-85b4-52b18bd3380d (api:54)
2022-04-25 22:41:55,682-0600 INFO (jsonrpc/1) [vdsm.api] START connectStorageServer(domType=1, spUUID='3115b136-9981-11ec-90ae-00163e6f31f1', conList=[{'password': '********', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 'ovirt-2.[removed].com:/nfs2/data', 'ipv6_enabled': 'false', 'id': '311165ae-bfbf-4f51-994f-051aef56d94f', 'user': '', 'tpgt': '1'}, {'password': '********', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 'ovirt-1.[removed].com:/nfs1/data', 'ipv6_enabled': 'false', 'id': 'ec52f74c-a041-4e3a-9aae-5f1c6629d77f', 'user': '', 'tpgt': '1'}]) from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=bc135c71-2b43-4627-b3d3-2eb0a4d25227 (api:48)
2022-04-25 22:41:55,688-0600 INFO (jsonrpc/1) [storage.storagedomaincache] Removing domain ddeb67aa-9ec8-488b-9632-5cc19a244815 from storage domain cache (sdc:211)
2022-04-25 22:41:55,689-0600 INFO (jsonrpc/1) [storage.storagedomaincache] Removing domain e1ae9b1a-7aa4-4072-b92e-5e967f5a2ee7 from storage domain cache (sdc:211)
2022-04-25 22:41:55,689-0600 INFO (jsonrpc/1) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-25 22:41:55,689-0600 INFO (jsonrpc/1) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '311165ae-bfbf-4f51-994f-051aef56d94f', 'status': 0}, {'id': 'ec52f74c-a041-4e3a-9aae-5f1c6629d77f', 'status': 0}]} from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=bc135c71-2b43-4627-b3d3-2eb0a4d25227 (api:54)
2022-04-25 22:42:01,696-0600 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=9b42900c-cfd0-4e56-b6f8-07a340497993 (api:48)
2022-04-25 22:42:01,697-0600 INFO (periodic/0) [vdsm.api] FINISH repoStats return={} from=internal, task_id=9b42900c-cfd0-4e56-b6f8-07a340497993 (api:54)
2022-04-25 22:42:04,539-0600 INFO (jsonrpc/6) [api.host] START getStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:42:04,577-0600 INFO (jsonrpc/6) [vdsm.api] START repoStats(domains=()) from=::ffff:172.17.117.80,38712, task_id=a97abe37-ce3b-4745-8cfc-89d0d01bfc74 (api:48)
2022-04-25 22:42:04,577-0600 INFO (jsonrpc/6) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, task_id=a97abe37-ce3b-4745-8cfc-89d0d01bfc74 (api:54)
2022-04-25 22:42:04,578-0600 INFO (jsonrpc/6) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, task_id=b054a35e-e4ea-4402-963b-5b0416a30692 (api:48)
2022-04-25 22:42:04,578-0600 INFO (jsonrpc/6) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, task_id=b054a35e-e4ea-4402-963b-5b0416a30692 (api:54)
2022-04-25 22:42:04,583-0600 INFO (jsonrpc/6) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
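For reference, the GlusterXmlErrorException tracebacks above are raised even though the quoted `<cliOutput>` appears to be well-formed XML; a minimal stdlib parse of a trimmed sample (field names copied from the log; this is only a sanity check, not VDSM's actual code path) succeeds and shows the volume as Started:

```python
# Sanity check: parse a trimmed copy of the <cliOutput> from the log above
# with Python's stdlib XML parser. If this parses, the XML itself is fine,
# suggesting the "XML error" comes from how VDSM handles the CLI result.
import xml.etree.ElementTree as ET

sample = """<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr />
  <volInfo>
    <volumes>
      <volume>
        <name>engine</name>
        <status>1</status>
        <statusStr>Started</statusStr>
        <replicaCount>3</replicaCount>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>"""

volume = ET.fromstring(sample).find("./volInfo/volumes/volume")
print(volume.findtext("name"), volume.findtext("statusStr"))  # engine Started
```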
Obviously ens192 is not the good interface. Check where the IP is added and use that address.
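To find that address, a small helper like the following can map an IP to the interface that carries it (a sketch: the interface names and addresses in the sample are made up, and on a real host you would feed it the output of `ip -o -4 addr show`):

```python
import re

def iface_for_ip(ip_output, address):
    """Return the interface carrying `address`, given the text
    produced by `ip -o -4 addr show` (one interface per line)."""
    for line in ip_output.splitlines():
        # Lines look like: "3: ovirtmgmt    inet 172.17.117.80/24 ..."
        m = re.match(r"\d+:\s+(\S+)\s+inet\s+([\d.]+)/", line)
        if m and m.group(2) == address:
            return m.group(1)
    return None

# Fabricated sample; on the host itself use:
#   subprocess.check_output(["ip", "-o", "-4", "addr", "show"], text=True)
out = """1: lo    inet 127.0.0.1/8 scope host lo
2: ens192    inet 10.0.0.5/24 brd 10.0.0.255 scope global ens192
3: ovirtmgmt    inet 172.17.117.80/24 brd 172.17.117.255 scope global ovirtmgmt"""

print(iface_for_ip(out, "172.17.117.80"))  # ovirtmgmt
```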
Best Regards,
Strahil Nikolov
On Fri, Apr 29, 2022 at 16:48, Mohamed Roushdy<mohamedroushdy(a)peopleintouch.com> wrote:
Hello,
I’ve researched this problem a bit, but none of the proposed solutions fixed it. I’m trying to deploy oVirt 4.5.0.1 in my lab, and the installation fails with the following error:
I’ve even tried to delete the default network bridge (as suggested in some articles), but this didn’t help either. The node has 3 network interfaces, and the hosts file points only to the management interface.
Thank you,
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/N4TY77OAA7QMN…
01 May '22
Hello,
Today I updated from 4.4 to 4.5 and I am no longer able to access my oVirt cluster. Accessing the oVirt web interface fails with "500 - Internal Server Error". The API is also dead; my backup software and Foreman are no longer able to talk to oVirt.
I rebooted the host and ran engine-setup again; it completed without issues, but the engine is still dead. If I run it again, it now tells me that my cluster is not in global maintenance mode, but "hosted-engine --vm-status" tells me it is still in maintenance mode.
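When the setup tool and the HA agent disagree like this, one thing to check is the banner that `hosted-engine --vm-status` prints when the shared metadata has the global flag set; a tiny check over that text (the sample output below is fabricated for illustration) looks like:

```python
def in_global_maintenance(vm_status_text):
    """Heuristic: the ovirt-ha agent prints a
    '!! Cluster is in GLOBAL MAINTENANCE mode !!' banner in
    `hosted-engine --vm-status` output when the flag is set."""
    return "GLOBAL MAINTENANCE" in vm_status_text.upper()

# Fabricated sample of vm-status output:
sample = """!! Cluster is in GLOBAL MAINTENANCE mode !!

--== Host ovirt-1 (id: 1) status ==--

Engine status                      : {"vm": "down", "health": "bad"}
"""
print(in_global_maintenance(sample))  # True
```

If the flag really is still set, running `hosted-engine --set-maintenance --mode=none` on a host should clear it before engine-setup is attempted again.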
One suspicious thing I found in server.log is this:
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:324)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.createMetaDataProvider(CallMetaDataProviderFactory.java:70)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataContext.initializeMetaData(CallMetaDataContext.java:252)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compileInternal(AbstractJdbcCall.java:313)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.compileInternal(PostgresDbEngineDialect.java:106)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compile(AbstractJdbcCall.java:296)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.getCall(SimpleJdbcCallsHandler.java:157)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:134)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dao.TagDaoImpl.getAllForParent(TagDaoImpl.java:82)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.addChildren(TagsDirector.java:116)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.init(TagsDirector.java:75)
... 64 more
2022-04-26 12:32:32,129+02 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 60) WFLYUT0021: Registered web context: '/ovirt-engine/sso' for server 'default-server'
2022-04-26 12:32:32,137+02 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "engine.ear")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.Backend.START" => "java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: javax.ejb.EJBException: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@21c5d21d
Caused by: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@21c5d21d
Caused by: java.lang.reflect.InvocationTargetException
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unable to determine the correct call signature - no procedure/function/signature for 'gettagsbyparent_id'"}}
2022-04-26 12:32:32,158+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "ovirt-web-ui.war" (runtime-name : "ovirt-web-ui.war")
2022-04-26 12:32:32,159+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "apidoc.war" (runtime-name : "apidoc.war")
2022-04-26 12:32:32,159+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "restapi.war" (runtime-name : "restapi.war")
2022-04-26 12:32:32,159+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "engine.ear" (runtime-name : "engine.ear")
2022-04-26 12:32:32,167+02 INFO [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0183: Service status report
WFLYCTL0186: Services which failed to start: service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
WFLYCTL0448: 2 additional services are down due to their dependencies being missing or failed
2022-04-26 12:32:32,211+02 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
2022-04-26 12:32:32,226+02 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: WildFly Full 24.0.1.Final (WildFly Core 16.0.1.Final) started (with errors) in 18363ms - Started 1670 of 1890 services (6 services failed or missing dependencies, 393 services are lazy, passive or on-demand)
2022-04-26 12:32:32,230+02 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:8706/management
2022-04-26 12:32:32,231+02 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:8706
Anyone got an idea what may be the reason? I am a bit lost here.
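The "no procedure/function/signature for 'gettagsbyparent_id'" error at the bottom of the trace usually points at the engine database schema not matching the installed engine version (for example after an incomplete upgrade). A quick way to check, assuming the default database name "engine" and the "postgres" superuser (both are assumptions about your setup):

    # Check whether the stored procedure exists in the engine DB:
    sudo -u postgres psql engine -c "\df gettagsbyparent_id"

If it comes back empty, re-running engine-setup to refresh the schema is the usual first step.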
Hi Everyone,
At my company, we are trying to deploy the engine on 2 RHEL8 hosts we have already installed.
We don't have direct internet access, so the RHEL8 hosts have been set up using an internal EL repo (via Red Hat Satellite).
We have also duplicated the necessary oVirt repositories internally, so all oVirt packages can be installed.
Now the blocking part is the deployment of the engine. Is it really possible to deploy an engine without an internet connection?
We have tried several times but never succeeded.
I tried with the Ansible extra var "he_offline_deployment=true", naively thinking it would download the necessary packages for the engine through the repositories already configured on the physical hosts (as if the physical host acted as a proxy).
I also tried specifying the OVA file with he_appliance_ova=/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.4-20211020135049.1.el8.ova
Both options have also been tried together (--ansible-extra-vars=he_appliance_ova=/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.4-20211020135049.1.el8.ova --ansible-extra-vars=he_offline_deployment=true)
But in the end, it seems the engine deployment process requires the engine itself to reach the oVirt internet repositories, as it always fails with:
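For reference, spelled out as a single invocation, the combined attempt looks like this (the OVA path is the one shipped by the ovirt-engine-appliance RPM already installed on the host; adjust it to the version you have):

    hosted-engine --deploy \
        --ansible-extra-vars=he_appliance_ova=/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.4-20211020135049.1.el8.ova \
        --ansible-extra-vars=he_offline_deployment=true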
2022-04-24 17:39:53,268+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:110 fatal: [localhost -> 192.168.1.154]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'ovirt-4.4-centos-ceph-pacific': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "rc": 1, "results": []}
FYI, the pacific repo works fine when we download packages on the physical hosts.
Another thing to know is that before being able to use our internal repos on our Red Hat Satellite, a system needs to install the Satellite certificate and register to Satellite.
It would be great if we could achieve a fully offline engine deployment (meaning no internet access at all, including from the engine itself), but we are starting to run out of clues as to whether it's really possible.
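One idea we have not fully explored (an assumption on our side, not something the documentation confirms): mirror the repositories the appliance tries to reach, such as the ceph pacific repo from the error above, onto an internal web server, and point the engine VM at that mirror. A sketch with dnf reposync:

    # Mirror the repo the deployment fails on; the repo id comes from
    # the error message, the destination path is just an example:
    dnf reposync --repoid=ovirt-4.4-centos-ceph-pacific \
        --download-metadata -p /var/www/html/mirrors/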
Here are all the ovirt packages installed on the physical hosts:
$ rpm -qa | grep ovirt
ovirt-ansible-collection-1.6.5-1.el8.noarch
ovirt-imageio-daemon-2.3.0-1.el8.x86_64
ovirt-host-4.4.9-2.el8.x86_64
ovirt-engine-appliance-4.4-20211020135049.1.el8.x86_64
ovirt-imageio-common-2.3.0-1.el8.x86_64
python3-ovirt-engine-sdk4-4.4.15-1.el8.x86_64
ovirt-host-dependencies-4.4.9-2.el8.x86_64
ovirt-hosted-engine-setup-2.5.4-2.el8.noarch
ovirt-imageio-client-2.3.0-1.el8.x86_64
ovirt-vmconsole-host-1.0.9-1.el8.noarch
ovirt-provider-ovn-driver-1.2.34-1.el8.noarch
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
python3-ovirt-setup-lib-1.3.2-1.el8.noarch
ovirt-hosted-engine-ha-2.4.9-1.el8.noarch
ovirt-vmconsole-1.0.9-1.el8.noarch
Thanks a lot in advance.