
After doing the latest updates on one of my CentOS 8 Stream hosts, it will no longer allow VMs to be migrated to it. VMs can be started on the host, but then have no network access. The hosted engine can be started on the host, but again no network access. I have tried removing the host, rebuilding the OS from scratch, and re-adding it, with no change. The host shows up, and as far as I can tell you get no error message when a migration fails; it just shows that it failed.

The updated host has the following versions; only KVM and libvirt appear to be different. I tried downgrading KVM and libvirt, but the migration still fails. I have avoided trying to update the other hosts, since I cannot migrate any of my VMs to the one host that I already tried to update.

RHEL - 8.6 - 1.el8
OS Description: CentOS Stream 8
Kernel Version: 4.18.0 - 348.el8.x86_64
KVM Version: 6.1.0 - 4.module_el8.6.0+983+a7505f3f
LIBVIRT Version: libvirt-7.9.0-1.module_el8.6.0+983+a7505f3f
VDSM Version: vdsm-4.40.90.4-1.el8

The other two hosts, which have not been updated and still work normally, have:

RHEL - 8.6 - 1.el8
OS Description: CentOS Stream 8
Kernel Version: 4.18.0 - 348.el8.x86_64
KVM Version: 6.0.0 - 33.el8s
LIBVIRT Version: libvirt-7.6.0-4.el8s
VDSM Version: vdsm-4.40.90.4-1.el8

The engine log from an attempted migration is as follows.

2021-11-17 21:12:46,099-09 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: fa1a2d6b-99cb-42bd-a343-91f314d5f47b Type: VMAction group MIGRATE_VM with role type USER
2021-11-17 21:12:46,134-09 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='6ee16602-c686-471a-9f65-e5952b813672', vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b', srcHost='ravn-kvm-8.ravnalaska.net', dstVdsId='10491335-e2e1-49f2-96c3-79331535542b', dstHost='ravn-kvm-9.ravnalaska.net:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='1250', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.9.24.79'}), log id: 3522dfb9
2021-11-17 21:12:46,134-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] START, MigrateBrokerVDSCommand(HostName = ravn-kvm-8.ravnalaska.net, MigrateVDSCommandParameters:{hostId='6ee16602-c686-471a-9f65-e5952b813672', vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b', srcHost='ravn-kvm-8.ravnalaska.net', dstVdsId='10491335-e2e1-49f2-96c3-79331535542b', dstHost='ravn-kvm-9.ravnalaska.net:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', migrateEncrypted='false', consoleAddress='null', maxBandwidth='1250', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]', dstQemu='10.9.24.79'}), log id: 23f98865
2021-11-17 21:12:46,141-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] FINISH, MigrateBrokerVDSCommand, return: , log id: 23f98865
2021-11-17 21:12:46,143-09 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3522dfb9
2021-11-17 21:12:46,149-09 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-21) [c0c87da3-a8aa-4f8e-ab9c-1e2fc05babac] EVENT_ID: VM_MIGRATION_START(62), Migration started (VM: ns8, Source: ravn-kvm-8.ravnalaska.net, Destination: ravn-kvm-9.ravnalaska.net, User: admin@internal-authz).
2021-11-17 21:12:49,052-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [] Fetched 2 VMs from VDS '10491335-e2e1-49f2-96c3-79331535542b'
2021-11-17 21:12:49,053-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-36) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b' is migrating to VDS '10491335-e2e1-49f2-96c3-79331535542b'(ravn-kvm-9.ravnalaska.net) ignoring it in the refresh until migration is done
2021-11-17 21:12:54,314-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b' was reported as Down on VDS '10491335-e2e1-49f2-96c3-79331535542b'(ravn-kvm-9.ravnalaska.net)
2021-11-17 21:12:54,314-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'(ns8) was unexpectedly detected as 'Down' on VDS '10491335-e2e1-49f2-96c3-79331535542b'(ravn-kvm-9.ravnalaska.net) (expected on '6ee16602-c686-471a-9f65-e5952b813672')
2021-11-17 21:12:54,315-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-21) [] START, DestroyVDSCommand(HostName = ravn-kvm-9.ravnalaska.net, DestroyVmVDSCommandParameters:{hostId='10491335-e2e1-49f2-96c3-79331535542b', vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 389ad865
2021-11-17 21:12:54,596-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-21) [] FINISH, DestroyVDSCommand, return: , log id: 389ad865
2021-11-17 21:12:54,596-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'(ns8) was unexpectedly detected as 'Down' on VDS '10491335-e2e1-49f2-96c3-79331535542b'(ravn-kvm-9.ravnalaska.net) (expected on '6ee16602-c686-471a-9f65-e5952b813672')
2021-11-17 21:12:54,596-09 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-21) [] Migration of VM 'ns8' to host 'ravn-kvm-9.ravnalaska.net' failed: VM destroyed during the startup.
2021-11-17 21:12:54,642-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-5) [] VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'(ns8) moved from 'MigratingFrom' --> 'Up'
2021-11-17 21:12:54,642-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-5) [] Adding VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'(ns8) to re-run list
2021-11-17 21:12:54,644-09 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-5) [] Rerun VM 'fa1a2d6b-99cb-42bd-a343-91f314d5f47b'. Called from VDS 'ravn-kvm-8.ravnalaska.net'
2021-11-17 21:12:54,679-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-7232) [] START, MigrateStatusVDSCommand(HostName = ravn-kvm-8.ravnalaska.net, MigrateStatusVDSCommandParameters:{hostId='6ee16602-c686-471a-9f65-e5952b813672', vmId='fa1a2d6b-99cb-42bd-a343-91f314d5f47b'}), log id: 7ce5593e
2021-11-17 21:12:54,681-09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-7232) [] FINISH, MigrateStatusVDSCommand, return: , log id: 7ce5593e
2021-11-17 21:12:54,695-09 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-7232) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed (VM: ns8, Source: ravn-kvm-8.ravnalaska.net, Destination: ravn-kvm-9.ravnalaska.net).
2021-11-17 21:12:54,697-09 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (EE-ManagedThreadFactory-engine-Thread-7232) [] Lock freed to object 'EngineLock:{exclusiveLocks='[fa1a2d6b-99cb-42bd-a343-91f314d5f47b=VM]', sharedLocks=''}'
2021-11-17 21:13:04,061-09 INFO [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-94) [] Fetched 1 VMs from VDS '10491335-e2e1-49f2-96c3-79331535542b'

_______________________________
Gary Pedretty
Director of IT
Ravn Alaska
Office: 907-266-8451
Mobile: 907-388-2247
Email: gary.pedretty@ravnalaska.com

Update: downgrading libvirt and KVM seems to have resolved the issue. Is this a known bug with those updates?

Gary
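For anyone else hitting this, a rough sketch of what the downgrade looks like on an affected host. The package names are the stock CentOS Stream 8 ones, but since these are modular builds, dnf may resolve the downgrade differently on your system; treat this as a sketch, not a recipe.

    # See which builds are currently installed
    rpm -q qemu-kvm-core libvirt-daemon vdsm

    # Downgrade KVM and libvirt to the previous builds
    dnf downgrade 'qemu-kvm*' 'libvirt*'

    # Optionally pin them so the next 'dnf update' does not pull the
    # broken builds back in (requires the versionlock plugin)
    dnf install python3-dnf-plugin-versionlock
    dnf versionlock add 'qemu-kvm*' 'libvirt*'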

We have exactly the same issue. Qemu 6.1 is totally broken or incompatible with libvirt / vdsm. The qemu process is launched properly (no libvirt / vdsm error, and no error in the qemu log file), but the guest doesn't boot at all: no SPICE, no network, nothing. Downgrading from 6.1.0 to 5.2.0 has fixed the problem for us.
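For anyone triaging the same symptom: since neither libvirt nor vdsm reports an error, the only visible sign on the host is a "running" domain that never produces any guest output. A quick check along these lines should show it (<vm-name> is a placeholder; on an oVirt host, virsh may prompt for libvirt credentials for non-read-only commands):

    # The domain is reported as running...
    virsh -r list

    # ...but the qemu log for the guest shows no error
    tail -n 50 /var/log/libvirt/qemu/<vm-name>.log

    # ...and the guest console stays silent (never reaches the bootloader)
    virsh console <vm-name>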

It's also working with 6.0.0, so the problem definitely appears in 6.1.0. The latest build (https://cbs.centos.org/koji/taskinfo?taskID=2604860) is also impacted.
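To check which build a given host is actually running (stock EL8 package and binary names assumed), something like:

    # Installed package builds
    rpm -q qemu-kvm-core libvirt-daemon

    # Version reported by the emulator binary itself
    /usr/libexec/qemu-kvm --version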

Looks like they are already aware of it: https://lists.ovirt.org/archives/list/devel@ovirt.org/thread/BDYP62MAJL2QVQZ...

On Thu, Nov 18, 2021 at 2:14 PM Christoph Timm <ovirt@timmi.org> wrote:
> Looks like they are already aware of it:
Indeed.
> https://lists.ovirt.org/archives/list/devel@ovirt.org/thread/BDYP62MAJL2QVQZ...
Now replied there, and created this bug: https://bugzilla.redhat.com/show_bug.cgi?id=2024605

Also posted now to centos-devel, "qemu-kvm 6.1.0 with 16 PCIE root ports is broken".

For the time being, we know of two workarounds:

1. Use qemu-kvm 6.0.0, available from the Advanced Virtualization SIG repo, which should be enabled automatically by the ovirt-release package. So, per host:
   - Move to Maintenance
   - dnf downgrade qemu-kvm-core-6.0.0
   - Activate

2. Configure your engine to use fewer than 16 PCIe root ports, e.g. 12, as done here: https://gerrit.ovirt.org/c/ovirt-system-tests/+/117689
   This might be problematic, though, if you need to add many devices to your VMs.

Best regards,
-- Didi
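To make the two workarounds concrete, a minimal sketch follows. Note one assumption: the engine-config key name below is from memory (as far as I recall, the default of 16 PCIe root ports is controlled by NumOfPciExpressPorts); verify it with 'engine-config --list' before relying on it.

    # Workaround 1 -- on each host, after moving it to Maintenance in the
    # Administration Portal:
    dnf downgrade qemu-kvm-core-6.0.0
    # ...then Activate the host again from the portal.

    # Workaround 2 -- on the engine machine.
    # ASSUMPTION: the key is named NumOfPciExpressPorts (default 16);
    # check with: engine-config --list
    engine-config -s NumOfPciExpressPorts=12
    systemctl restart ovirt-engine

If I understand the mechanism correctly, the engine-config change only affects VMs started after the engine restart; already-running VMs keep their 16 root ports until restarted.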

Ah, sounds like the issue I was having with a new install/upgrade as well (https://bugzilla.redhat.com/show_bug.cgi?id=2023919). It’s definitely affecting Stream users and pretty much any new install at the moment.

I can confirm that downgrading the qemu packages to 6.0 solves the problem. I managed to successfully deploy the ME on a gluster and migrate VMs between the hosts.

- Gilboa
participants (6)
- Christoph Timm
- Darrell Budic
- Gary Pedretty
- Gilboa Davara
- Yedidyah Bar David
- Yoann Laissus