Hi Jirka,
Yes, it is strange. I have the same version on a cluster I newly installed
last week; there are nodes with CentOS Stream 8 (see my issue with SW RAID
during the oVirt Node NG install), it works properly, and it has a newer
qemu version:
rpm -qa qemu-kvm
qemu-kvm-6.2.0-41.module_el8+690+3a5f4f4f.x86_64
I'll try to reinstall this cluster from oVirt Node NG to CentOS and see.
Thank you for the hints.
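
Before reinstalling, it might also be worth comparing the installed
qemu-kvm build across all hosts in one go. A minimal sketch, assuming the
hosts are reachable over ssh (the host names are just placeholders):

# compare the qemu-kvm build on every host; ovirt1..ovirt3 are placeholders
for h in ovirt1 ovirt2 ovirt3; do
    ssh "$h" 'hostname; rpm -q qemu-kvm'
done
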
Jirka
On 14. 12. 23 16:47, Jiří Sléžka via Users wrote:
Hello,
On 12/14/23 15:22, Jirka Simon wrote:
> Thank you Jean-Louis,
>
> both links are more than 1 year old,
>
> but if I try to downgrade qemu-kvm:
>
> dnf downgrade qemu-kvm
> Last metadata expiration check: 6:58:16 ago on Thu 14 Dec 2023
> 08:09:13 AM CET.
> Package qemu-kvm of lowest version already installed, cannot
> downgrade it.
> Dependencies resolved.
> Nothing to do.
> Complete!
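
As a side note, plain dnf downgrade only offers builds that the currently
enabled repos and module stream provide, so an older build from a different
virt module version won't show up there. A rough sketch of what could still
be checked (the explicit version below is just the one your old nodes
report further down):

# list every qemu-kvm build the enabled repos actually offer
dnf --showduplicates list qemu-kvm
# if an older build is listed, it can be requested explicitly, e.g.:
dnf downgrade qemu-kvm-6.2.0-20.module_el8.7.0+1218+f626c2ff.1
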
Interesting. I upgraded to 4.5.5 recently, but I have this version of
qemu-kvm:
rpm -q qemu-kvm
qemu-kvm-6.2.0-40.module+el8.9.0+1567+092638a5.1.x86_64
But my hosts are Rocky Linux 8, I have no
ovirt-release-master/ovirt-master-snapshot repos enabled on the hosts,
just centos-release-ovirt45, and migration works fine.
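
If you want to rule out a repo mix-up on the affected host, a quick sketch
of what I would compare there (plain dnf, nothing oVirt-specific; the
"From repo" field should name the repository the installed build came from):

# which repos are enabled on the host
dnf repolist enabled
# where the installed qemu-kvm build came from ("From repo" line)
dnf info qemu-kvm
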
Cheers,
Jiri
>
>
> jirka
>
> On 14. 12. 23 13:54, Jean-Louis Dupond via Users wrote:
>>
>>
>> On 14/12/2023 13:17, Jirka Simon wrote:
>>>
>>> The source machine is clear, but on the destination host (the updated one):
>>>
>>> Couldn't destroy incoming VM: Domain not found: no domain with
>>> matching uuid '77f85710-45e7-43ca-b0f4-69f87766cc43' (vm:4054)
>>>
>>> 2023-12-14 12:35:44,492+0100 INFO (libvirt/events) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') underlying process
>>> disconnected (vm:1144)
>>> 2023-12-14 12:35:44,492+0100 INFO (libvirt/events) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Release VM resources
>>> (vm:5331)
>>> 2023-12-14 12:35:44,492+0100 INFO (libvirt/events) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Stopping connection
>>> (guestagent:421)
>>> 2023-12-14 12:35:44,493+0100 INFO (libvirt/events) [vdsm.api]
>>> START teardownImage(sdUUID='6cff1d5a-3188-407c-b217-ab33d7b92476',
>>> spUUID='a6ecdb7b-f2d0-44e1-856a-2cf1be90d7bf',
>>> imgUUID='b7f47420-25d7-427f-8925-55f0fe565606', volUUID=None)
>>> from=internal, task_id=caf7cf3f-10a7-42d9-a576-441eff750af7 (api:31)
>>> 2023-12-14 12:35:44,493+0100 INFO (libvirt/events)
>>> [storage.storagedomain] Removing image run directory
>>> '/run/vdsm/storage/6cff1d5a-3188-407c-b217-ab33d7b92476/b7f47420-25d7-427f-8925-55f0fe565606'
>>> (blockSD:1373)
>>> 2023-12-14 12:35:44,493+0100 INFO (libvirt/events)
>>> [storage.fileutils] Removing directory:
>>> /run/vdsm/storage/6cff1d5a-3188-407c-b217-ab33d7b92476/b7f47420-25d7-427f-8925-55f0fe565606
>>> (fileUtils:195)
>>> 2023-12-14 12:35:44,528+0100 WARN (vm/77f85710) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Couldn't destroy
>>> incoming VM: Domain not found: no domain with matching uuid
>>> '77f85710-45e7-43ca-b0f4-69f87766cc43' (vm:4054)
>>> 2023-12-14 12:35:44,529+0100 INFO (vm/77f85710) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Changed state to
>>> Down: VM destroyed during the startup (code=10) (vm:1744)
>>> 2023-12-14 12:35:44,530+0100 INFO (vm/77f85710) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Stopping connection
>>> (guestagent:421)
>>> 2023-12-14 12:35:44,535+0100 INFO (jsonrpc/4) [api.virt] START
>>> destroy(gracefulAttempts=1) from=::ffff:10.36.191.25,58054,
>>> flow_id=67218183, vmId=77f85710-45e7-43ca-b0f4-69f87766cc43(api:31)
>>> 2023-12-14 12:35:44,589+0100 INFO (libvirt/events) [storage.lvm]
>>> Deactivating lvs: vg=6cff1d5a-3188-407c-b217-ab33d7b92476
>>> lvs=['9102f9dd-157c-4233-ac33-004f6c11ff73'] (lvm:1850)
>>> 2023-12-14 12:35:44,702+0100 INFO (libvirt/events) [vdsm.api]
>>> FINISH teardownImage return=None from=internal,
>>> task_id=caf7cf3f-10a7-42d9-a576-441eff750af7 (api:37)
>>> 2023-12-14 12:35:44,703+0100 INFO (libvirt/events) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Stopping connection
>>> (guestagent:421)
>>> 2023-12-14 12:35:44,704+0100 WARN (libvirt/events) [root]
>>> Attempting to remove a non existing net user:
>>> ovirtmgmt/77f85710-45e7-43ca-b0f4-69f87766cc43(libvirtnetwork:191)
>>> 2023-12-14 12:35:44,704+0100 INFO (libvirt/events) [vdsm.api]
>>> START
>>> inappropriateDevices(thiefId='77f85710-45e7-43ca-b0f4-69f87766cc43')
>>> from=internal, task_id=5b362297-6aa7-4de3-8c49-3afaa0dc8cbe (api:31)
>>> 2023-12-14 12:35:44,705+0100 INFO (libvirt/events) [vdsm.api]
>>> FINISH inappropriateDevices return=None from=internal,
>>> task_id=5b362297-6aa7-4de3-8c49-3afaa0dc8cbe (api:37)
>>> 2023-12-14 12:35:44,706+0100 WARN (libvirt/events) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') trying to set state
>>> to Down when already Down (vm:711)
>>> 2023-12-14 12:35:44,706+0100 INFO (libvirt/events) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Stopping connection
>>> (guestagent:421)
>>> 2023-12-14 12:35:44,706+0100 INFO (jsonrpc/4) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Can't undefine
>>> disconnected VM '77f85710-45e7-43ca-b0f4-69f87766cc43' (vm:2434)
>>> 2023-12-14 12:35:44,706+0100 INFO (jsonrpc/4) [api.virt] FINISH
>>> destroy return={'status': {'code': 0, 'message': 'Machine
>>> destroyed'}} from=::ffff:10.36.191.25,58054, flow_id=67218183,
>>> vmId=77f85710-45e7-43ca-b0f4-69f87766cc43 (api:37)
>>>
>>>
>>> When I tried to migrate it to a different host, it worked.
>>> Unfortunately, no VM can be migrated to the affected host.
>>>
>>> Then I stopped vmId='77f85710-45e7-43ca-b0f4-69f87766cc43' and
>>> started it again, and it started on the affected host without any issue.
>>>
>>>
>>> BUT when I tried to migrate it out of the affected host, it failed
>>> as well, with this error message:
>>>
>>> 2023-12-14 12:55:41,444+0100 INFO (libvirt/events) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') CPU running: onResume
>>> (vm:6073)
>>> 2023-12-14 12:55:41,472+0100 ERROR (migsrc/77f85710) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') internal error: qemu
>>> unexpectedly closed the monitor: 2023-12-14T11:55:36.379991Z qemu-kvm:
>>> -numa node,nodeid=0,cpus=0-15,mem=1024: warning: Parameter -numa
>>> node,mem is deprecated, use -numa node,memdev instead
>>> 2023-12-14T11:55:37.613045Z qemu-kvm: Missing section footer for
>>> 0000:00:01.3/piix4_pm
>>> 2023-12-14T11:55:37.613162Z qemu-kvm: load of migration failed:
>>> Invalid argument (migration:331)
>>> 2023-12-14 12:55:41,476+0100 INFO (migsrc/77f85710) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Switching from
>>> State.STARTED to State.FAILED (migration:229)
>>> 2023-12-14 12:55:41,476+0100 ERROR (migsrc/77f85710) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Failed to migrate
>>> (migration:506)
>>> Traceback (most recent call last):
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py",
>>> line 480, in _regular_run
>>> time.time(), machineParams
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py",
>>> line 580, in _startUnderlyingMigration
>>> self._perform_with_conv_schedule(duri, muri)
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py",
>>> line 700, in _perform_with_conv_schedule
>>> self._perform_migration(duri, muri)
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py",
>>> line 602, in _perform_migration
>>> self._dom.migrateToURI3(duri, params, flags)
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py",
>>> line 162, in call
>>> return getattr(self._vm._dom, name)(*a, **kw)
>>> File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py",
>>> line 104, in f
>>> ret = attr(*args, **kwargs)
>>> File
>>> "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
>>> line 114, in wrapper
>>> ret = f(*args, **kwargs)
>>> File "/usr/lib/python3.6/site-packages/vdsm/common/function.py",
>>> line 78, in wrapper
>>> return func(inst, *args, **kwargs)
>>> File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2126,
>>> in migrateToURI3
>>> raise libvirtError('virDomainMigrateToURI3() failed')
>>> libvirt.libvirtError: internal error: qemu unexpectedly closed the
>>> monitor: 2023-12-14T11:55:36.379991Z qemu-kvm: -numa
>>> node,nodeid=0,cpus=0-15,mem=1024: warning: Parameter -numa node,mem
>>> is deprecated, use -numa node,memdev instead
>>> 2023-12-14T11:55:37.613045Z qemu-kvm: Missing section footer for
>>> 0000:00:01.3/piix4_pm
>>> 2023-12-14T11:55:37.613162Z qemu-kvm: load of migration failed:
>>> Invalid argument
>>>
>> The issue is here. See for example
>> https://bugzilla.redhat.com/show_bug.cgi?id=1730566
>> or https://gitlab.com/qemu-project/qemu/-/issues/932
>>
>> It's a qemu bug.
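>>
>> The "Missing section footer for 0000:00:01.3/piix4_pm" line typically
>> means the destination qemu cannot parse the migration stream the source
>> produced, which fits the differing qemu-kvm builds. A minimal sketch of
>> what could be compared on both hosts (the VM name is taken from the
>> engine log in this thread):
>>
>> # builds involved in the migration path
>> rpm -q qemu-kvm libvirt-daemon-kvm
>> # machine type the running VM uses (read-only libvirt connection)
>> virsh -r dumpxml ca1.access.prod.hq.sldev.cz | grep 'machine='
>> # machine types this host's qemu-kvm build supports
>> /usr/libexec/qemu-kvm -machine help | head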
>>
>>> 2023-12-14 12:55:41,477+0100 INFO (migsrc/77f85710) [virt.vm]
>>> (vmId='77f85710-45e7-43ca-b0f4-69f87766cc43') Enabling volume
>>> monitoring (thinp:72)
>>> 2023-12-14 12:55:41,485+0100 INFO (jsonrpc/4) [api.virt] START
>>> getMigrationStatus() from=::ffff:10.36.191.25,58054,
>>> flow_id=59e26abb, vmId=77f85710-45e7-43ca-b0f4-69f87766cc43 (api:31)
>>> 2023-12-14 12:55:41,485+0100 INFO (jsonrpc/4) [api.virt] FINISH
>>> getMigrationStatus return={'status': {'code': 0, 'message': 'Done'},
>>> 'migrationStats': {'status': {'code': 12, 'message':
>>> 'Fatal error during migration'}, 'progress': 0}}
>>> from=::ffff:10.36.191.25,58054, flow_id=59e26abb,
>>> vmId=77f85710-45e7-43ca-b0f4-69f87766cc43 (api:37)
>>>
>>> On the affected node:
>>>
>>> rpm -qa qemu-kvm
>>> qemu-kvm-6.2.0-41.module_el8+690+3a5f4f4f.x86_64
>>>
>>> and on the old nodes:
>>>
>>> rpm -qa qemu-kvm
>>> qemu-kvm-6.2.0-20.module_el8.7.0+1218+f626c2ff.1.x86_64
>>>
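>>> The different module tags (module_el8+690 vs module_el8.7.0+1218)
>>> suggest the two sets of hosts pull the virt module from different
>>> builds. A rough sketch of how to compare them, run on an old node and
>>> on the affected one:
>>>
>>> # which virt module stream is enabled on this host
>>> dnf module list --enabled virt
>>> # which module builds provide qemu-kvm in the enabled repos
>>> dnf module provides qemu-kvm
>>>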
>>> Jirka
>>>
>>>
>>> On 14. 12. 23 11:21, Jean-Louis Dupond wrote:
>>>> Best to look in the vdsm logs on both source and destination.
>>>> Engine gives no clues :)
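>>>>
>>>> For example, a minimal sketch (the log path is the vdsm default and
>>>> the VM id is the one from your engine log below):
>>>>
>>>> # run on both the source and the destination host
>>>> grep 77f85710-45e7-43ca-b0f4-69f87766cc43 /var/log/vdsm/vdsm.log | grep -iE 'warn|error|migrat'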
>>>>
>>>> Thanks
>>>>
>>>> On 14/12/2023 11:12, Jirka Simon wrote:
>>>>>
>>>>> Hello there,
>>>>>
>>>>> After today's update I have a problem with live migration to this
>>>>> host, with the message:
>>>>>
>>>>>
>>>>> 2023-12-14 10:00:01,089+01 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [67218183] VM
>>>>> '77f85710-45e7-43ca-b0f4-69f87766cc43'(ca1.access.prod.hq.sldev.cz) was
>>>>> unexpectedly detected as 'Down' on VDS
>>>>> '044b7175-ca36-49b2-b01b-0253f9af7e4f'(ovirt3.corp.sldev.cz)
>>>>> (expected on '858b8951-9b5a-4b8f-994e-4e11788c34d6')
>>>>> 2023-12-14 10:00:01,090+01 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
>>>>> (ForkJoinPool-1-worker-11) [67218183] START,
>>>>> DestroyVDSCommand(HostName = ovirt3.corp.sldev.cz,
>>>>> DestroyVmVDSCommandParameters:{hostId='044b7175-ca36-49b2-b01b-0253f9af7e4f',
>>>>> vmId='77f85710-45e7-43ca-b0f4-69f87766cc43', secondsToWait='0',
>>>>> gracefully='false', reason='', ignoreNoVm='true'}), log id: 696e7f0e
>>>>> 2023-12-14 10:00:01,336+01 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
>>>>> (ForkJoinPool-1-worker-11) [67218183] FINISH, DestroyVDSCommand,
>>>>> return: , log id: 696e7f0e
>>>>> 2023-12-14 10:00:01,337+01 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [67218183] VM
>>>>> '77f85710-45e7-43ca-b0f4-69f87766cc43'(ca1.access.prod.hq.sldev.cz) was
>>>>> unexpectedly detected as 'Down' on VDS
>>>>> '044b7175-ca36-49b2-b01b-0253f9af7e4f'(ovirt3.corp.sldev.cz)
>>>>> (expected on '858b8951-9b5a-4b8f-994e-4e11788c34d6')
>>>>> 2023-12-14 10:00:01,337+01 ERROR
>>>>> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
>>>>> (ForkJoinPool-1-worker-11) [67218183] Migration of VM
>>>>> 'ca1.access.prod.hq.sldev.cz' to host 'ovirt3.corp.sldev.cz'
>>>>> failed: VM destroyed during the startup.
>>>>>
>>>>> When I stop a VM and start it again, it starts on the affected host
>>>>> without any problem, but migration doesn't work.
>>>>>
>>>>>
>>>>> Thank you for any help.
>>>>>
>>>>>
>>>>> Jirka
>>>>>
>>>>>
>>
>
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WK...