Thanks for your response! I guess my path will be to migrate to el9 ovirt-node then.
On 25 April 2024 08:18:57 CEST, "Nathanaël Blanchet" <blanchet(a)abes.fr> wrote:
Hello,
I already mentioned this issue: it is caused by the qemu version embedded in the el8-based 4.5.5 node, which is not compatible with the 4.5.4 one. Once all of your hosts are on 4.5.5, migration works again, but getting there means shutting down your production VMs.
The better solution, if your hardware is compatible (some HBAs are no longer supported), is to use the el9-based ovirt-node 4.5.5, which supports migration from any qemu version.
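If it helps, a quick way to see which qemu build every host is on before planning the shutdown window is something like this (just a sketch; the host names are placeholders and I assume SSH access from an admin machine):

    for h in host1.example.org host2.example.org host3.example.org; do
        echo "== $h =="
        ssh "$h" rpm -q qemu-kvm
    done

Hosts reporting the 6.2.0-41.module_el8+690+3a5f4f4f build are on the el8 4.5.5 qemu, while 6.2.0-20.module_el8.7.0+1218+f626c2ff.1 is the 4.5.4 one (see the versions quoted below).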
On 24 Apr 2024 20:59, jonas(a)rabe.ch wrote:
Hello all
After upgrading one node from 4.5.4 to 4.5.5, migration fails for some of the VMs. According to the logs below, the error appears to come from qemu-kvm.
Downgrading individual packages does not seem to be possible on oVirt Node; the only workaround so far has been rebooting back to the 4.5.4 layer (thanks to imgbase). I also think this is related to
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/GGMEGFI4OGZ5...,
but that thread is already several months old. Has anyone had the same experience and found a solution?
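For reference, this is roughly how I check the layers and get back onto the old one (a rough sketch; imgbase sub-commands may vary slightly between imgbased versions):

    # show the layer currently booted and all layers present on disk
    imgbase w
    imgbase layout

    # ovirt-node-ng-4.5.4-0.20221206.0+1 is still listed, so picking its entry
    # in the boot menu after a reboot brings the host back to 4.5.4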
Packages:
Before (4.5.4):
- `current_layer: ovirt-node-ng-4.5.4-0.20221206.0+1`
- Kernel: 4.18.0-408.el8.x86_64
- qemu-kvm.x86_64: 15:6.2.0-20.module_el8.7.0+1218+f626c2ff.1
- vdsm.x86_64: 4.50.3.4-1.el8
- libvirt.x86_64: 8.0.0-10.module_el8.7.0+1218+f626c2ff
- ovirt-host.x86_64: 4.5.0-3.el8
After (4.5.5):
- `current_layer: ovirt-node-ng-4.5.5-0.20231130.0+1`
- Kernel: 4.18.0-526.el8.x86_64
- qemu-kvm.x86_64: 15:6.2.0-41.module_el8+690+3a5f4f4f
- vdsm.x86_64: 4.50.5.1-1.el8
- libvirt.x86_64: 8.0.0-22.module_el8+596+27e96798
- ovirt-host.x86_64: 4.5.0-3.el8
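In case anyone wants to compare, the same information can be collected on a node roughly like this (a sketch assuming the standard oVirt Node tooling):

    nodectl info | grep current_layer
    uname -r
    rpm -q qemu-kvm vdsm libvirt ovirt-host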
/var/log/libvirt/qemu/vm-0008.log:
2024-04-24 18:05:59.214+0000: starting up libvirt version: 8.0.0, package: 22.module_el8+596+27e96798 (builder(a)centos.org, 2023-07-31-14:36:36, ), qemu version: 6.2.0qemu-kvm-6.2.0-41.module_el8+690+3a5f4f4f, kernel: 4.18.0-526.el8.x86_64, hostname: server-007.XXX.YYY
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
HOME=/var/lib/libvirt/qemu/domain-14-vm-0008 \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-14-vm-0008/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-14-vm-0008/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-14-vm-0008/.config \
/usr/libexec/qemu-kvm \
-name guest=vm-0008,debug-threads=on \
-S \
-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-14-vm-0008/master-key.aes"}' \
-machine pc-i440fx-rhel7.6.0,usb=off,dump-guest-core=off \
-accel kvm \
-cpu Cascadelake-Server-noTSX,mpx=off,hypervisor=on,pku=on,arch-capabilities=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on \
-m size=8388608k,slots=16,maxmem=16777216k \
-overcommit mem-lock=off \
-smp 2,maxcpus=16,sockets=16,dies=1,cores=1,threads=1 \
-numa node,nodeid=0,cpus=0-15,mem=8192 \
-uuid 790499a6-1391-4c03-b270-e47ebfb851ff \
-smbios type=1,manufacturer=oVirt,product=RHEL,version=8.7.2206.0-1.el8,serial=00000000-0000-0000-0000-ac1f6bcbc1de,uuid=790499a6-1391-4c03-b270-e47ebfb851ff,family=oVirt \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=50,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=2024-04-24T18:05:58,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global PIIX4_PM.disable_s3=1 \
-global PIIX4_PM.disable_s4=1 \
-boot strict=on \
-device piix3-usb-uhci,id=ua-ce46234b-4849-495e-81be-f29ac9f354a9,bus=pci.0,addr=0x1.0x2 \
-device virtio-scsi-pci,id=ua-b36161d3-1a42-496e-8b3b-7fb4fc5844b8,bus=pci.0,addr=0x3 \
-device virtio-serial-pci,id=ua-6504edef-b6c0-4812-a556-63517376c49e,max_ports=16,bus=pci.0,addr=0x4 \
-device ide-cd,bus=ide.1,unit=0,id=ua-2bb69b71-f9a3-4c95-b3af-dd7ff63d249f,werror=report,rerror=report \
-blockdev '{"driver":"file","filename":"/run/vdsm/payload/790499a6-1391-4c03-b270-e47ebfb851ff.img","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":true,"driver":"raw","file":"libvirt-3-storage"}' \
-device ide-cd,bus=ide.1,unit=1,drive=libvirt-3-format,id=ua-b1b8cc32-d221-4c83-8b6d-41c953c731bd,werror=report,rerror=report \
-blockdev '{"driver":"file","filename":"/rhev/data-center/mnt/glusterSD/server-005.XXX.YYY:_tier1-ovirt-data-01/a047cdc3-1138-406f-89c8-efdc3924ce67/images/a79e4b9e-25ef-444a-87ab-eab1ba2f6eee/2bda74c4-019e-4dc2-ad8d-d867c869e784","aio":"threads","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device scsi-hd,bus=ua-b36161d3-1a42-496e-8b3b-7fb4fc5844b8.0,channel=0,scsi-id=0,lun=0,device_id=a79e4b9e-25ef-444a-87ab-eab1ba2f6eee,drive=libvirt-2-format,id=ua-a79e4b9e-25ef-444a-87ab-eab1ba2f6eee,bootindex=1,write-cache=on,serial=a79e4b9e-25ef-444a-87ab-eab1ba2f6eee,werror=stop,rerror=stop \
-blockdev '{"driver":"file","filename":"/rhev/data-center/mnt/glusterSD/server-005.XXX.YYY:_tier1-ovirt-data-01/a047cdc3-1138-406f-89c8-efdc3924ce67/images/c100660a-19c2-474f-af92-6a5bfb5a6698/f15cea6a-d42a-4db6-9f82-4f2c7ca4295d","aio":"threads","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device scsi-hd,bus=ua-b36161d3-1a42-496e-8b3b-7fb4fc5844b8.0,channel=0,scsi-id=0,lun=1,device_id=c100660a-19c2-474f-af92-6a5bfb5a6698,drive=libvirt-1-format,id=ua-c100660a-19c2-474f-af92-6a5bfb5a6698,write-cache=on,serial=c100660a-19c2-474f-af92-6a5bfb5a6698,werror=stop,rerror=stop \
-netdev tap,fds=51:53,id=hostua-9f570dc0-c9a6-4f46-9283-25468ace64d1,vhost=on,vhostfds=54:55 \
-device virtio-net-pci,mq=on,vectors=6,host_mtu=1500,netdev=hostua-9f570dc0-c9a6-4f46-9283-25468ace64d1,id=ua-9f570dc0-c9a6-4f46-9283-25468ace64d1,mac=00:1a:4a:16:01:58,bus=pci.0,addr=0x6 \
-netdev tap,fds=56:57,id=hostua-34d14d9d-0427-47e0-a2f4-57c9450c8ecc,vhost=on,vhostfds=58:59 \
-device virtio-net-pci,mq=on,vectors=6,host_mtu=1500,netdev=hostua-34d14d9d-0427-47e0-a2f4-57c9450c8ecc,id=ua-34d14d9d-0427-47e0-a2f4-57c9450c8ecc,mac=00:1a:4a:16:01:59,bus=pci.0,addr=0x7 \
-chardev socket,id=charchannel0,fd=40,server=on,wait=off \
-device virtserialport,bus=ua-6504edef-b6c0-4812-a556-63517376c49e.0,nr=1,chardev=charchannel0,id=channel0,name=ovirt-guest-agent.0 \
-chardev socket,id=charchannel1,fd=45,server=on,wait=off \
-device virtserialport,bus=ua-6504edef-b6c0-4812-a556-63517376c49e.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 \
-chardev spicevmc,id=charchannel2,name=vdagent \
-device virtserialport,bus=ua-6504edef-b6c0-4812-a556-63517376c49e.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 \
-audiodev '{"id":"audio1","driver":"spice"}' \
-spice port=5900,tls-port=5901,addr=10.128.16.7,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on \
-device qxl-vga,id=ua-c2f62823-2d82-4aac-b300-72ceafd6924d,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 \
-incoming defer \
-device virtio-balloon-pci,id=ua-0a31723e-6654-4ab8-995d-e0d314964c24,bus=pci.0,addr=0x5 \
-device vmcoreinfo \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2024-04-24 18:05:59.214+0000: Domain id=14 is tainted: hook-script
2024-04-24 18:05:59.215+0000: Domain id=14 is tainted: custom-hypervisor-feature
2024-04-24T18:05:59.313461Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=8192: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead
2024-04-24T18:06:04.827095Z qemu-kvm: Missing section footer for 0000:00:01.3/piix4_pm
2024-04-24T18:06:04.827282Z qemu-kvm: load of migration failed: Invalid argument
2024-04-24 18:06:05.008+0000: shutting down, reason=failed
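In case anyone wants to check whether they are hitting the same thing, the destination host's per-VM logs in the same directory can be scanned for this failure (a minimal sketch using the log path shown above):

    grep -l 'load of migration failed' /var/log/libvirt/qemu/*.log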