Re: Migration fails after upgrading target host to oVirt Node 4.5.5
by Jonas
Thanks for your response! I guess my path will be to migrate to el9 ovirt-node then.
On 25 April 2024 at 08:18:57 CEST, "Nathanaël Blanchet" <blanchet(a)abes.fr> wrote:
>Hello,
>I already mentioned this issue: it is caused by the qemu version embedded in the el8-based 4.5.5, which is not compatible with the one in 4.5.4. If all your hosts are on 4.5.5, migration works, but getting there means shutting down your production VMs.
>By the way, if your hardware is compatible (some HBAs are no longer supported), the solution is to use the el9-based ovirt-node 4.5.5, which supports migration from any qemu version.
Migration fails after upgrading target host to oVirt Node 4.5.5
by jonas@rabe.ch
Hello all
After upgrading one node from 4.5.4 to 4.5.5, migration to it fails for some of the VMs. Judging from the logs below, I believe the error lies with qemu-kvm.
Downgrading the packages does not seem to be possible with oVirt Node; the only workaround so far has been rebooting back into the 4.5.4 layer (thanks to imgbase). I also think it is related to https://lists.ovirt.org/archives/list/users@ovirt.org/thread/GGMEGFI4OGZ5..., but that thread is already several months old. Has anyone had the same experience and found a solution?
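For reference, a minimal sketch of how the image layers can be inspected on the oVirt Node host before rolling back; nodectl and imgbase ship with oVirt Node, and the rollback itself amounts to booting the older 4.5.4 layer:

nodectl info    # show the available layers and the current/default boot entry
imgbase layout  # list the base images and their layers
imgbase w       # print the layer that is currently booted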
Packages:
Before (4.5.4):
- `current_layer: ovirt-node-ng-4.5.4-0.20221206.0+1`
- Kernel: 4.18.0-408.el8.x86_64
- qemu-kvm.x86_64: 15:6.2.0-20.module_el8.7.0+1218+f626c2ff.1
- vdsm.x86_64: 4.50.3.4-1.el8
- libvirt.x86_64: 8.0.0-10.module_el8.7.0+1218+f626c2ff
- ovirt-host.x86_64: 4.5.0-3.el8
After (4.5.5):
- `current_layer: ovirt-node-ng-4.5.5-0.20231130.0+1`
- Kernel: 4.18.0-526.el8.x86_64
- qemu-kvm.x86_64: 15:6.2.0-41.module_el8+690+3a5f4f4f
- vdsm.x86_64: 4.50.5.1-1.el8
- libvirt.x86_64: 8.0.0-22.module_el8+596+27e96798
- ovirt-host.x86_64: 4.5.0-3.el8
/var/log/libvirt/qemu/vm-0008.log:
2024-04-24 18:05:59.214+0000: starting up libvirt version: 8.0.0, package: 22.module_el8+596+27e96798 (builder(a)centos.org, 2023-07-31-14:36:36, ), qemu version: 6.2.0qemu-kvm-6.2.0-41.module_el8+690+3a5f4f4f, kernel: 4.18.0-526.el8.x86_64, hostname: server-007.XXX.YYY
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
HOME=/var/lib/libvirt/qemu/domain-14-vm-0008 \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-14-vm-0008/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-14-vm-0008/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-14-vm-0008/.config \
/usr/libexec/qemu-kvm \
-name guest=vm-0008,debug-threads=on \
-S \
-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-14-vm-0008/master-key.aes"}' \
-machine pc-i440fx-rhel7.6.0,usb=off,dump-guest-core=off \
-accel kvm \
-cpu Cascadelake-Server-noTSX,mpx=off,hypervisor=on,pku=on,arch-capabilities=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on \
-m size=8388608k,slots=16,maxmem=16777216k \
-overcommit mem-lock=off \
-smp 2,maxcpus=16,sockets=16,dies=1,cores=1,threads=1 \
-numa node,nodeid=0,cpus=0-15,mem=8192 \
-uuid 790499a6-1391-4c03-b270-e47ebfb851ff \
-smbios type=1,manufacturer=oVirt,product=RHEL,version=8.7.2206.0-1.el8,serial=00000000-0000-0000-0000-ac1f6bcbc1de,uuid=790499a6-1391-4c03-b270-e47ebfb851ff,family=oVirt \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=50,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=2024-04-24T18:05:58,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global PIIX4_PM.disable_s3=1 \
-global PIIX4_PM.disable_s4=1 \
-boot strict=on \
-device piix3-usb-uhci,id=ua-ce46234b-4849-495e-81be-f29ac9f354a9,bus=pci.0,addr=0x1.0x2 \
-device virtio-scsi-pci,id=ua-b36161d3-1a42-496e-8b3b-7fb4fc5844b8,bus=pci.0,addr=0x3 \
-device virtio-serial-pci,id=ua-6504edef-b6c0-4812-a556-63517376c49e,max_ports=16,bus=pci.0,addr=0x4 \
-device ide-cd,bus=ide.1,unit=0,id=ua-2bb69b71-f9a3-4c95-b3af-dd7ff63d249f,werror=report,rerror=report \
-blockdev '{"driver":"file","filename":"/run/vdsm/payload/790499a6-1391-4c03-b270-e47ebfb851ff.img","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":true,"driver":"raw","file":"libvirt-3-storage"}' \
-device ide-cd,bus=ide.1,unit=1,drive=libvirt-3-format,id=ua-b1b8cc32-d221-4c83-8b6d-41c953c731bd,werror=report,rerror=report \
-blockdev '{"driver":"file","filename":"/rhev/data-center/mnt/glusterSD/server-005.XXX.YYY:_tier1-ovirt-data-01/a047cdc3-1138-406f-89c8-efdc3924ce67/images/a79e4b9e-25ef-444a-87ab-eab1ba2f6eee/2bda74c4-019e-4dc2-ad8d-d867c869e784","aio":"threads","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device scsi-hd,bus=ua-b36161d3-1a42-496e-8b3b-7fb4fc5844b8.0,channel=0,scsi-id=0,lun=0,device_id=a79e4b9e-25ef-444a-87ab-eab1ba2f6eee,drive=libvirt-2-format,id=ua-a79e4b9e-25ef-444a-87ab-eab1ba2f6eee,bootindex=1,write-cache=on,serial=a79e4b9e-25ef-444a-87ab-eab1ba2f6eee,werror=stop,rerror=stop \
-blockdev '{"driver":"file","filename":"/rhev/data-center/mnt/glusterSD/server-005.XXX.YYY:_tier1-ovirt-data-01/a047cdc3-1138-406f-89c8-efdc3924ce67/images/c100660a-19c2-474f-af92-6a5bfb5a6698/f15cea6a-d42a-4db6-9f82-4f2c7ca4295d","aio":"threads","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device scsi-hd,bus=ua-b36161d3-1a42-496e-8b3b-7fb4fc5844b8.0,channel=0,scsi-id=0,lun=1,device_id=c100660a-19c2-474f-af92-6a5bfb5a6698,drive=libvirt-1-format,id=ua-c100660a-19c2-474f-af92-6a5bfb5a6698,write-cache=on,serial=c100660a-19c2-474f-af92-6a5bfb5a6698,werror=stop,rerror=stop \
-netdev tap,fds=51:53,id=hostua-9f570dc0-c9a6-4f46-9283-25468ace64d1,vhost=on,vhostfds=54:55 \
-device virtio-net-pci,mq=on,vectors=6,host_mtu=1500,netdev=hostua-9f570dc0-c9a6-4f46-9283-25468ace64d1,id=ua-9f570dc0-c9a6-4f46-9283-25468ace64d1,mac=00:1a:4a:16:01:58,bus=pci.0,addr=0x6 \
-netdev tap,fds=56:57,id=hostua-34d14d9d-0427-47e0-a2f4-57c9450c8ecc,vhost=on,vhostfds=58:59 \
-device virtio-net-pci,mq=on,vectors=6,host_mtu=1500,netdev=hostua-34d14d9d-0427-47e0-a2f4-57c9450c8ecc,id=ua-34d14d9d-0427-47e0-a2f4-57c9450c8ecc,mac=00:1a:4a:16:01:59,bus=pci.0,addr=0x7 \
-chardev socket,id=charchannel0,fd=40,server=on,wait=off \
-device virtserialport,bus=ua-6504edef-b6c0-4812-a556-63517376c49e.0,nr=1,chardev=charchannel0,id=channel0,name=ovirt-guest-agent.0 \
-chardev socket,id=charchannel1,fd=45,server=on,wait=off \
-device virtserialport,bus=ua-6504edef-b6c0-4812-a556-63517376c49e.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 \
-chardev spicevmc,id=charchannel2,name=vdagent \
-device virtserialport,bus=ua-6504edef-b6c0-4812-a556-63517376c49e.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 \
-audiodev '{"id":"audio1","driver":"spice"}' \
-spice port=5900,tls-port=5901,addr=10.128.16.7,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on \
-device qxl-vga,id=ua-c2f62823-2d82-4aac-b300-72ceafd6924d,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 \
-incoming defer \
-device virtio-balloon-pci,id=ua-0a31723e-6654-4ab8-995d-e0d314964c24,bus=pci.0,addr=0x5 \
-device vmcoreinfo \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2024-04-24 18:05:59.214+0000: Domain id=14 is tainted: hook-script
2024-04-24 18:05:59.215+0000: Domain id=14 is tainted: custom-hypervisor-feature
2024-04-24T18:05:59.313461Z qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=8192: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead
2024-04-24T18:06:04.827095Z qemu-kvm: Missing section footer for 0000:00:01.3/piix4_pm
2024-04-24T18:06:04.827282Z qemu-kvm: load of migration failed: Invalid argument
2024-04-24 18:06:05.008+0000: shutting down, reason=failed
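A quick way to confirm the package skew between hosts before migrating is to compare the relevant builds on both sides; a minimal sketch, assuming passwordless SSH, with the host names as placeholders:

for h in source-host destination-host; do
    echo "== $h =="
    ssh "$h" 'rpm -q qemu-kvm libvirt-daemon vdsm; imgbase w'
done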
oVirt 4.5.6
by tasnadi.peter@kifu.gov.hu
Hi,
When updating the hosted engine (HE) I noticed that version 4.5.6 is available, but the 4.5.6 node-ng ISO image is not available at this link:
https://resources.ovirt.org/pub/ovirt-4.5/iso/ovirt-node-ng-installer/
Even when updating the oVirt node, version 4.5.6 is not available.
The oVirt node was installed using the node-ng installer.
Until now, these versions have always been released together.
What is the reason for this?
Can I upgrade HE to 4.5.6 regardless?
Could the version difference be a problem?
Thanks for the help!
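A hedged sketch of how each side can be checked, assuming the usual package names (ovirt-engine on the HE VM, ovirt-node-ng-image-update on the node):

rpm -q ovirt-engine                           # on the hosted engine VM: engine version
dnf check-update ovirt-node-ng-image-update   # on the node: is a newer image published?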
Host deploy fails in 4.5.6
by Lars Stolpe
Hello,
I have the same issue as some others and tried to follow the instructions to downgrade ansible-core.
Active versions:
oVirt: 4.5.6-1
ansible-core: 2.16.3-2
It is not possible to downgrade to 2.12 because of a dependency:
Error:
Problem: problem with installed package ovirt-engine-4.5.6-1.el8.noarch
- package ovirt-engine-4.5.6-1.el8.noarch from @System requires ansible-core >= 2.13.0, but none of the providers can be installed
The error message in the deploy log:
"stdout" : "fatal: [blxbwf855]: FAILED! => {\"changed\": true, \"cmd\": [\"vdsm-tool\", \"ovn-config\", \"192.168.1.229\", \"blxbwf855\"], \"delta\": \"0:00:02.749603\", \"end\": \"2024-04-23 11:26:13.304995\", \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2024-04-23 11:26:10.555392\", \"stderr\": \"Traceback (most recent call last):\\n File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 117, in get_network\\n return networks[net_name]\\nKeyError: 'blxbwf855'\\n\\nDuring handling of the above exception, another exception occurred:\\n\\nTraceback (most recent call last):\\n File \\\"/usr/bin/vdsm-tool\\\", line 195, in main\\n return tool_command[cmd][\\\"command\\\"](*args)\\n File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 63, in ovn_config\\n ip_address = get_ip_addr(get_network(network_caps(), net_name))\\n File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 119, in get_network\\n
raise NetworkNotFoundError(net_name)\\nvdsm.tool.ovn_config.NetworkNotFoundError: blxbwf855\", \"stderr_lines\": [\"Traceback (most recent call last):\", \" File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 117, in get_network\", \" return networks[net_name]\", \"KeyError: 'blxbwf855'\", \"\", \"During handling of the above exception, another exception occurred:\", \"\", \"Traceback (most recent call last):\", \" File \\\"/usr/bin/vdsm-tool\\\", line 195, in main\", \" return tool_command[cmd][\\\"command\\\"](*args)\", \" File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 63, in ovn_config\", \" ip_address = get_ip_addr(get_network(network_caps(), net_name))\", \" File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 119, in get_network\", \" raise NetworkNotFoundError(net_name)\", \"vdsm.tool.ovn_config.NetworkNotFoundError: blxbwf855\"], \"stdout\": \"\", \"stdout_lines\": []}",
I hope someone can help.
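As a hedged sketch (not a fix for the OVN error itself), the repositories can at least be queried for an ansible-core build that still satisfies the ovirt-engine dependency (>= 2.13) but predates 2.16; the 2.15 version below is purely hypothetical:

dnf --showduplicates list ansible-core
# if an intermediate build is actually offered, e.g.:
# dnf downgrade 'ansible-core-2.15.*'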
Host deploy failure: Configure OVN for oVirt
by stephan.badenhorst@fnb.co.za
Good day,
I am running into a problem during host deploy via the oVirt Engine GUI since upgrading to ovirt-engine-4.5.5-1.el8. The "Configure OVN for oVirt" task seems to fail when trying to run the vdsm-tool ovn-config command. Host deploy used to work fine when the engine was on version 4.5.4.
Can anyone guide me onto the right path to get past this issue?
It does not seem to be a new problem: https://lists.ovirt.org/archives/list/users@ovirt.org/thread/IDLGSBQFX35E...
Log extract:
2024-02-01 14:48:19 SAST - TASK [ovirt-provider-ovn-driver : Configure OVN for oVirt] *********************
.
.
.
"stdout" : "fatal: [mob-r1-l-ovirt-aa-1-23.x.fnb.co.za]: FAILED! => {\"changed\": true, \"cmd\": [\"vdsm-tool\", \"ovn-config\", \"192.168.2.100\", \"host23.mydomain.com\"], \"delta\": \"0:00:00.538143\", \"end
\": \"2024-02-01 14:48:20.596823\", \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2024-02-01 14:48:20.058680\", \"stderr\": \"Traceback (most recent call last):\\n File \\\"/usr/lib/python3.6/site-packages/vdsm/t
ool/ovn_config.py\\\", line 117, in get_network\\n return networks[net_name]\\nKeyError: 'host23.mydomain.com'\\n\\nDuring handling of the above exception, another exception occurred:\\n\\nTraceback (most rec
ent call last):\\n File \\\"/usr/bin/vdsm-tool\\\", line 195, in main\\n return tool_command[cmd][\\\"command\\\"](*args)\\n File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 63, in ovn_config\\n
ip_address = get_ip_addr(get_network(network_caps(), net_name))\\n File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 119, in get_network\\n raise NetworkNotFoundError(net_name)\\nvdsm.tool.ovn
_config.NetworkNotFoundError: host23.mydomain.com\", \"stderr_lines\": [\"Traceback (most recent call last):\", \" File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 117, in get_network
\", \" return networks[net_name]\", \"KeyError: 'host23.mydomain.com'\", \"\", \"During handling of the above exception, another exception occurred:\", \"\", \"Traceback (most recent call last):\", \" File \
\\"/usr/bin/vdsm-tool\\\", line 195, in main\", \" return tool_command[cmd][\\\"command\\\"](*args)\", \" File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 63, in ovn_config\", \" ip_address =
get_ip_addr(get_network(network_caps(), net_name))\", \" File \\\"/usr/lib/python3.6/site-packages/vdsm/tool/ovn_config.py\\\", line 119, in get_network\", \" raise NetworkNotFoundError(net_name)\", \"vdsm.tool.ovn_config.
NetworkNotFoundError: host23.mydomain.com\"], \"stdout\": \"\", \"stdout_lines\": []}",
Thanks in advance!!
Stephan
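From the traceback, vdsm-tool ovn-config is being handed the host name ('host23.mydomain.com') where it expects a tunneling IP or the name of a vdsm network, so get_network() cannot find it. A minimal sketch of running that step by hand on the host, assuming 'ovirtmgmt' is the management network carrying the host's address (adjust to your setup):

vdsm-tool list-nets                            # networks vdsm currently knows about
vdsm-tool ovn-config 192.168.2.100 ovirtmgmt   # OVN central IP, then local network name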
Changing disk QoS causes segfault with IO-Threads enabled (oVirt 4.3.0.4-1.el7)
by jloh@squiz.net
We recently upgraded to 4.3.0 and have found that changing disk QoS settings on VMs while IO-Threads is enabled causes them to segfault and the VM to reboot. We've been able to replicate this across several VMs. VMs with IO-Threads disabled do not segfault when the QoS is changed.
Mar 1 11:49:06 srvXX kernel: IO iothread1[30468]: segfault at fffffffffffffff8 ip 0000557649f2bd24 sp 00007f80de832f60 error 5 in qemu-kvm[5576498dd000+a03000]
Mar 1 11:49:06 srvXX abrt-hook-ccpp: invalid number 'iothread1'
Mar 1 11:49:11 srvXX libvirtd: 2019-03-01 00:49:11.116+0000: 13365: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Happy to supply more logs if they'll help, but I'm just wondering whether anyone else has experienced this or knows of a current fix other than turning IO threads off.
Cheers.
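One way to see which VMs are exposed to this is to check whether a given guest actually has IO threads defined in its libvirt domain; a minimal sketch, with the VM name as a placeholder:

virsh -r dumpxml my-vm | grep -iE '<iothreads>|iothread='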
Hosted Engine deployment failure ovirt 4.3.10
by john.roche@crick.ac.uk
Hi, I'm getting this issue when deploying the hosted engine onto iSCSI storage using oVirt 4.3.10.
The disk volume is new, with no issues, on ThinkSystem SR650 hardware running CentOS Linux release 7.8.2003 (Core).
I used both the GUI and the Ansible script, and both fail. This is the farthest I can get, and it's driving me mad; please help.
The error logs are below:
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Initialize lockspace volume]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 5, "changed": true, "cmd": ["hosted-engine", "--reinitialize-lockspace", "--force"],
more errors
"delta": "0:00:00.668430", "end": "2024-04-22 16:52:56.859768", "msg": "non-zero return code", "rc": 1, "start": "2024-04-22 16:52:56.191338", "stderr": "Traceback (most recent call last):\n File \"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main\n \"__main__\", fname, loader, pkg_name)\n File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\n exec code in run_globals\n File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py\", line 30, in <module>\n ha_cli.reset_lockspace(force)\n File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py\", line 286, in reset_lockspace\n stats = broker.get_stats_from_storage()\n File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", line 146, in get_stats_from_storage\n result = self._proxy.get_stats()\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1233, in __call__\n return self.__send(self.__name, args)\n File \"/usr/lib64/python2.7/
xmlrpclib.py\", line 1591, in __request\n verbose=self.__verbose\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1273, in request\n return self.single_request(host, handler, request_body, verbose)\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1301, in single_request\n self.send_content(h, request_body)\n File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1448, in send_content\n connection.endheaders(request_body)\n File \"/usr/lib64/python2.7/httplib.py\", line 1052, in endheaders\n self._send_output(message_body)\n File \"/usr/lib64/python2.7/httplib.py\", line 890, in _send_output\n self.send(msg)\n File \"/usr/lib64/python2.7/httplib.py\", line 852, in send\n self.connect()\n File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py\", line 60, in connect\n self.sock.connect(base64.b16decode(self.host))\n File \"/usr/lib64/python2.7/socket.py\", line 224, in meth\n return getattr(self._sock,name)(*args)\nsocket.error: [Errno 2] No such file or directory"
, "stderr_lines": ["Traceback (most recent call last):", " File \"/usr/lib64/python2.7/runpy.py\", line 162, in _run_module_as_main", " \"__main__\", fname, loader, pkg_name)", " File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code", " exec code in run_globals", " File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py\", line 30, in <module>", " ha_cli.reset_lockspace(force)", " File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py\", line 286, in reset_lockspace", " stats = broker.get_stats_from_storage()", " File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py\", line 146, in get_stats_from_storage", " result = self._proxy.get_stats()", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1233, in __call__", " return self.__send(self.__name, args)", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1591, in __request", " verbose=self.__verbose", " File \"/usr/lib64/python2.7/xmlrpcli
b.py\", line 1273, in request", " return self.single_request(host, handler, request_body, verbose)", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1301, in single_request", " self.send_content(h, request_body)", " File \"/usr/lib64/python2.7/xmlrpclib.py\", line 1448, in send_content", " connection.endheaders(request_body)", " File \"/usr/lib64/python2.7/httplib.py\", line 1052, in endheaders", " self._send_output(message_body)", " File \"/usr/lib64/python2.7/httplib.py\", line 890, in _send_output", " self.send(msg)", " File \"/usr/lib64/python2.7/httplib.py\", line 852, in send", " self.connect()", " File \"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py\", line 60, in connect", " self.sock.connect(base64.b16decode(self.host))", " File \"/usr/lib64/python2.7/socket.py\", line 224, in meth", " return getattr(self._sock,name)(*args)", "socket.error: [Errno 2] No such file or directory"], "stdout": "", "stdout_lines": []}
Re: [External] : Deploying the self-hosted engine failed
by sattha@tracthai.com
My environment
- running on VMware 8.0
- 8 vCPU
- 32 GB Memory
- 100 GB HDD
- oVirt image: ovirt-node-ng-installer-4.5.5-2023113015.el8.iso
I ran the command "hosted-engine --deploy" and checked the error log in /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20240422103113-qojupj.log:
2024-04-22 10:54:37,725+0700 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/commandPath=str:'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/root/bin'
2024-04-22 10:54:37,725+0700 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/reboot=bool:'False'
2024-04-22 10:54:37,726+0700 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/rebootAllow=bool:'True'
2024-04-22 10:54:37,726+0700 DEBUG otopi.context context.dumpEnvironment:775 ENV SYSTEM/rebootDeferTime=int:'10'
2024-04-22 10:54:37,726+0700 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
2024-04-22 10:54:37,728+0700 DEBUG otopi.context context._executeMethod:127 Stage pre-terminate METHOD otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate
2024-04-22 10:54:37,729+0700 DEBUG otopi.context context._executeMethod:136 otopi.plugins.otopi.dialog.cli.Plugin._pre_terminate condition False
2024-04-22 10:54:37,730+0700 INFO otopi.context context.runSequence:616 Stage: Termination
2024-04-22 10:54:37,731+0700 DEBUG otopi.context context.runSequence:620 STAGE terminate
2024-04-22 10:54:37,732+0700 DEBUG otopi.context context._executeMethod:127 Stage terminate METHOD otopi.plugins.gr_he_common.core.misc.Plugin._terminate
2024-04-22 10:54:37,733+0700 ERROR otopi.plugins.gr_he_common.core.misc misc._terminate:167 Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
2024-04-22 10:54:37,734+0700 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND Log file is located at
2024-04-22 10:54:37,734+0700 DEBUG otopi.plugins.otopi.dialog.human dialog.__logString:204 DIALOG:SEND /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20240422103113-qojupj.log
2024-04-22 10:54:37,736+0700 DEBUG otopi.context context._executeMethod:127 Stage terminate METHOD otopi.plugins.otopi.dialog.human.Plugin._terminate
2024-04-22 10:54:37,743+0700 DEBUG otopi.context context._executeMethod:127 Stage terminate METHOD otopi.plugins.otopi.dialog.machine.Plugin._terminate
2024-04-22 10:54:37,743+0700 DEBUG otopi.context context._executeMethod:136 otopi.plugins.otopi.dialog.machine.Plugin._terminate condition False
2024-04-22 10:54:37,745+0700 DEBUG otopi.context context._executeMethod:127 Stage terminate METHOD otopi.plugins.otopi.core.log.Plugin._terminate
Thank you for your support and reply.
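The tail of the log quoted above only carries the generic failure message; the first real error is usually further up in the same file. A minimal sketch for finding it (same log path as above):

grep -nE 'ERROR' /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20240422103113-qojupj.log | head -n 20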
ISCSI multipath issues
by john.roche@crick.ac.uk
Hi, I'm using oVirt 4.3.10.
iSCSI can pick up the target,
but multipath -ll doesn't show anything.
I've been spending all day trying to get this working again.
The multipath.conf has no changes from me.
Any help?
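A minimal diagnostic sketch for this situation: confirm the iSCSI session is actually logged in and has seen the LUNs, then force multipath to rebuild its maps (iscsi-initiator-utils and device-mapper-multipath assumed installed, as is usual on oVirt hosts):

iscsiadm -m session -P 1      # sessions and their attached SCSI devices
iscsiadm -m session --rescan  # rescan all logged-in sessions for new LUNs
multipath -r                  # force a reload of the multipath maps
multipath -ll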
How to Add a host as storage in oVirt-4.5
by Ankit Sharma
Hi Saviours,
I have been using oVirt with SAN storage for a decade now, and it is working like a charm.
Now I want to upgrade the storage, but I want to use a host for storage, so how can I add local storage and use it for VMs?
Regards,
AS
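A hedged sketch of the host-side preparation for a local storage domain, assuming the documented requirement that the path be owned by vdsm:kvm (UID/GID 36) with 0755 permissions; the path itself is a placeholder, and the directory is then added in the Administration Portal as a storage domain of type "Local on Host":

mkdir -p /data/images
chown 36:36 /data /data/images
chmod 0755 /data /data/images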