On Wed, Nov 17, 2021 at 03:12 Danilo de Paula <ddepaula(a)redhat.com> wrote:
Since you're consuming the CentOS Stream 8 packages (I assume), and
CentOS Stream 8 is actually the open development tree of the next RHEL
minor release (8.6) [1], it makes a lot of sense to open BZs against
those packages in RHEL 8.6, especially since we won't fix those
problems in CentOS Stream without fixing them in RHEL first.
So, if you believe that this is a problem with the package itself (as
it appears to be), I strongly suggest opening a BZ against those
packages in RHEL.
Didi, can you please open a bug against the RHEL 8 CentOS Stream
version for the qemu-kvm component?
[1] - This is only true for CentOS Stream 8 (which is a copy of RHEL).
CentOS Stream 9 is the other way around.
Sadly no, in my experience it's still fixed in RHEL first and then in
CentOS Stream, at least for systemd on CentOS Stream 9.
I would love to see fixes coming to Stream first.
On Tue, Nov 16, 2021 at 5:59 AM Yedidyah Bar David <didi(a)redhat.com>
wrote:
> On Tue, Nov 16, 2021 at 12:42 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
> >
> > On Tue, Nov 16, 2021 at 12:28 PM Sandro Bonazzola <sbonazzo(a)redhat.com>
> wrote:
> >>
> >> +Eduardo Lima and +Danilo Cesar Lemes de Paula FYI
> >>
> >> On Tue, Nov 16, 2021 at 08:55 Yedidyah Bar David
> >> <didi(a)redhat.com> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> For a few days now we have been seeing failures in CI of the
> >>> he-basic suite.
> >>>
> >>> At one point the failure seemed to be around
> >>> networking/routing/firewalling, but later it changed, and now the
> >>> deploy process fails while trying to start the engine VM for the
> >>> first time after it is copied to the shared storage.
> >>>
> >>> I ran OST he-basic locally with current ost-images, reproduced the
> >>> issue, and managed to "fix" it by enabling
> >>> ovirt-master-centos-stream-advanced-virtualization-testing and
> >>> downgrading qemu-kvm-* from 6.1.0 (from AppStream) to
> >>> 15:6.0.0-33.el8s.
> >>>
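> >>> (For reference, the workaround was roughly the following; this is
> >>> a sketch from memory, so the exact repo id and NVR may differ:
> >>>
> >>>   dnf config-manager --set-enabled \
> >>>       ovirt-master-centos-stream-advanced-virtualization-testing
> >>>   dnf downgrade qemu-kvm-15:6.0.0-33.el8s
> >>>
> >>> with dnf pulling in the matching qemu-kvm-* subpackages.)
> >>>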
> >>> Is this a known issue?
> >>>
> >>> How do we handle this? Perhaps we should make something conflict
> >>> with it somewhere until we find and fix the root cause.
> >>>
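> >>> (A hedged sketch of what that could look like, if we went the RPM
> >>> route; the version boundary and owning package are illustrative:
> >>>
> >>>   Conflicts: qemu-kvm >= 15:6.1.0
> >>>
> >>> in the spec file of one of our packages installed on hosts.)
> >>>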
> >>> Please note that the flow is:
> >>>
> >>> 1. Create a local VM from the appliance image
> >
> >
> > How do you create the vm?
>
> With virt-install:
>
> https://github.com/oVirt/ovirt-ansible-collection/blob/master/roles/hoste...
>
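> (For context, a minimal sketch of that kind of invocation; the flags
> here are illustrative, not the role's exact ones:
>
>   virt-install --name HostedEngineLocal --memory 3171 --vcpus 2 \
>       --disk path=/var/tmp/localvm/engine.qcow2 \
>       --disk path=/var/tmp/localvm/seed.iso,device=cdrom \
>       --import --network network=default --noautoconsole
>
> i.e. importing the appliance disk plus a cloud-init seed ISO.)
>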
> >
> > Are you using libvirt? What is the VM XML used?
> >
> >>>
> >>> 2. Do stuff on this machine
> >>> 3. Shut it down
> >>> 4. Copy its disk to shared storage
> >>> 5. Start the machine from the shared storage
> >
> >
> > Just to be sure - steps 1-4 work, but step 5 fails with qemu 6.1.0?
>
> It seems so, yes.
>
> >
> >>>
> >>>
> >>> And note that (1.) did work with 6.1.0, and also (5.) did work
> >>> with 6.0.0 (so the copying, using qemu-img, did work well); the
> >>> difference is elsewhere.
> >>>
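> >>> (The copy in step 4 is essentially a qemu-img convert from the
> >>> local qcow2 image to the raw volume on the shared storage;
> >>> roughly, with illustrative placeholder paths:
> >>>
> >>>   qemu-img convert -f qcow2 -O raw \
> >>>       /var/tmp/localvm*/images/<image>/<volume> \
> >>>       /run/vdsm/storage/<sd_uuid>/<image>/<volume>
> >>>
> >>> That part worked, and the resulting image booted fine under
> >>> 6.0.0.)
> >>>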
> >>> Following is the diff between the qemu commands of (1.) and (5.) (as
> >>> found in the respective logs). Any clue?
> >>>
> >>> --- localq 2021-11-16 08:48:01.230426260 +0100
> >>> +++ sharedq 2021-11-16 08:48:46.884937598 +0100
> >>> @@ -1,54 +1,79 @@
> >>> -2021-11-14 15:09:56.430+0000: starting up libvirt version: 7.9.0, package: 1.module_el8.6.0+983+a7505f3f (CentOS Buildsys <bugs(a)centos.org>, 2021-11-09-20:38:08, ), qemu version: 6.1.0qemu-kvm-6.1.0-4.module_el8.6.0+983+a7505f3f, kernel: 4.18.0-348.el8.x86_64, hostname: ost-he-basic-suite-master-host-0.lago.local
> >>> +2021-11-14 15:29:10.686+0000: starting up libvirt version: 7.9.0, package: 1.module_el8.6.0+983+a7505f3f (CentOS Buildsys <bugs(a)centos.org>, 2021-11-09-20:38:08, ), qemu version: 6.1.0qemu-kvm-6.1.0-4.module_el8.6.0+983+a7505f3f, kernel: 4.18.0-348.el8.x86_64, hostname: ost-he-basic-suite-master-host-0.lago.local
> >>>  LC_ALL=C \
> >>>  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
> >>> -HOME=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal \
> >>> -XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/.local/share \
> >>> -XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/.cache \
> >>> -XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/.config \
> >>> +HOME=/var/lib/libvirt/qemu/domain-2-HostedEngine \
> >>> +XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-2-HostedEngine/.local/share \
> >>> +XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-2-HostedEngine/.cache \
> >>> +XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-2-HostedEngine/.config \
> >>>  /usr/libexec/qemu-kvm \
> >>> --name guest=HostedEngineLocal,debug-threads=on \
> >>> +-name guest=HostedEngine,debug-threads=on \
> >>>  -S \
> >>> --object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes"}' \
> >>> --machine pc-q35-rhel8.5.0,accel=kvm,usb=off,dump-guest-core=off,memory-backend=pc.ram \
> >>> --cpu Cascadelake-Server,ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvmclock=on \
> >>> --m 3171 \
> >>> --object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":3325034496}' \
> >>> +-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-2-HostedEngine/master-key.aes"}' \
> >>> +-machine pc-q35-rhel8.4.0,accel=kvm,usb=off,dump-guest-core=off,graphics=off \
> >>> +-cpu Cascadelake-Server-noTSX,mpx=off \
> >>> +-m size=3247104k,slots=16,maxmem=12988416k \
> >>>  -overcommit mem-lock=off \
> >>> --smp 2,sockets=2,cores=1,threads=1 \
> >>> --uuid 716b26d9-982b-4c51-ac05-646f28346007 \
> >>> +-smp 2,maxcpus=32,sockets=16,dies=1,cores=2,threads=1 \
> >>> +-object '{"qom-type":"iothread","id":"iothread1"}' \
> >>> +-object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":3325034496}' \
> >>> +-numa node,nodeid=0,cpus=0-31,memdev=ram-node0 \
> >>> +-uuid a10f5518-1fc2-4aae-b7da-5d1d9875e753 \
> >>> +-smbios type=1,manufacturer=oVirt,product=RHEL,version=8.6-1.el8,serial=d2f36f31-bb29-4e1f-b52d-8fddb632953c,uuid=a10f5518-1fc2-4aae-b7da-5d1d9875e753,family=oVirt \
> >>>  -no-user-config \
> >>>  -nodefaults \
> >>>  -chardev socket,id=charmonitor,fd=40,server=on,wait=off \
> >>>  -mon chardev=charmonitor,id=monitor,mode=control \
> >>> --rtc base=utc \
> >>> +-rtc base=2021-11-14T15:29:08,driftfix=slew \
> >>> +-global kvm-pit.lost_tick_policy=delay \
> >>> +-no-hpet \
> >>>  -no-shutdown \
> >>>  -global ICH9-LPC.disable_s3=1 \
> >>>  -global ICH9-LPC.disable_s4=1 \
> >>> --boot menu=off,strict=on \
> >>> +-boot strict=on \
> >>>  -device pcie-root-port,port=16,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
> >>>  -device pcie-root-port,port=17,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
> >>>  -device pcie-root-port,port=18,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
> >>>  -device pcie-root-port,port=19,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
> >>>  -device pcie-root-port,port=20,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
> >>> --device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
> >>> --blockdev '{"driver":"file","filename":"/var/tmp/localvm1hjkqhu2/images/b4985de8-fa7e-4b93-a93c-f348ef17d91e/b1614c86-bf90-44c4-9f5d-fd2b3c509934","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
> >>> --blockdev '{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}' \
> >>> --device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-2-format,id=virtio-disk0,bootindex=1 \
> >>> --blockdev '{"driver":"file","filename":"/var/tmp/localvm1hjkqhu2/seed.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
> >>> --blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
> >>> --device ide-cd,bus=ide.0,drive=libvirt-1-format,id=sata0-0-0 \
> >>> --netdev tap,fd=42,id=hostnet0,vhost=on,vhostfd=43 \
> >>> --device virtio-net-pci,netdev=hostnet0,id=net0,mac=54:52:4d:89:07:dc,bus=pci.1,addr=0x0 \
> >>> --chardev pty,id=charserial0 \
> >>> +-device pcie-root-port,port=21,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
> >>> +-device pcie-root-port,port=22,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
> >>> +-device pcie-root-port,port=23,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x7 \
> >>> +-device pcie-root-port,port=24,chassis=9,id=pci.9,bus=pcie.0,multifunction=on,addr=0x3 \
> >>> +-device pcie-root-port,port=25,chassis=10,id=pci.10,bus=pcie.0,addr=0x3.0x1 \
> >>> +-device pcie-root-port,port=26,chassis=11,id=pci.11,bus=pcie.0,addr=0x3.0x2 \
> >>> +-device pcie-root-port,port=27,chassis=12,id=pci.12,bus=pcie.0,addr=0x3.0x3 \
> >>> +-device pcie-root-port,port=28,chassis=13,id=pci.13,bus=pcie.0,addr=0x3.0x4 \
> >>> +-device pcie-root-port,port=29,chassis=14,id=pci.14,bus=pcie.0,addr=0x3.0x5 \
> >>> +-device pcie-root-port,port=30,chassis=15,id=pci.15,bus=pcie.0,addr=0x3.0x6 \
> >>> +-device pcie-root-port,port=31,chassis=16,id=pci.16,bus=pcie.0,addr=0x3.0x7 \
> >>> +-device pcie-root-port,port=32,chassis=17,id=pci.17,bus=pcie.0,addr=0x4 \
> >>> +-device pcie-pci-bridge,id=pci.18,bus=pci.1,addr=0x0 \
> >>> +-device qemu-xhci,p2=8,p3=8,id=ua-56e0dd42-5016-4a70-b2b6-7e3bfbc4002f,bus=pci.4,addr=0x0 \
> >>> +-device virtio-scsi-pci,iothread=iothread1,id=ua-1ba84ec0-6eb7-4e4c-9e5f-f446e0b2e67c,bus=pci.3,addr=0x0 \
> >>> +-device virtio-serial-pci,id=ua-5d6442e9-af6c-459a-8105-6d0cd90214a6,max_ports=16,bus=pci.5,addr=0x0 \
> >>> +-device ide-cd,bus=ide.2,id=ua-8a1b74dd-0b24-4f88-9df1-81d4cb7f404c,werror=report,rerror=report \
> >>> +-blockdev '{"driver":"host_device","filename":"/run/vdsm/storage/8468bc65-907a-4c95-8f93-4d29fa722f62/5714a85b-8d09-4ba6-a89a-b39f98e664ff/68f4061e-a537-4051-af1e-baaf04929a25","aio":"native","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
> >>> +-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
> >>> +-device virtio-blk-pci,iothread=iothread1,bus=pci.6,addr=0x0,drive=libvirt-1-format,id=ua-5714a85b-8d09-4ba6-a89a-b39f98e664ff,bootindex=1,write-cache=on,serial=5714a85b-8d09-4ba6-a89a-b39f98e664ff,werror=stop,rerror=stop \
> >>> +-netdev tap,fds=44:45,id=hostua-33528c78-5281-4ebd-a5e2-5e8894d6a4aa,vhost=on,vhostfds=46:47 \
> >>> +-device virtio-net-pci,mq=on,vectors=6,host_mtu=1500,netdev=hostua-33528c78-5281-4ebd-a5e2-5e8894d6a4aa,id=ua-33528c78-5281-4ebd-a5e2-5e8894d6a4aa,mac=54:52:4d:89:07:dc,bus=pci.2,addr=0x0 \
> >>> +-chardev socket,id=charserial0,fd=48,server=on,wait=off \
> >>>  -device isa-serial,chardev=charserial0,id=serial0 \
> >>> --chardev socket,id=charchannel0,fd=45,server=on,wait=off \
> >>> --device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
> >>> --audiodev id=audio1,driver=none \
> >>> --vnc 127.0.0.1:0,sasl=on,audiodev=audio1 \
> >>> --device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
> >>> --object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/random"}' \
> >>> --device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.4,addr=0x0 \
> >>> +-chardev socket,id=charchannel0,fd=49,server=on,wait=off \
> >>> +-device virtserialport,bus=ua-5d6442e9-af6c-459a-8105-6d0cd90214a6.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
> >>> +-chardev spicevmc,id=charchannel1,name=vdagent \
> >>> +-device virtserialport,bus=ua-5d6442e9-af6c-459a-8105-6d0cd90214a6.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 \
> >>> +-chardev socket,id=charchannel2,fd=50,server=on,wait=off \
> >>> +-device virtserialport,bus=ua-5d6442e9-af6c-459a-8105-6d0cd90214a6.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 \
> >>> +-audiodev id=audio1,driver=spice \
> >>> +-spice port=5900,tls-port=5901,addr=192.168.200.3,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on \
> >>> +-device qxl-vga,id=ua-60b147e1-322a-4f49-bb16-0e7a76732396,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
> >>> +-device intel-hda,id=ua-fe1a3722-6c25-4719-9d0b-baeeb5d74a3e,bus=pci.18,addr=0x1 \
> >>> +-device hda-duplex,id=ua-fe1a3722-6c25-4719-9d0b-baeeb5d74a3e-codec0,bus=ua-fe1a3722-6c25-4719-9d0b-baeeb5d74a3e.0,cad=0,audiodev=audio1 \
> >>> +-device virtio-balloon-pci,id=ua-bd2a17f1-4d39-4d4d-8793-089081a2065c,bus=pci.7,addr=0x0 \
> >>> +-object '{"qom-type":"rng-random","id":"objua-7e3d85f3-15da-4f97-9434-90396750e2b2","filename":"/dev/urandom"}' \
> >>> +-device virtio-rng-pci,rng=objua-7e3d85f3-15da-4f97-9434-90396750e2b2,id=ua-7e3d85f3-15da-4f97-9434-90396750e2b2,bus=pci.8,addr=0x0 \
> >>> +-device vmcoreinfo \
> >>>  -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
> >>>  -msg timestamp=on
> >>> -char device redirected to /dev/pts/1 (label charserial0)
> >>> -2021-11-14 15:18:02.749+0000: Domain id=1 is tainted: custom-ga-command
> >>> +2021-11-14T15:44:46.647989Z qemu-kvm: terminating on signal 15 from pid 21473 (<unknown process>)
> >
> >
> > It looks like qemu did start but was terminated by some unknown process.
>
> Yes, I didn't say qemu didn't start - it started, but I didn't get
> anything (not even BIOS messages) on the console.
> In the above snippet, you can see that it was killed after 15 minutes.
> IIRC it was me, manually (with 'hosted-engine --vm-poweroff').
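>
> (For reference, the VM console can be watched during boot with e.g.
> 'hosted-engine --console', or 'virsh console HostedEngine' on the
> host; that's where I expected to see the BIOS output.)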
> --
> Didi
>
>
--
Danilo de Paula
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*