Another CI problem: git not working
by Milan Zamazal
Hi,
another problem that has been observed several times in CI runs on
gerrit over the last few days is:
ERROR: Error cloning remote repo 'origin'
[2021-11-18T16:43:38.688Z] hudson.plugins.git.GitException: Command "git fetch --tags --progress https://gerrit.ovirt.org/jenkins +refs/heads/*:refs/remotes/origin/*" returned status code 128:
[2021-11-18T16:43:38.688Z] stdout:
[2021-11-18T16:43:38.688Z] stderr: error: RPC failed; result=6, HTTP code = 0
[2021-11-18T16:43:38.688Z] fatal: The remote end hung up unexpectedly
See https://jenkins.ovirt.org/job/vdsm_standard-check-patch/30774/ for
an example.
Would it be possible to get it fixed?
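In case it helps with triage: "result=6" is curl's error 6 (couldn't
resolve host) and "HTTP code = 0" means no HTTP response came back at
all, so this looks like transient DNS trouble on the Jenkins side
rather than a problem in gerrit itself. A retry along these lines
(purely illustrative, not a claim about how the jobs are configured
today) would probably mask it, though fixing the resolver would of
course be better:

  # illustrative retry wrapper around the failing fetch, with a growing backoff
  for attempt in 1 2 3; do
      git fetch --tags --progress https://gerrit.ovirt.org/jenkins \
          '+refs/heads/*:refs/remotes/origin/*' && break
      sleep $((attempt * 30))
  done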
Thanks,
Milan
qemu-kvm 6.1.0 breaks hosted-engine
by Yedidyah Bar David
Hi all,
For a few days now we have been seeing failures in the CI of the
he-basic suite.
At one point the failure seemed to be around
networking/routing/firewalling, but later it changed, and now the
deploy process fails while trying to start the engine VM for the
first time after it's copied to the shared storage.
I ran OST he-basic locally with the current ost-images, reproduced
the issue, and managed to "fix" it by enabling
ovirt-master-centos-stream-advanced-virtualization-testing and
downgrading qemu-kvm-* from 6.1.0 (from AppStream) to
15:6.0.0-33.el8s.
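For reference, something along these lines should reproduce the
workaround on a host (the exact dnf invocations are a sketch, not
copied from my shell history; the repo id is the one named above):

  # enable the advanced-virtualization testing repo and go back to its 6.0.0 build
  dnf config-manager --set-enabled ovirt-master-centos-stream-advanced-virtualization-testing
  dnf downgrade 'qemu-kvm*'   # with that repo enabled this picks up 15:6.0.0-33.el8s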
Is this a known issue?
How do we handle this? Perhaps we should add a package conflict with it
somewhere until we find and fix the root cause.
Please note that the flow is:
1. Create a local VM from the appliance image
2. Do stuff on this machine
3. Shut it down
4. Copy its disk to shared storage
5. Start the machine from the shared storage
Note that (1.) did work with 6.1.0, and (5.) did work with 6.0.0 (so
the copying, done with qemu-img, worked fine), so the difference is
elsewhere.
Following is the diff between the qemu commands of (1.) and (5.) (as
found in the respective logs). Any clue?
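(For reference, the command lines are taken from the libvirt domain
logs on host-0; something like this reproduces the comparison, with the
exact log paths being an assumption:)

  # libvirt logs the full qemu invocation per domain; the domain names
  # match the -name arguments visible in the diff below
  cp /var/log/libvirt/qemu/HostedEngineLocal.log localq   # step (1.), local VM
  cp /var/log/libvirt/qemu/HostedEngine.log sharedq       # step (5.), from shared storage
  diff -u localq sharedq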
--- localq 2021-11-16 08:48:01.230426260 +0100
+++ sharedq 2021-11-16 08:48:46.884937598 +0100
@@ -1,54 +1,79 @@
-2021-11-14 15:09:56.430+0000: starting up libvirt version: 7.9.0,
package: 1.module_el8.6.0+983+a7505f3f (CentOS Buildsys
<bugs@centos.org>, 2021-11-09-20:38:08, ), qemu version:
6.1.0qemu-kvm-6.1.0-4.module_el8.6.0+983+a7505f3f, kernel:
4.18.0-348.el8.x86_64, hostname:
ost-he-basic-suite-master-host-0.lago.local
+2021-11-14 15:29:10.686+0000: starting up libvirt version: 7.9.0,
package: 1.module_el8.6.0+983+a7505f3f (CentOS Buildsys
<bugs@centos.org>, 2021-11-09-20:38:08, ), qemu version:
6.1.0qemu-kvm-6.1.0-4.module_el8.6.0+983+a7505f3f, kernel:
4.18.0-348.el8.x86_64, hostname:
ost-he-basic-suite-master-host-0.lago.local
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
-HOME=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal \
-XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/.local/share \
-XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/.cache \
-XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/.config \
+HOME=/var/lib/libvirt/qemu/domain-2-HostedEngine \
+XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-2-HostedEngine/.local/share \
+XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-2-HostedEngine/.cache \
+XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-2-HostedEngine/.config \
/usr/libexec/qemu-kvm \
--name guest=HostedEngineLocal,debug-threads=on \
+-name guest=HostedEngine,debug-threads=on \
-S \
--object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes"}'
\
--machine pc-q35-rhel8.5.0,accel=kvm,usb=off,dump-guest-core=off,memory-backend=pc.ram
\
--cpu Cascadelake-Server,ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,umip=on,pku=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,kvmclock=on
\
--m 3171 \
--object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":3325034496}' \
+-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-2-HostedEngine/master-key.aes"}'
\
+-machine pc-q35-rhel8.4.0,accel=kvm,usb=off,dump-guest-core=off,graphics=off \
+-cpu Cascadelake-Server-noTSX,mpx=off \
+-m size=3247104k,slots=16,maxmem=12988416k \
-overcommit mem-lock=off \
--smp 2,sockets=2,cores=1,threads=1 \
--uuid 716b26d9-982b-4c51-ac05-646f28346007 \
+-smp 2,maxcpus=32,sockets=16,dies=1,cores=2,threads=1 \
+-object '{"qom-type":"iothread","id":"iothread1"}' \
+-object '{"qom-type":"memory-backend-ram","id":"ram-node0","size":3325034496}'
\
+-numa node,nodeid=0,cpus=0-31,memdev=ram-node0 \
+-uuid a10f5518-1fc2-4aae-b7da-5d1d9875e753 \
+-smbios type=1,manufacturer=oVirt,product=RHEL,version=8.6-1.el8,serial=d2f36f31-bb29-4e1f-b52d-8fddb632953c,uuid=a10f5518-1fc2-4aae-b7da-5d1d9875e753,family=oVirt
\
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=40,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
--rtc base=utc \
+-rtc base=2021-11-14T15:29:08,driftfix=slew \
+-global kvm-pit.lost_tick_policy=delay \
+-no-hpet \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
--boot menu=off,strict=on \
+-boot strict=on \
-device pcie-root-port,port=16,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2
\
-device pcie-root-port,port=17,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=18,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=19,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=20,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
--device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
--blockdev '{"driver":"file","filename":"/var/tmp/localvm1hjkqhu2/images/b4985de8-fa7e-4b93-a93c-f348ef17d91e/b1614c86-bf90-44c4-9f5d-fd2b3c509934","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}'
\
--blockdev '{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}'
\
--device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-2-format,id=virtio-disk0,bootindex=1
\
--blockdev '{"driver":"file","filename":"/var/tmp/localvm1hjkqhu2/seed.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}'
\
--blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}'
\
--device ide-cd,bus=ide.0,drive=libvirt-1-format,id=sata0-0-0 \
--netdev tap,fd=42,id=hostnet0,vhost=on,vhostfd=43 \
--device virtio-net-pci,netdev=hostnet0,id=net0,mac=54:52:4d:89:07:dc,bus=pci.1,addr=0x0
\
--chardev pty,id=charserial0 \
+-device pcie-root-port,port=21,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
+-device pcie-root-port,port=22,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
+-device pcie-root-port,port=23,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x7 \
+-device pcie-root-port,port=24,chassis=9,id=pci.9,bus=pcie.0,multifunction=on,addr=0x3
\
+-device pcie-root-port,port=25,chassis=10,id=pci.10,bus=pcie.0,addr=0x3.0x1 \
+-device pcie-root-port,port=26,chassis=11,id=pci.11,bus=pcie.0,addr=0x3.0x2 \
+-device pcie-root-port,port=27,chassis=12,id=pci.12,bus=pcie.0,addr=0x3.0x3 \
+-device pcie-root-port,port=28,chassis=13,id=pci.13,bus=pcie.0,addr=0x3.0x4 \
+-device pcie-root-port,port=29,chassis=14,id=pci.14,bus=pcie.0,addr=0x3.0x5 \
+-device pcie-root-port,port=30,chassis=15,id=pci.15,bus=pcie.0,addr=0x3.0x6 \
+-device pcie-root-port,port=31,chassis=16,id=pci.16,bus=pcie.0,addr=0x3.0x7 \
+-device pcie-root-port,port=32,chassis=17,id=pci.17,bus=pcie.0,addr=0x4 \
+-device pcie-pci-bridge,id=pci.18,bus=pci.1,addr=0x0 \
+-device qemu-xhci,p2=8,p3=8,id=ua-56e0dd42-5016-4a70-b2b6-7e3bfbc4002f,bus=pci.4,addr=0x0
\
+-device virtio-scsi-pci,iothread=iothread1,id=ua-1ba84ec0-6eb7-4e4c-9e5f-f446e0b2e67c,bus=pci.3,addr=0x0
\
+-device virtio-serial-pci,id=ua-5d6442e9-af6c-459a-8105-6d0cd90214a6,max_ports=16,bus=pci.5,addr=0x0
\
+-device ide-cd,bus=ide.2,id=ua-8a1b74dd-0b24-4f88-9df1-81d4cb7f404c,werror=report,rerror=report
\
+-blockdev '{"driver":"host_device","filename":"/run/vdsm/storage/8468bc65-907a-4c95-8f93-4d29fa722f62/5714a85b-8d09-4ba6-a89a-b39f98e664ff/68f4061e-a537-4051-af1e-baaf04929a25","aio":"native","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}'
\
+-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}'
\
+-device virtio-blk-pci,iothread=iothread1,bus=pci.6,addr=0x0,drive=libvirt-1-format,id=ua-5714a85b-8d09-4ba6-a89a-b39f98e664ff,bootindex=1,write-cache=on,serial=5714a85b-8d09-4ba6-a89a-b39f98e664ff,werror=stop,rerror=stop
\
+-netdev tap,fds=44:45,id=hostua-33528c78-5281-4ebd-a5e2-5e8894d6a4aa,vhost=on,vhostfds=46:47
\
+-device virtio-net-pci,mq=on,vectors=6,host_mtu=1500,netdev=hostua-33528c78-5281-4ebd-a5e2-5e8894d6a4aa,id=ua-33528c78-5281-4ebd-a5e2-5e8894d6a4aa,mac=54:52:4d:89:07:dc,bus=pci.2,addr=0x0
\
+-chardev socket,id=charserial0,fd=48,server=on,wait=off \
-device isa-serial,chardev=charserial0,id=serial0 \
--chardev socket,id=charchannel0,fd=45,server=on,wait=off \
--device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
\
--audiodev id=audio1,driver=none \
--vnc 127.0.0.1:0,sasl=on,audiodev=audio1 \
--device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
--object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/random"}' \
--device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.4,addr=0x0 \
+-chardev socket,id=charchannel0,fd=49,server=on,wait=off \
+-device virtserialport,bus=ua-5d6442e9-af6c-459a-8105-6d0cd90214a6.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
\
+-chardev spicevmc,id=charchannel1,name=vdagent \
+-device virtserialport,bus=ua-5d6442e9-af6c-459a-8105-6d0cd90214a6.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0
\
+-chardev socket,id=charchannel2,fd=50,server=on,wait=off \
+-device virtserialport,bus=ua-5d6442e9-af6c-459a-8105-6d0cd90214a6.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0
\
+-audiodev id=audio1,driver=spice \
+-spice port=5900,tls-port=5901,addr=192.168.200.3,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
\
+-device qxl-vga,id=ua-60b147e1-322a-4f49-bb16-0e7a76732396,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1
\
+-device intel-hda,id=ua-fe1a3722-6c25-4719-9d0b-baeeb5d74a3e,bus=pci.18,addr=0x1
\
+-device hda-duplex,id=ua-fe1a3722-6c25-4719-9d0b-baeeb5d74a3e-codec0,bus=ua-fe1a3722-6c25-4719-9d0b-baeeb5d74a3e.0,cad=0,audiodev=audio1
\
+-device virtio-balloon-pci,id=ua-bd2a17f1-4d39-4d4d-8793-089081a2065c,bus=pci.7,addr=0x0
\
+-object '{"qom-type":"rng-random","id":"objua-7e3d85f3-15da-4f97-9434-90396750e2b2","filename":"/dev/urandom"}'
\
+-device virtio-rng-pci,rng=objua-7e3d85f3-15da-4f97-9434-90396750e2b2,id=ua-7e3d85f3-15da-4f97-9434-90396750e2b2,bus=pci.8,addr=0x0
\
+-device vmcoreinfo \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
\
-msg timestamp=on
-char device redirected to /dev/pts/1 (label charserial0)
-2021-11-14 15:18:02.749+0000: Domain id=1 is tainted: custom-ga-command
+2021-11-14T15:44:46.647989Z qemu-kvm: terminating on signal 15 from
pid 21473 (<unknown process>)
Best regards,
--
Didi
suspend resume test broken
by Michal Skrivanek
Hi all,
suspend/resume is currently broken on master/el8stream. Can anyone please take a look, find out the reason, and fix it?
Thanks,
michal
OST: Vdsm: Occasional failures when stopping vdsmd
by Milan Zamazal
Hi,
Michal has observed occasional OST failures in test_vdsm_recovery over
the last few days, which hadn't been seen before. When `systemctl stop
vdsmd' is called there (via Ansible), vdsmd (almost?) never finishes its
shutdown within the 10 second timeout and then gets killed with SIGKILL.
If this action is accompanied by a "Job for vdsmd.service canceled."
message, then the test fails; otherwise OST continues normally.
The situation is reproducible by running OST basic-suite-master,
making it fail artificially after test_vdsm_recovery, and then running
`systemctl stop vdsmd' manually on the given OST host (this can be done
repeatedly, so it provides a good opportunity to examine the problem).
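That is, roughly (a sketch of the manual loop, restarting vdsmd in
between so it can be stopped again):

  # on the OST host, once the run has stopped after test_vdsm_recovery
  time systemctl stop vdsmd        # watch for "Job for vdsmd.service canceled."
  journalctl -u vdsmd -n 30 --no-pager
  systemctl start vdsmd            # bring vdsmd back up for the next attempt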
There are two problems there:
- The "Job for vdsmd.service canceled." message that sometimes occurs
after `systemctl stop vdsmd', in which case the test fails. I don't
know what it means and I can't identify any difference in the journal
between the runs where the message occurs and those where it doesn't.
- The fact that Vdsm doesn't stop within the timeout and must be killed.
This doesn't happen in my normal oVirt installation. It apparently
blocks in the self.irs.prepareForShutdown() call from clientIF.py.
Journal says:
systemd[1]: Stopping Virtual Desktop Server Manager...
systemd[1]: vdsmd.service: State 'stop-sigterm' timed out. Killing.
systemd[1]: vdsmd.service: Killing process 132608 (vdsmd) with signal SIGKILL.
systemd[1]: vdsmd.service: Killing process 133445 (ioprocess) with signal SIGKILL.
systemd[1]: vdsmd.service: Killing process 133446 (ioprocess) with signal SIGKILL.
systemd[1]: vdsmd.service: Killing process 133447 (ioprocess) with signal SIGKILL.
systemd[1]: vdsmd.service: Main process exited, code=killed, status=9/KILL
systemd[1]: vdsmd.service: Failed with result 'timeout'.
systemd[1]: Stopped Virtual Desktop Server Manager.
And vdsm.log (from a different run, sorry):
2021-11-12 07:09:30,274+0000 INFO (MainThread) [vdsm.api] START prepareForShutdown() from=internal, task_id=21b12bbd-1d61-4217-b92d-641a53d5f7bb (api:48)
2021-11-12 07:09:30,317+0000 DEBUG (vmchannels) [vds] VM channels listener thread has ended. (vmchannels:214)
2021-11-12 07:09:30,317+0000 DEBUG (vmchannels) [root] FINISH thread <Thread(vmchannels, stopped daemon 140163095193344)> (concurrent:261)
2021-11-12 07:09:30,516+0000 DEBUG (mailbox-hsm/4) [root] FINISH thread <Thread(mailbox-hsm/4, stopped daemon 140162799089408)> (concurrent:261)
2021-11-12 07:09:30,517+0000 INFO (ioprocess/143197) [IOProcess] (3a729aa1-8e14-4ea0-8794-e3d67fbde542) Starting ioprocess (__init__:465)
2021-11-12 07:09:30,521+0000 INFO (ioprocess/143199) [IOProcess] (ost-he-basic-suite-master-storage:_exports_nfs_share1) Starting ioprocess (__init__:465)
2021-11-12 07:09:30,535+0000 INFO (ioprocess/143193) [IOProcess] (ost-he-basic-suite-master-storage:_exports_nfs_share2) Starting ioprocess (__init__:465)
2021-11-12 07:09:30,679+0000 INFO (ioprocess/143195) [IOProcess] (0187cf2f-2344-48de-a2a0-dd007315399f) Starting ioprocess (__init__:465)
2021-11-12 07:09:30,719+0000 INFO (ioprocess/143192) [IOProcess] (15fa3d6c-671b-46ef-af9a-00337011fa26) Starting ioprocess (__init__:465)
2021-11-12 07:09:30,756+0000 INFO (ioprocess/143194) [IOProcess] (ost-he-basic-suite-master-storage:_exports_nfs_exported) Starting ioprocess (__init__:465)
2021-11-12 07:09:30,768+0000 INFO (ioprocess/143198) [IOProcess] (ost-he-basic-suite-master-storage:_exports_nfs__he) Starting ioprocess (__init__:465)
2021-11-12 07:09:30,774+0000 INFO (ioprocess/143196) [IOProcess] (a8bab4ef-2952-4c42-ba44-dbb3e1b8c87c) Starting ioprocess (__init__:465)
2021-11-12 07:09:30,957+0000 DEBUG (mailbox-hsm/2) [root] FINISH thread <Thread(mailbox-hsm/2, stopped daemon 140162815874816)> (concurrent:261)
2021-11-12 07:09:31,629+0000 INFO (mailbox-hsm) [storage.mailbox.hsmmailmonitor] HSM_MailboxMonitor - Incoming mail monitoring thread stopped, clearing outgoing mail (mailbox:500)
2021-11-12 07:09:31,629+0000 INFO (mailbox-hsm) [storage.mailbox.hsmmailmonitor] HSM_MailMonitor sending mail to SPM - ['/usr/bin/dd', 'of=/rhev/data-center/f54c6052-437f-11ec-9094-54527d140533/mastersd/dom_md/inbox', 'iflag=fullblock', 'oflag=direct', 'conv=notrunc', 'bs=4096', 'count=1', 'seek=2'] (mailbox:382)
2021-11-12 07:09:32,610+0000 DEBUG (mailbox-hsm/1) [root] FINISH thread <Thread(mailbox-hsm/1, stopped daemon 140162841052928)> (concurrent:261)
2021-11-12 07:09:32,792+0000 DEBUG (mailbox-hsm/3) [root] FINISH thread <Thread(mailbox-hsm/3, stopped daemon 140162807482112)> (concurrent:261)
2021-11-12 07:09:32,818+0000 DEBUG (mailbox-hsm/0) [root] FINISH thread <Thread(mailbox-hsm/0, stopped daemon 140162824267520)> (concurrent:261)
2021-11-12 07:09:32,820+0000 INFO (MainThread) [storage.monitor] Shutting down domain monitors (monitor:243)
2021-11-12 07:09:32,820+0000 INFO (MainThread) [storage.monitor] Stop monitoring 15fa3d6c-671b-46ef-af9a-00337011fa26 (shutdown=True) (monitor:268)
2021-11-12 07:09:32,820+0000 INFO (MainThread) [storage.monitor] Stop monitoring a8bab4ef-2952-4c42-ba44-dbb3e1b8c87c (shutdown=True) (monitor:268)
2021-11-12 07:09:32,820+0000 INFO (MainThread) [storage.monitor] Stop monitoring 3a729aa1-8e14-4ea0-8794-e3d67fbde542 (shutdown=True) (monitor:268)
2021-11-12 07:09:32,820+0000 INFO (MainThread) [storage.monitor] Stop monitoring 0187cf2f-2344-48de-a2a0-dd007315399f (shutdown=True) (monitor:268)
2021-11-12 07:09:32,820+0000 INFO (MainThread) [storage.monitor] Stop monitoring 186180cb-5cc5-4aa4-868a-9e1ed7965ddf (shutdown=True) (monitor:268)
2021-11-12 07:09:32,831+0000 INFO (monitor/0187cf2) [storage.check] Stop checking '/rhev/data-center/mnt/ost-he-basic-suite-master-storage:_exports_nfs_exported/0187cf2f-2344-48de-a2a0-dd007315399f/dom_md/metadata' (check:135)
2021-11-12 07:09:32,838+0000 INFO (monitor/a8bab4e) [storage.check] Stop checking '/rhev/data-center/mnt/ost-he-basic-suite-master-storage:_exports_nfs_share1/a8bab4ef-2952-4c42-ba44-dbb3e1b8c87c/dom_md/metadata' (check:135)
2021-11-12 07:09:32,844+0000 INFO (monitor/15fa3d6) [storage.check] Stop checking '/rhev/data-center/mnt/ost-he-basic-suite-master-storage:_exports_nfs__he/15fa3d6c-671b-46ef-af9a-00337011fa26/dom_md/metadata' (check:135)
2021-11-12 07:09:32,864+0000 INFO (monitor/3a729aa) [storage.check] Stop checking '/rhev/data-center/mnt/ost-he-basic-suite-master-storage:_exports_nfs_share2/3a729aa1-8e14-4ea0-8794-e3d67fbde542/dom_md/metadata' (check:135)
2021-11-12 07:09:32,865+0000 INFO (monitor/186180c) [storage.check] Stop checking '/dev/186180cb-5cc5-4aa4-868a-9e1ed7965ddf/metadata' (check:135)
2021-11-12 07:09:32,867+0000 INFO (monitor/186180c) [storage.blocksd] Tearing down domain 186180cb-5cc5-4aa4-868a-9e1ed7965ddf (blockSD:996)
2021-11-12 07:09:32,867+0000 DEBUG (monitor/186180c) [common.commands] /usr/bin/taskset --cpu-list 0-1 /usr/bin/sudo -n /sbin/lvm vgchange --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/36001405423a0b45cef54e6e9fd0c7df4$|^/dev/mapper/36001405e9302e3f55e74aedbf352b79f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 use_lvmpolld=1 } backup { retain_min=50 retain_days=0 }' --available n 186180cb-5cc5-4aa4-868a-9e1ed7965ddf (cwd None) (commands:154)
2021-11-12 07:09:38,063+0000 DEBUG (monitor/186180c) [common.commands] FAILED: <err> = b' Logical volume 186180cb-5cc5-4aa4-868a-9e1ed7965ddf/ids in use.\n Can\'t deactivate volume group "186180cb-5cc5-4aa4-868a-9e1ed7965ddf" with 1 open logical volume(s)\n'; <rc> = 5 (commands:186)
2021-11-12 07:09:38,066+0000 WARN (monitor/186180c) [storage.lvm] Command with specific filter failed or returned no data, retrying with a wider filter: LVM command failed: 'cmd=[\'/sbin/lvm\', \'vgchange\', \'--config\', \'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/36001405423a0b45cef54e6e9fd0c7df4$|^/dev/mapper/36001405e9302e3f55e74aedbf352b79f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 use_lvmpolld=1 } backup { retain_min=50 retain_days=0 }\', \'--available\', \'n\', \'186180cb-5cc5-4aa4-868a-9e1ed7965ddf\'] rc=5 out=[] err=[\' Logical volume 186180cb-5cc5-4aa4-868a-9e1ed7965ddf/ids in use.\', \' Can\\\'t deactivate volume group "186180cb-5cc5-4aa4-868a-9e1ed7965ddf" with 1 open logical volume(s)\']' (lvm:532)
2021-11-12 07:09:38,067+0000 DEBUG (monitor/186180c) [common.commands] /usr/bin/taskset --cpu-list 0-1 /usr/bin/sudo -n /sbin/lvm vgchange --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/36001405423a0b45cef54e6e9fd0c7df4$|^/dev/mapper/36001405d40a7e761def4ed09aac63282$|^/dev/mapper/36001405e9302e3f55e74aedbf352b79f$|^/dev/mapper/36001405ece10ae48e99470295edda1bb$|^/dev/mapper/36001405f269e8e9efb34e5f9170b349b$|^/dev/mapper/36001405f8c5fe6c239e46a0976b842df$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 use_lvmpolld=1 } backup { retain_min=50 retain_days=0 }' --available n 186180cb-5cc5-4aa4-868a-9e1ed7965ddf (cwd None) (commands:154)
What is interesting here is that ioprocess instances are being started
rather than killed, as they would be on a clean vdsmd shutdown.
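Two probes that might narrow it down (a sketch; py-spy being available
on the host is an assumption, and my guess for what holds the ids LV
open is sanlock, which uses that LV for its lockspace):

  # where are the vdsm threads stuck during the hanging shutdown?
  py-spy dump --pid "$(pidof -x vdsmd)"
  # what keeps the ids LV open, making the vgchange deactivation fail?
  lsof "$(readlink -f /dev/186180cb-5cc5-4aa4-868a-9e1ed7965ddf/ids)"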
Any ideas?
Thanks,
Milan