Cannot delete PVC attached to pod using ovirt-csi in Kubernetes

Hi all,

I deployed ovirt-csi on Kubernetes by applying the YAML manifests manually, using the latest version of the container image (https://github.com/openshift/ovirt-csi-driver-operator/tree/master/assets).

After successfully creating a PVC and a pod, I tried to delete them. The pod is deleted, but the PVC is not. This is because deleting the pod does not unmap /dev/rbd0, which is attached to the oVirt VM. How can I delete the PVC successfully?

The oVirt engine version is 4.4.7.6-1.el8. Here is the engine log from deleting the pod:

2021-08-20 17:40:35,385+09 INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-149) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2021-08-20 17:40:35,403+09 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-149) [68ee3182] Running command: CreateUserSessionCommand internal: false.
2021-08-20 17:40:35,517+09 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-149) [68ee3182] EVENT_ID: USER_VDC_LOGIN(30), User admin@internal-authz connecting from '192.168.7.169' using session 'XfDgNkmAGnPiZahK5itLhHQTCNHZ3JwXMMzOiZrYL3C32+1TTys3xcjrAmCIKPu02hgN1sdVpfZXWd0FznaPCQ==' logged in.
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,663+09 INFO [org.ovirt.engine.core.bll.storage.disk.DetachDiskFromVmCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] Running command: DetachDiskFromVmCommand internal: false.
Entities affected : ID: 59a7461c-72fe-4e01-86a7-c70243f31596 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER 2021-08-20 17:40:35,664+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] START, HotUnPlugDiskVDSCommand(HostName = host, HotPlugDiskVDSParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vmId='59a7461c-72fe-4e01-86a7-c70243f31596', diskId='63a64445-1659-4d5f-8847-e7266e64b09e'}), log id: 506ff4a4 2021-08-20 17:40:35,678+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] Disk hot-unplug: <?xml version="1.0" encoding="UTF-8"?><hotunplug> <devices> <disk> <alias name="ua-63a64445-1659-4d5f-8847-e7266e64b09e"/> </disk> </devices> </hotunplug> 2021-08-20 17:40:35,749+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] FINISH, HotUnPlugDiskVDSCommand, return: , log id: 506ff4a4 2021-08-20 17:40:35,842+09 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] EVENT_ID: USER_DETACH_DISK_FROM_VM(2,018), Disk pvc-9845a0ff-e94c-497c-8c65-fc6a1e26db20 was successfully detached from VM centos by admin@internal-authz. 2021-08-20 17:40:35,916+09 ERROR [org.ovirt.engine.core.sso.service.SsoService] (default task-150) [] OAuthException invalid_grant: The provided authorization grant for the auth code has expired. 2021-08-20 17:40:35,917+09 ERROR [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default task-149) [] Cannot authenticate using authentication Headers: invalid_grant: The provided authorization grant for the auth code has expired. 2021-08-20 17:40:36,029+09 INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-149) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2021-08-20 17:40:36,046+09 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-149) [4c4bf441] Running command: CreateUserSessionCommand internal: false. 
2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null 2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null 2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null 2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null 2021-08-20 17:40:49,241+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-27) [] START, DumpXmlsVDSCommand(HostName = host, Params:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vmIds='[59a7461c-72fe-4e01-86a7-c70243f31596]'}), log id: 7eb54202 2021-08-20 17:40:49,244+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-27) [] FINISH, DumpXmlsVDSCommand, return: {59a7461c-72fe-4e01-86a7-c70243f31596=<domain type='kvm' id='1' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>centos</name> <uuid>59a7461c-72fe-4e01-86a7-c70243f31596</uuid> <metadata xmlns:ns1="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ns1:qos/> <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ovirt-vm:balloonTarget type="int">4194304</ovirt-vm:balloonTarget> <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled> <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion> <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize> <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior> <ovirt-vm:startTime type="float">1628558564.8754532</ovirt-vm:startTime> <ovirt-vm:device alias="ua-7c9f38e9-8889-46c8-83bb-92efb9272de9" mac_address="56:6f:16:a8:00:07"> <ovirt-vm:network>ovirtmgmt</ovirt-vm:network> <ovirt-vm:custom> <ovirt-vm:queues>2</ovirt-vm:queues> </ovirt-vm:custom> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID> <ovirt-vm:guestName>/dev/sda</ovirt-vm:guestName> <ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID> <ovirt-vm:managed type="bool">False</ovirt-vm:managed> <ovirt-vm:poolID>4ca6e0e8-e3a4-11eb-8830-480fcf63834f</ovirt-vm:poolID> <ovirt-vm:volumeID>a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:volumeID> <ovirt-vm:volumeChain> <ovirt-vm:volumeChainNode> <ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID> <ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID> <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset> <ovirt-vm:leasePath>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003.lease</ovirt-vm:leasePath> 
<ovirt-vm:path>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003</ovirt-vm:path> <ovirt-vm:volumeID>4425404b-17f0-4519-812f-b09a952a9003</ovirt-vm:volumeID> </ovirt-vm:volumeChainNode> <ovirt-vm:volumeChainNode> <ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID> <ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID> <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset> <ovirt-vm:leasePath>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4.lease</ovirt-vm:leasePath> <ovirt-vm:path>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:path> <ovirt-vm:volumeID>a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:volumeID> </ovirt-vm:volumeChainNode> </ovirt-vm:volumeChain> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sdc"> <ovirt-vm:managed type="bool">False</ovirt-vm:managed> </ovirt-vm:device> </ovirt-vm:vm> </metadata> <maxMemory slots='16' unit='KiB'>16777216</maxMemory> <memory unit='KiB'>4194304</memory> <currentMemory unit='KiB'>4194304</currentMemory> <vcpu placement='static' current='2'>16</vcpu> <resource> <partition>/machine</partition> </resource> <sysinfo type='smbios'> <system> <entry name='manufacturer'>oVirt</entry> <entry name='product'>RHEL</entry> <entry name='version'>8.4-1.2105.el8</entry> <entry name='serial'>83e66af8-0500-11e6-9c43-bc00007c0000</entry> <entry name='uuid'>59a7461c-72fe-4e01-86a7-c70243f31596</entry> <entry name='family'>oVirt</entry> </system> </sysinfo> <os> <type arch='x86_64' machine='pc-q35-rhel8.4.0'>hvm</type> <boot dev='hd'/> <smbios mode='sysinfo'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact' check='full'> <model fallback='forbid'>Skylake-Client-noTSX-IBRS</model> <topology sockets='16' dies='1' cores='1' threads='1'/> <feature policy='require' name='ssbd'/> <feature policy='require' name='md-clear'/> <feature policy='disable' name='mpx'/> <feature policy='require' name='hypervisor'/> <numa> <cell id='0' cpus='0-15' memory='4194304' unit='KiB'/> </numa> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver name='qemu' error_policy='report'/> <source startupPolicy='optional'/> <target dev='sdc' bus='sata'/> <readonly/> <alias name='ua-df0ac774-3623-4868-8bd3-45c8f2aa3dc4'/> <address type='drive' controller='0' bus='0' target='0' unit='2'/> </disk> <disk type='file' device='disk' snapshot='no'> <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads'/> <source file='/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4' index='8'> <seclabel model='dac' relabel='no'/> </source> <backingStore type='file' index='1'> <format type='qcow2'/> <source 
file='/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003'> <seclabel model='dac' relabel='no'/> </source> <backingStore/> </backingStore> <target dev='sda' bus='scsi'/> <serial>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</serial> <alias name='ua-bee44276-234f-4ed7-8a8a-d90a5e3cb5b3'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x14'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0x15'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0x16'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/> </controller> <controller type='pci' index='8' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='8' port='0x17'/> <alias name='pci.8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/> </controller> <controller type='pci' index='9' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='9' port='0x18'/> <alias name='pci.9'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='10' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='10' port='0x19'/> <alias name='pci.10'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/> </controller> <controller type='pci' index='11' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='11' port='0x1a'/> <alias name='pci.11'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/> </controller> <controller type='pci' index='12' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='12' port='0x1b'/> <alias name='pci.12'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/> </controller> <controller type='pci' index='13' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='13' port='0x1c'/> <alias name='pci.13'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/> 
</controller> <controller type='pci' index='14' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='14' port='0x1d'/> <alias name='pci.14'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/> </controller> <controller type='pci' index='15' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='15' port='0x1e'/> <alias name='pci.15'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/> </controller> <controller type='pci' index='16' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='16' port='0x1f'/> <alias name='pci.16'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='virtio-serial' index='0' ports='16'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </controller> <controller type='scsi' index='0' model='virtio-scsi'> <alias name='ua-82c49f93-c4e8-460b-bb7d-95db0e9d87a0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='usb' index='0' model='qemu-xhci' ports='8'> <alias name='ua-ad56daea-edb1-45c7-a1ab-2a7db3aaeee2'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </controller> <interface type='bridge'> <mac address='56:6f:16:a8:00:07'/> <source bridge='ovirtmgmt'/> <target dev='vnet0'/> <model type='virtio'/> <driver name='vhost' queues='2'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <mtu size='1500'/> <alias name='ua-7c9f38e9-8889-46c8-83bb-92efb9272de9'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/59a7461c-72fe-4e01-86a7-c70243f31596.ovirt-guest-agent.0'/> <target type='virtio' name='ovirt-guest-agent.0' state='disconnected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/59a7461c-72fe-4e01-86a7-c70243f31596.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0' state='disconnected'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <graphics type='vnc' port='5900' autoport='yes' listen='192.168.7.18' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.7.18' network='vdsm-ovirtmgmt'/> </graphics> <graphics type='spice' port='5901' tlsPort='5902' autoport='yes' listen='192.168.7.18' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.7.18' network='vdsm-ovirtmgmt'/> <channel name='main' mode='secure'/> <channel name='display' mode='secure'/> <channel name='inputs' mode='secure'/> <channel name='cursor' mode='secure'/> 
<channel name='playback' mode='secure'/> <channel name='record' mode='secure'/> <channel name='smartcard' mode='secure'/> <channel name='usbredir' mode='secure'/> </graphics> <video> <model type='qxl' ram='65536' vram='8192' vgamem='16384' heads='1' primary='yes'/> <alias name='ua-799f065a-b2b9-4e37-a502-f86c7cc8dc51'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </video> <memballoon model='virtio'> <stats period='5'/> <alias name='ua-c2bfe0b9-065a-46b7-9b0b-ef7e0f699611'/> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </memballoon> <rng model='virtio'> <backend model='random'>/dev/urandom</backend> <alias name='ua-fa110e6b-5eed-4b4b-93d8-0ac5de08aa2e'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> </rng> </devices> <seclabel type='dynamic' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c437,c650</label> <imagelabel>system_u:object_r:svirt_image_t:s0:c437,c650</imagelabel> </seclabel> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+107:+107</label> <imagelabel>+107:+107</imagelabel> </seclabel> <qemu:capabilities> <qemu:add capability='blockdev'/> <qemu:add capability='incremental-backup'/> </qemu:capabilities> </domain> }, log id: 7eb54202

Here is the engine log when deleting the PVC:

2021-08-20 17:43:12,964 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2021-08-20 17:43:12,990 - cinderlib-client - INFO - Deleting volume '63a64445-1659-4d5f-8847-e7266e64b09e' [feefc62f-e7cb-435d-ae21-4b52b53fbdfa]
2021-08-20 17:43:28,856 - cinder.volume.drivers.rbd - WARNING - ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed.
2021-08-20 17:43:28,900 - cinderlib-client - ERROR - Failure occurred when trying to run command 'delete_volume': ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed.
[feefc62f-e7cb-435d-ae21-4b52b53fbdfa] 2021-08-20 17:43:28,901 - cinder - CRITICAL - Unhandled error Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 1195, in delete_volume _try_remove_volume(client, volume_name) File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 683, in _wrapper return r.call(f, *args, **kwargs) File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 409, in call do = self.iter(retry_state=retry_state) File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 368, in iter raise retry_exc.reraise() File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 186, in reraise raise self.last_attempt.result() File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 425, in result return self.__get_result() File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result raise self._exception File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 412, in call result = fn(*args, **kwargs) File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 1190, in _try_remove_volume self.RBDProxy().remove(client.ioctx, volume_name) File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit result = proxy_call(self._autowrap, f, *args, **kwargs) File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call rv = execute(f, *args, **kwargs) File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute six.reraise(c, e, tb) File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise raise value File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker rv = meth(*args, **kwargs) File "rbd.pyx", line 767, in rbd.RBD.remove rbd.ImageBusy: [errno 16] RBD image is busy (error removing image) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./cinderlib-client.py", line 170, in main args.command(args) File "./cinderlib-client.py", line 218, in delete_volume vol.delete() File "/usr/local/lib/python3.6/site-packages/cinderlib/objects.py", line 494, in delete self._raise_with_resource() File "/usr/local/lib/python3.6/site-packages/cinderlib/objects.py", line 222, in _raise_with_resource six.reraise(*exc_info) File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise raise value File "/usr/local/lib/python3.6/site-packages/cinderlib/objects.py", line 487, in delete self.backend.driver.delete_volume(self._ovo) File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 1205, in delete_volume raise exception.VolumeIsBusy(msg, volume_name=volume_name) cinder.exception.VolumeIsBusy: ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./cinderlib-client.py", line 390, in <module> sys.exit(main(sys.argv[1:])) File "./cinderlib-client.py", line 176, in main sys.stderr.write(traceback.format_exc(e)) File "/usr/lib64/python3.6/traceback.py", line 167, in format_exc return "".join(format_exception(*sys.exc_info(), limit=limit, chain=chain)) File "/usr/lib64/python3.6/traceback.py", line 121, in format_exception type(value), value, tb, limit=limit).format(chain=chain)) File "/usr/lib64/python3.6/traceback.py", line 498, in __init__ _seen=_seen) File "/usr/lib64/python3.6/traceback.py", line 509, in __init__ capture_locals=capture_locals) File "/usr/lib64/python3.6/traceback.py", line 338, in extract if limit >= 0: TypeError: '>=' not supported between instances of 'VolumeIsBusy' and 'int'
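
One way to narrow down where the cleanup stops is to check what the Kubernetes side still believes about the attachment before touching oVirt or Ceph. The commands below are standard kubectl; the PVC name is only an example and should be replaced with the stuck PVC:

# Does a CSI VolumeAttachment still exist for the PV, and is it marked attached?
$ kubectl get volumeattachments

# Find the PV bound to the stuck PVC (PVC name is illustrative).
$ kubectl get pvc example-pvc -o jsonpath='{.spec.volumeName}{"\n"}'

# Events and finalizers on the PVC show why the deletion is blocked.
$ kubectl describe pvc example-pvc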

Pod deletion should invoke unpublishing of the PVC, which detaches it from the node; this can be seen in the engine log:

2021-08-20 17:40:35,664+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] START, HotUnPlugDiskVDSCommand(HostName = host, HotPlugDiskVDSParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vmId='59a7461c-72fe-4e01-86a7-c70243f31596', diskId='63a64445-1659-4d5f-8847-e7266e64b09e'}), log id: 506ff4a4
2021-08-20 17:40:35,678+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] Disk hot-unplug: <?xml version="1.0" encoding="UTF-8"?><hotunplug> <devices> <disk> <alias name="ua-63a64445-1659-4d5f-8847-e7266e64b09e"/> </disk> </devices> </hotunplug>
2021-08-20 17:40:35,749+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] FINISH, HotUnPlugDiskVDSCommand, return: , log id: 506ff4a4
2021-08-20 17:40:35,842+09 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] EVENT_ID: USER_DETACH_DISK_FROM_VM(2,018), Disk pvc-9845a0ff-e94c-497c-8c65-fc6a1e26db20 was successfully detached from VM centos by admin@internal-authz.

I suspect something keeps the volume busy. You can run:

$ rbd status <pool_name>/volume-63a64445-1659-4d5f-8847-e7266e64b09e

On Mon, Aug 23, 2021 at 3:56 AM <ssarang520@gmail.com> wrote:
Hi all,
I deployed ovirt-csi on Kubernetes by applying the YAML manifests manually, using the latest version of the container image (https://github.com/openshift/ovirt-csi-driver-operator/tree/master/assets).
After successfully creating a PVC and a pod, I tried to delete them. The pod is deleted, but the PVC is not. This is because deleting the pod does not unmap /dev/rbd0, which is attached to the oVirt VM.
How can I delete the PVC successfully?
oVirt engine version is 4.4.7.6-1.el8.
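
Expanding on the rbd status suggestion above, checking for a leftover RBD client looks roughly like this. The pool and image names are the ones from this thread; rbd status needs a host with the Ceph client configured, and rbd showmapped has to run on whichever machine may still hold the kernel mapping:

# Show open clients (watchers) of the image backing the PVC.
$ rbd status <pool_name>/volume-63a64445-1659-4d5f-8847-e7266e64b09e

# On the machine suspected of holding the mapping (the oVirt host, or the
# VM used as the k8s node), list kernel RBD mappings.
$ rbd showmapped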

When I check the status of the rbd volume, a watcher still exists. The watcher is /dev/rbd0 in the oVirt VM.

$ rbd status mypool/volume-3643db6c-38a6-4a21-abb3-ce8cc15e8c86
Watchers:
	watcher=192.168.7.18:0/1903159992 client.44942 cookie=18446462598732840963

The attachment information was also left in the volume_attachment table of the ovirt_cinderlib DB. After manually unmapping /dev/rbd0 in the oVirt VM and deleting the DB row, the PVC was deleted normally. Shouldn't those tasks be done when deleting the pod?
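
For reference, the manual workaround described above amounts to roughly the following. The device, pool, database and table names are the ones mentioned in this thread; the psql invocation and the volume_id column are assumptions based on the standard cinder schema, so treat this as a sketch rather than a supported procedure:

# On the machine that still holds the mapping, release the kernel RBD client.
$ sudo rbd unmap /dev/rbd0

# Confirm the watcher is gone.
$ rbd status mypool/volume-3643db6c-38a6-4a21-abb3-ce8cc15e8c86

# On the engine machine, remove the stale attachment row
# (DB/table names as mentioned above; adjust credentials to your setup).
$ sudo -u postgres psql ovirt_cinderlib -c \
    "DELETE FROM volume_attachment WHERE volume_id = '3643db6c-38a6-4a21-abb3-ce8cc15e8c86';"

After that, retrying the PVC deletion is what succeeded in the report above.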

It should do this, and it's not semantically different from what happens with non-MBS disks. The log I pasted is what unmaps the volume; I am not sure why it returned successfully if the volume wasn't unmapped. If possible, please attach the vdsm and supervdsm logs from the relevant time frame; perhaps there's some clue there. But we essentially use cinderlib's `disconnect`, so perhaps it didn't error.

On Mon, Aug 23, 2021 at 10:05 AM <ssarang520@gmail.com> wrote:
When I check the status of the rbd volume, a watcher still exists. The watcher is /dev/rbd0 in the oVirt VM.
$ rbd status mypool/volume-3643db6c-38a6-4a21-abb3-ce8cc15e8c86
Watchers: watcher=192.168.7.18:0/1903159992 client.44942 cookie=18446462598732840963
The attachment information was also left in the volume_attachment table of the ovirt_cinderlib DB.
After manually unmapping /dev/rbd0 in the oVirt VM and deleting the DB row, the PVC was deleted normally. Shouldn't those tasks be done when deleting the pod?
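
The vdsm and supervdsm logs mentioned above live on the host in their default locations; grepping them for the volume/disk ID around the time of the pod deletion is usually enough to see whether a detach or unmap was even attempted. The paths are the standard vdsm ones, and the ID is the disk from the failing run:

# On the oVirt host that runs the VM:
$ grep -i '63a64445-1659-4d5f-8847-e7266e64b09e' /var/log/vdsm/vdsm.log
$ grep -i '63a64445-1659-4d5f-8847-e7266e64b09e' /var/log/vdsm/supervdsm.log

# Managed block storage attach/detach calls typically show up in supervdsm.log.
$ grep -iE 'attach_volume|detach_volume' /var/log/vdsm/supervdsm.log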

There were no error logs in vdsm or supervdsm.

I also found that [org.ovirt.engine.core.bll.storage.disk.managedblock.DisconnectManagedBlockStorageDeviceCommand] and [org.ovirt.engine.core.vdsbroker.vdsbroker.DetachManagedBlockStorageVolumeVDSCommand] are called when the disk is detached from the oVirt VM through the engine. However, in the log I gave first, there is no point where the corresponding commands are called. Isn't that a bug?

Here is the engine log from detaching the disk:

2021-08-23 10:29:43,972+09 INFO [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand] (default task-176) [2538ba78-6916-431c-b3bc-b98b26515842] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f17702e4-ba97-4f95-a6d4-b89de003bd26=DISK]', sharedLocks='[59a7461c-72fe-4e01-86a7-c70243f31596=VM]'}'
2021-08-23 10:29:44,054+09 INFO [org.ovirt.engine.core.bll.storage.disk.HotUnPlugDiskFromVmCommand] (EE-ManagedThreadFactory-engine-Thread-149928) [2538ba78-6916-431c-b3bc-b98b26515842] Running command: HotUnPlugDiskFromVmCommand internal: false. Entities affected : ID: 59a7461c-72fe-4e01-86a7-c70243f31596 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2021-08-23 10:29:44,076+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-149928) [2538ba78-6916-431c-b3bc-b98b26515842] START, HotUnPlugDiskVDSCommand(HostName = host, HotPlugDiskVDSParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vmId='59a7461c-72fe-4e01-86a7-c70243f31596', diskId='f17702e4-ba97-4f95-a6d4-b89de003bd26'}), log id: 1c39f09a
2021-08-23 10:29:44,078+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-149928) [2538ba78-6916-431c-b3bc-b98b26515842] Disk hot-unplug: <?xml version="1.0" encoding="UTF-8"?><hotunplug> <devices> <disk> <alias name="ua-f17702e4-ba97-4f95-a6d4-b89de003bd26"/> </disk> </devices> </hotunplug>
2021-08-23 10:29:44,218+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-149928) [2538ba78-6916-431c-b3bc-b98b26515842] FINISH, HotUnPlugDiskVDSCommand, return: , log id: 1c39f09a
2021-08-23 10:29:44,471+09 INFO [org.ovirt.engine.core.bll.storage.disk.managedblock.DisconnectManagedBlockStorageDeviceCommand] (EE-ManagedThreadFactory-engine-Thread-149928) [2a452b04] Running command: DisconnectManagedBlockStorageDeviceCommand internal: true.
2021-08-23 10:29:44,514+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DetachManagedBlockStorageVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-149928) [2a452b04] START, DetachManagedBlockStorageVolumeVDSCommand(HostName = host, AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vds='Host[host,29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094]'}), log id: 2d6874a5
2021-08-23 10:29:46,683+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DetachManagedBlockStorageVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-149928) [2a452b04] FINISH, DetachManagedBlockStorageVolumeVDSCommand, return: StatusOnlyReturn [status=Status [code=0, message=Done]], log id: 2d6874a5
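
To compare the two flows, it can help to grep the engine log for the managed block storage commands; if the CSI-triggered pod deletion never logs them while the manual detach above does, that supports the suspicion of a missing disconnect step. The log path is the default engine location:

# On the engine machine:
$ grep -E 'DisconnectManagedBlockStorageDeviceCommand|DetachManagedBlockStorageVolumeVDSCommand' \
    /var/log/ovirt-engine/engine.log

# Narrow to the time of the pod deletion on 2021-08-20:
$ grep '2021-08-20 17:40' /var/log/ovirt-engine/engine.log | grep -i ManagedBlockStorage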

Yes, it should indeed defer to DetachManagedBlockStorageVolumeVDSCommand, which is what does the unmapping. Do you have an earlier log that shows the XML (for example, from when the disk was attached)?

On Mon, Aug 23, 2021 at 10:59 AM <ssarang520@gmail.com> wrote:

I attached an MBS disk to the running VM through the dashboard. Here is the engine log:
2021-08-23 19:00:54,912+09 INFO [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (default task-209) [28eaa439-0bce-456d-8931-f1edc74ca71b] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f17702e4-ba97-4f95-a6d4-b89de003bd26=DISK]', sharedLocks='[59a7461c-72fe-4e01-86a7-c70243f31596=VM]'}'
2021-08-23 19:00:54,917+09 INFO [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (EE-ManagedThreadFactory-engine-Thread-154035) [28eaa439-0bce-456d-8931-f1edc74ca71b] Running command: HotPlugDiskToVmCommand internal: false. Entities affected : ID: 59a7461c-72fe-4e01-86a7-c70243f31596 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2021-08-23 19:00:54,922+09 INFO [org.ovirt.engine.core.bll.storage.disk.managedblock.ConnectManagedBlockStorageDeviceCommand] (EE-ManagedThreadFactory-engine-Thread-154035) [373c4bed] Running command: ConnectManagedBlockStorageDeviceCommand internal: true.
2021-08-23 19:00:59,441+09 INFO [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] (EE-ManagedThreadFactory-engine-Thread-154035) [373c4bed] cinderlib output: {"driver_volume_type": "rbd", "data": {"name": "mypool/volume-f17702e4-ba97-4f95-a6d4-b89de003bd26", "hosts": ["172.22.5.6"], "ports": ["6789"], "cluster_name": "ceph", "auth_enabled": true, "auth_username": "admin", "secret_type": "ceph", "secret_uuid": null, "volume_id": "f17702e4-ba97-4f95-a6d4-b89de003bd26", "discard": true, "keyring": "[client.admin]\n\tkey = AQCjBFhgjRWFOBAAMxEaJ3yffC50GDFWnR43DQ==\n", "access_mode": "rw"}}
2021-08-23 19:00:59,442+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-154035) [373c4bed] START, AttachManagedBlockStorageVolumeVDSCommand(HostName = host, AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vds='Host[host,29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094]'}), log id: 5657b4a1
2021-08-23 19:01:02,715+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-154035) [373c4bed] FINISH, AttachManagedBlockStorageVolumeVDSCommand, return: {attachment={path=/dev/rbd1, conf=/tmp/brickrbd_it_6m0e4, type=block}, path=/dev/rbd/mypool/volume-f17702e4-ba97-4f95-a6d4-b89de003bd26, vol_id=f17702e4-ba97-4f95-a6d4-b89de003bd26}, log id: 5657b4a1
2021-08-23 19:01:02,817+09 INFO [org.ovirt.engine.core.bll.storage.disk.managedblock.SaveManagedBlockStorageDiskDeviceCommand] (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] Running command: SaveManagedBlockStorageDiskDeviceCommand internal: true.
2021-08-23 19:01:09,072+09 INFO [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] cinderlib output: 2021-08-23 19:01:09,077+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] START, HotPlugDiskVDSCommand(HostName = host, HotPlugDiskVDSParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vmId='59a7461c-72fe-4e01-86a7-c70243f31596', diskId='f17702e4-ba97-4f95-a6d4-b89de003bd26'}), log id: 5acbdc16 2021-08-23 19:01:09,111+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?><hotplug> <devices> <disk snapshot="no" type="block" device="disk"> <target dev="sda" bus="scsi"/> <source dev="/dev/rbd/mypool/volume-f17702e4-ba97-4f95-a6d4-b89de003bd26"> <seclabel model="dac" type="none" relabel="no"/> </source> <driver name="qemu" cache="none"/> <alias name="ua-f17702e4-ba97-4f95-a6d4-b89de003bd26"/> <address bus="0" controller="0" unit="1" type="drive" target="0"/> <serial>f17702e4-ba97-4f95-a6d4-b89de003bd26</serial> </disk> </devices> <metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ovirt-vm:vm> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:RBD>/dev/rbd/mypool/volume-f17702e4-ba97-4f95-a6d4-b89de003bd26</ovirt-vm:RBD> </ovirt-vm:device> </ovirt-vm:vm> </metadata> </hotplug> 2021-08-23 19:01:09,221+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] FINISH, HotPlugDiskVDSCommand, return: , log id: 5acbdc16 2021-08-23 19:01:09,358+09 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] EVENT_ID: USER_HOTPLUG_DISK(2,000), VM centos disk mbs was plugged by admin@internal-authz. 
2021-08-23 19:01:09,358+09 INFO [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (EE-ManagedThreadFactory-engine-Thread-154035) [50d635e3] Lock freed to object 'EngineLock:{exclusiveLocks='[f17702e4-ba97-4f95-a6d4-b89de003bd26=DISK]', sharedLocks='[59a7461c-72fe-4e01-86a7-c70243f31596=VM]'}' 2021-08-23 19:01:10,916+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [] START, DumpXmlsVDSCommand(HostName = host, Params:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vmIds='[59a7461c-72fe-4e01-86a7-c70243f31596]'}), log id: 22d45ea6 2021-08-23 19:01:10,919+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-68) [] FINISH, DumpXmlsVDSCommand, return: {59a7461c-72fe-4e01-86a7-c70243f31596=<domain type='kvm' id='1' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> <name>centos</name> <uuid>59a7461c-72fe-4e01-86a7-c70243f31596</uuid> <metadata xmlns:ns1="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ns1:qos/> <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ovirt-vm:balloonTarget type="int">4194304</ovirt-vm:balloonTarget> <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled> <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion> <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize> <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior> <ovirt-vm:startTime type="float">1628558564.8754532</ovirt-vm:startTime> <ovirt-vm:device alias="ua-7c9f38e9-8889-46c8-83bb-92efb9272de9" mac_address="56:6f:16:a8:00:07"> <ovirt-vm:network>ovirtmgmt</ovirt-vm:network> <ovirt-vm:custom> <ovirt-vm:queues>2</ovirt-vm:queues> </ovirt-vm:custom> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda"> <ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID> <ovirt-vm:guestName>/dev/sda</ovirt-vm:guestName> <ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID> <ovirt-vm:managed type="bool">False</ovirt-vm:managed> <ovirt-vm:poolID>4ca6e0e8-e3a4-11eb-8830-480fcf63834f</ovirt-vm:poolID> <ovirt-vm:volumeID>a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:volumeID> <ovirt-vm:volumeChain> <ovirt-vm:volumeChainNode> <ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID> <ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID> <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset> <ovirt-vm:leasePath>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003.lease</ovirt-vm:leasePath> <ovirt-vm:path>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003</ovirt-vm:path> <ovirt-vm:volumeID>4425404b-17f0-4519-812f-b09a952a9003</ovirt-vm:volumeID> </ovirt-vm:volumeChainNode> <ovirt-vm:volumeChainNode> <ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID> <ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID> <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset> 
<ovirt-vm:leasePath>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4.lease</ovirt-vm:leasePath> <ovirt-vm:path>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:path> <ovirt-vm:volumeID>a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:volumeID> </ovirt-vm:volumeChainNode> </ovirt-vm:volumeChain> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sdc"> <ovirt-vm:managed type="bool">False</ovirt-vm:managed> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sdae"> <ovirt-vm:RBD>/dev/rbd/mypool/volume-f17702e4-ba97-4f95-a6d4-b89de003bd26</ovirt-vm:RBD> <ovirt-vm:managed type="bool">False</ovirt-vm:managed> </ovirt-vm:device> </ovirt-vm:vm> </metadata> <maxMemory slots='16' unit='KiB'>16777216</maxMemory> <memory unit='KiB'>4194304</memory> <currentMemory unit='KiB'>4194304</currentMemory> <vcpu placement='static' current='2'>16</vcpu> <resource> <partition>/machine</partition> </resource> <sysinfo type='smbios'> <system> <entry name='manufacturer'>oVirt</entry> <entry name='product'>RHEL</entry> <entry name='version'>8.4-1.2105.el8</entry> <entry name='serial'>83e66af8-0500-11e6-9c43-bc00007c0000</entry> <entry name='uuid'>59a7461c-72fe-4e01-86a7-c70243f31596</entry> <entry name='family'>oVirt</entry> </system> </sysinfo> <os> <type arch='x86_64' machine='pc-q35-rhel8.4.0'>hvm</type> <boot dev='hd'/> <smbios mode='sysinfo'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact' check='full'> <model fallback='forbid'>Skylake-Client-noTSX-IBRS</model> <topology sockets='16' dies='1' cores='1' threads='1'/> <feature policy='require' name='ssbd'/> <feature policy='require' name='md-clear'/> <feature policy='disable' name='mpx'/> <feature policy='require' name='hypervisor'/> <numa> <cell id='0' cpus='0-15' memory='4194304' unit='KiB'/> </numa> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver name='qemu' error_policy='report'/> <source startupPolicy='optional'/> <target dev='sdc' bus='sata'/> <readonly/> <alias name='ua-df0ac774-3623-4868-8bd3-45c8f2aa3dc4'/> <address type='drive' controller='0' bus='0' target='0' unit='2'/> </disk> <disk type='file' device='disk' snapshot='no'> <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads'/> <source file='/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4' index='8'> <seclabel model='dac' relabel='no'/> </source> <backingStore type='file' index='1'> <format type='qcow2'/> <source file='/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003'> <seclabel model='dac' relabel='no'/> </source> <backingStore/> </backingStore> <target dev='sda' bus='scsi'/> <serial>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</serial> <alias 
name='ua-bee44276-234f-4ed7-8a8a-d90a5e3cb5b3'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='block' device='disk' snapshot='no'> <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/> <source dev='/dev/rbd/mypool/volume-f17702e4-ba97-4f95-a6d4-b89de003bd26' index='32'> <seclabel model='dac' relabel='no'/> </source> <backingStore/> <target dev='sdae' bus='scsi'/> <serial>f17702e4-ba97-4f95-a6d4-b89de003bd26</serial> <alias name='ua-f17702e4-ba97-4f95-a6d4-b89de003bd26'/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x14'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0x15'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0x16'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/> </controller> <controller type='pci' index='8' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='8' port='0x17'/> <alias name='pci.8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/> </controller> <controller type='pci' index='9' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='9' port='0x18'/> <alias name='pci.9'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='10' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='10' port='0x19'/> <alias name='pci.10'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/> </controller> <controller type='pci' index='11' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='11' port='0x1a'/> <alias name='pci.11'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/> </controller> <controller type='pci' index='12' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='12' port='0x1b'/> <alias name='pci.12'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/> </controller> <controller type='pci' index='13' model='pcie-root-port'> <model 
name='pcie-root-port'/> <target chassis='13' port='0x1c'/> <alias name='pci.13'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/> </controller> <controller type='pci' index='14' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='14' port='0x1d'/> <alias name='pci.14'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/> </controller> <controller type='pci' index='15' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='15' port='0x1e'/> <alias name='pci.15'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/> </controller> <controller type='pci' index='16' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='16' port='0x1f'/> <alias name='pci.16'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='virtio-serial' index='0' ports='16'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </controller> <controller type='scsi' index='0' model='virtio-scsi'> <alias name='ua-82c49f93-c4e8-460b-bb7d-95db0e9d87a0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='usb' index='0' model='qemu-xhci' ports='8'> <alias name='ua-ad56daea-edb1-45c7-a1ab-2a7db3aaeee2'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </controller> <interface type='bridge'> <mac address='56:6f:16:a8:00:07'/> <source bridge='ovirtmgmt'/> <target dev='vnet0'/> <model type='virtio'/> <driver name='vhost' queues='2'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <mtu size='1500'/> <alias name='ua-7c9f38e9-8889-46c8-83bb-92efb9272de9'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/59a7461c-72fe-4e01-86a7-c70243f31596.ovirt-guest-agent.0'/> <target type='virtio' name='ovirt-guest-agent.0' state='disconnected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/59a7461c-72fe-4e01-86a7-c70243f31596.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0' state='disconnected'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <graphics type='vnc' port='5900' autoport='yes' listen='192.168.7.18' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.7.18' network='vdsm-ovirtmgmt'/> </graphics> <graphics type='spice' port='5901' tlsPort='5902' autoport='yes' listen='192.168.7.18' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.7.18' network='vdsm-ovirtmgmt'/> 
<channel name='main' mode='secure'/> <channel name='display' mode='secure'/> <channel name='inputs' mode='secure'/> <channel name='cursor' mode='secure'/> <channel name='playback' mode='secure'/> <channel name='record' mode='secure'/> <channel name='smartcard' mode='secure'/> <channel name='usbredir' mode='secure'/> </graphics> <video> <model type='qxl' ram='65536' vram='8192' vgamem='16384' heads='1' primary='yes'/> <alias name='ua-799f065a-b2b9-4e37-a502-f86c7cc8dc51'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </video> <memballoon model='virtio'> <stats period='5'/> <alias name='ua-c2bfe0b9-065a-46b7-9b0b-ef7e0f699611'/> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </memballoon> <rng model='virtio'> <backend model='random'>/dev/urandom</backend> <alias name='ua-fa110e6b-5eed-4b4b-93d8-0ac5de08aa2e'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> </rng> </devices> <seclabel type='dynamic' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c437,c650</label> <imagelabel>system_u:object_r:svirt_image_t:s0:c437,c650</imagelabel> </seclabel> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+107:+107</label> <imagelabel>+107:+107</imagelabel> </seclabel> <qemu:capabilities> <qemu:add capability='blockdev'/> <qemu:add capability='incremental-backup'/> </qemu:capabilities> </domain> }, log id: 22d45ea6

This is a correct run, right? The original flow works with this one?

On Mon, Aug 23, 2021 at 1:05 PM <ssarang520@gmail.com> wrote:

Yes, that's right. I can attach and detach an MBS disk to the oVirt VM normally through the dashboard.
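(For what it's worth, the same attach/detach can also be driven through the REST API instead of the dashboard. This is a rough, untested sketch with ovirt-engine-sdk-python; the engine URL and credentials are placeholders, the disk id is taken from the log above, and the interface choice is an assumption:)

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials; adjust to your environment.
connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="password",
    insecure=True,
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search="name=centos")[0]
attachments_service = vms_service.vm_service(vm.id).disk_attachments_service()

# Attach the managed block storage disk (id taken from the engine log above).
attachment = attachments_service.add(
    types.DiskAttachment(
        disk=types.Disk(id="f17702e4-ba97-4f95-a6d4-b89de003bd26"),
        interface=types.DiskInterface.VIRTIO_SCSI,
        active=True,
    )
)

# Detach it again; the engine should then run the managed block storage
# disconnect flow, as in the log above.
attachments_service.attachment_service(attachment.id).remove()
connection.close()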

And what about the full flow, with CSI? I'm trying to determine whether the CSI driver does something wrong, or whether something went wrong during that specific run.

On Mon, Aug 23, 2021 at 2:34 PM <ssarang520@gmail.com> wrote:

With CSI, a PVC is created and a pod is created that attaches the PVC. And if I delete the pod, /dev/rbd0 connected to the oVirt VM is not released and the connection information remains in the DB. So I cannot delete the PVC and cannot attach it again.
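To make the flow concrete, this is roughly the reproduction (a minimal sketch with the Python kubernetes client; the storage class name "ovirt-csi-sc", the namespace and the pod image are assumptions, not taken from my actual manifests):

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "default"

# 1. Create a PVC backed by the ovirt-csi storage class.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="test-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ovirt-csi-sc",
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(ns, pvc)

# 2. Create a pod that mounts the PVC, which attaches the MBS disk to the node VM.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="test-pod"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="busybox",
            command=["sleep", "3600"],
            volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
        )],
        volumes=[client.V1Volume(
            name="data",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="test-pvc"),
        )],
    ),
)
core.create_namespaced_pod(ns, pod)

# 3. Once the pod is Running, delete it and then try to delete the PVC.
# With the behaviour described above, the PVC deletion never completes,
# because the rbd device is not unmapped from the oVirt VM.
core.delete_namespaced_pod("test-pod", ns)
core.delete_namespaced_persistent_volume_claim("test-pvc", ns)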

Also, you can't migrate your PVC between pools. I created a ticket for that [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1997241

k
On 24 Aug 2021, at 04:13, ssarang520@gmail.com wrote:

Hey there,

The CSI driver in this repository is built with OpenShift in mind and does not have an upstream that is intended to work with vanilla Kubernetes. Even if you get it to work now, it may break in the future.

We have had some discussions around reviving the upstream for the CSI driver, which is located here:
https://github.com/oVirt/csi-driver
We are also doing a fair bit of work that will make the process easier here:
https://github.com/oVirt/go-ovirt-client
However, at this time we haven't made any progress that would be useful to you.

With that in mind, I would recommend reporting possible bugs either on GitHub or on Bugzilla; that way they will reach us quicker, even if we don't officially support vanilla Kubernetes.

Cheers,
Janos

On Mon, Aug 23, 2021 at 12:57 AM <ssarang520@gmail.com> wrote:
2021-08-20 17:40:35,749+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] FINISH, HotUnPlugDiskVDSCommand, return: , log id: 506ff4a4 2021-08-20 17:40:35,842+09 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] EVENT_ID: USER_DETACH_DISK_FROM_VM(2,018), Disk pvc-9845a0ff-e94c-497c-8c65-fc6a1e26db20 was successfully detached from VM centos by admin@internal-authz. 2021-08-20 17:40:35,916+09 ERROR [org.ovirt.engine.core.sso.service.SsoService] (default task-150) [] OAuthException invalid_grant: The provided authorization grant for the auth code has expired. 2021-08-20 17:40:35,917+09 ERROR [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default task-149) [] Cannot authenticate using authentication Headers: invalid_grant: The provided authorization grant for the auth code has expired. 2021-08-20 17:40:36,029+09 INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-149) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2021-08-20 17:40:36,046+09 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-149) [4c4bf441] Running command: CreateUserSessionCommand internal: false. 2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null 2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null 2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null 2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null 2021-08-20 17:40:49,241+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-27) [] START, DumpXmlsVDSCommand(HostName = host, Params:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vmIds='[59a7461c-72fe-4e01-86a7-c70243f31596]'}), log id: 7eb54202 2021-08-20 17:40:49,244+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-27) [] FINISH, DumpXmlsVDSCommand, return: {59a7461c-72fe-4e01-86a7-c70243f31596=<domain type='kvm' id='1' xmlns:qemu=' http://libvirt.org/schemas/domain/qemu/1.0'> <name>centos</name> <uuid>59a7461c-72fe-4e01-86a7-c70243f31596</uuid> <metadata xmlns:ns1="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm=" http://ovirt.org/vm/1.0"> <ns1:qos/> <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ovirt-vm:balloonTarget type="int">4194304</ovirt-vm:balloonTarget> <ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled> <ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion> <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot> 
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize> <ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior> <ovirt-vm:startTime type="float">1628558564.8754532</ovirt-vm:startTime> <ovirt-vm:device alias="ua-7c9f38e9-8889-46c8-83bb-92efb9272de9" mac_address="56:6f:16:a8:00:07"> <ovirt-vm:network>ovirtmgmt</ovirt-vm:network> <ovirt-vm:custom> <ovirt-vm:queues>2</ovirt-vm:queues> </ovirt-vm:custom> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID> <ovirt-vm:guestName>/dev/sda</ovirt-vm:guestName>
<ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID> <ovirt-vm:managed type="bool">False</ovirt-vm:managed>
<ovirt-vm:poolID>4ca6e0e8-e3a4-11eb-8830-480fcf63834f</ovirt-vm:poolID>
<ovirt-vm:volumeID>a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:volumeID> <ovirt-vm:volumeChain> <ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID>
<ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID> <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset> <ovirt-vm:leasePath>/rhev/data-center/mnt/192.168.7.18: _home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003.lease</ovirt-vm:leasePath> <ovirt-vm:path>/rhev/data-center/mnt/192.168.7.18: _home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003</ovirt-vm:path>
<ovirt-vm:volumeID>4425404b-17f0-4519-812f-b09a952a9003</ovirt-vm:volumeID> </ovirt-vm:volumeChainNode> <ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID>
<ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID> <ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset> <ovirt-vm:leasePath>/rhev/data-center/mnt/192.168.7.18: _home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4.lease</ovirt-vm:leasePath> <ovirt-vm:path>/rhev/data-center/mnt/192.168.7.18: _home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:path>
<ovirt-vm:volumeID>a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:volumeID> </ovirt-vm:volumeChainNode> </ovirt-vm:volumeChain> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="sdc"> <ovirt-vm:managed type="bool">False</ovirt-vm:managed> </ovirt-vm:device> </ovirt-vm:vm> </metadata> <maxMemory slots='16' unit='KiB'>16777216</maxMemory> <memory unit='KiB'>4194304</memory> <currentMemory unit='KiB'>4194304</currentMemory> <vcpu placement='static' current='2'>16</vcpu> <resource> <partition>/machine</partition> </resource> <sysinfo type='smbios'> <system> <entry name='manufacturer'>oVirt</entry> <entry name='product'>RHEL</entry> <entry name='version'>8.4-1.2105.el8</entry> <entry name='serial'>83e66af8-0500-11e6-9c43-bc00007c0000</entry> <entry name='uuid'>59a7461c-72fe-4e01-86a7-c70243f31596</entry> <entry name='family'>oVirt</entry> </system> </sysinfo> <os> <type arch='x86_64' machine='pc-q35-rhel8.4.0'>hvm</type> <boot dev='hd'/> <smbios mode='sysinfo'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact' check='full'> <model fallback='forbid'>Skylake-Client-noTSX-IBRS</model> <topology sockets='16' dies='1' cores='1' threads='1'/> <feature policy='require' name='ssbd'/> <feature policy='require' name='md-clear'/> <feature policy='disable' name='mpx'/> <feature policy='require' name='hypervisor'/> <numa> <cell id='0' cpus='0-15' memory='4194304' unit='KiB'/> </numa> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver name='qemu' error_policy='report'/> <source startupPolicy='optional'/> <target dev='sdc' bus='sata'/> <readonly/> <alias name='ua-df0ac774-3623-4868-8bd3-45c8f2aa3dc4'/> <address type='drive' controller='0' bus='0' target='0' unit='2'/> </disk> <disk type='file' device='disk' snapshot='no'> <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads'/> <source file='/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4' index='8'> <seclabel model='dac' relabel='no'/> </source> <backingStore type='file' index='1'> <format type='qcow2'/> <source file='/rhev/data-center/mnt/192.168.7.18: _home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003'> <seclabel model='dac' relabel='no'/> </source> <backingStore/> </backingStore> <target dev='sda' bus='scsi'/> <serial>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</serial> <alias name='ua-bee44276-234f-4ed7-8a8a-d90a5e3cb5b3'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' 
model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x14'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0x15'/> <alias name='pci.6'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0x16'/> <alias name='pci.7'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/> </controller> <controller type='pci' index='8' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='8' port='0x17'/> <alias name='pci.8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/> </controller> <controller type='pci' index='9' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='9' port='0x18'/> <alias name='pci.9'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='10' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='10' port='0x19'/> <alias name='pci.10'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/> </controller> <controller type='pci' index='11' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='11' port='0x1a'/> <alias name='pci.11'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/> </controller> <controller type='pci' index='12' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='12' port='0x1b'/> <alias name='pci.12'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/> </controller> <controller type='pci' index='13' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='13' port='0x1c'/> <alias name='pci.13'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/> </controller> <controller type='pci' index='14' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='14' port='0x1d'/> <alias name='pci.14'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/> </controller> <controller type='pci' index='15' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='15' port='0x1e'/> <alias name='pci.15'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/> </controller> <controller type='pci' index='16' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='16' port='0x1f'/> <alias name='pci.16'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='virtio-serial' 
index='0' ports='16'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </controller> <controller type='scsi' index='0' model='virtio-scsi'> <alias name='ua-82c49f93-c4e8-460b-bb7d-95db0e9d87a0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='usb' index='0' model='qemu-xhci' ports='8'> <alias name='ua-ad56daea-edb1-45c7-a1ab-2a7db3aaeee2'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </controller> <interface type='bridge'> <mac address='56:6f:16:a8:00:07'/> <source bridge='ovirtmgmt'/> <target dev='vnet0'/> <model type='virtio'/> <driver name='vhost' queues='2'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <mtu size='1500'/> <alias name='ua-7c9f38e9-8889-46c8-83bb-92efb9272de9'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/59a7461c-72fe-4e01-86a7-c70243f31596.ovirt-guest-agent.0'/> <target type='virtio' name='ovirt-guest-agent.0' state='disconnected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/59a7461c-72fe-4e01-86a7-c70243f31596.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0' state='disconnected'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <graphics type='vnc' port='5900' autoport='yes' listen='192.168.7.18' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.7.18' network='vdsm-ovirtmgmt'/> </graphics> <graphics type='spice' port='5901' tlsPort='5902' autoport='yes' listen='192.168.7.18' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.7.18' network='vdsm-ovirtmgmt'/> <channel name='main' mode='secure'/> <channel name='display' mode='secure'/> <channel name='inputs' mode='secure'/> <channel name='cursor' mode='secure'/> <channel name='playback' mode='secure'/> <channel name='record' mode='secure'/> <channel name='smartcard' mode='secure'/> <channel name='usbredir' mode='secure'/> </graphics> <video> <model type='qxl' ram='65536' vram='8192' vgamem='16384' heads='1' primary='yes'/> <alias name='ua-799f065a-b2b9-4e37-a502-f86c7cc8dc51'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </video> <memballoon model='virtio'> <stats period='5'/> <alias name='ua-c2bfe0b9-065a-46b7-9b0b-ef7e0f699611'/> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </memballoon> <rng model='virtio'> <backend model='random'>/dev/urandom</backend> <alias name='ua-fa110e6b-5eed-4b4b-93d8-0ac5de08aa2e'/> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> </rng> </devices> <seclabel type='dynamic' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c437,c650</label> 
<imagelabel>system_u:object_r:svirt_image_t:s0:c437,c650</imagelabel> </seclabel> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+107:+107</label> <imagelabel>+107:+107</imagelabel> </seclabel> <qemu:capabilities> <qemu:add capability='blockdev'/> <qemu:add capability='incremental-backup'/> </qemu:capabilities> </domain> }, log id: 7eb54202
Here is the engine log when deleting the pvc:
2021-08-20 17:43:12,964 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2021-08-20 17:43:12,990 - cinderlib-client - INFO - Deleting volume '63a64445-1659-4d5f-8847-e7266e64b09e' [feefc62f-e7cb-435d-ae21-4b52b53fbdfa]
2021-08-20 17:43:28,856 - cinder.volume.drivers.rbd - WARNING - ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed.
2021-08-20 17:43:28,900 - cinderlib-client - ERROR - Failure occurred when trying to run command 'delete_volume': ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed. [feefc62f-e7cb-435d-ae21-4b52b53fbdfa]
2021-08-20 17:43:28,901 - cinder - CRITICAL - Unhandled error
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 1195, in delete_volume
    _try_remove_volume(client, volume_name)
  File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 683, in _wrapper
    return r.call(f, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 409, in call
    do = self.iter(retry_state=retry_state)
  File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 368, in iter
    raise retry_exc.reraise()
  File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 186, in reraise
    raise self.last_attempt.result()
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
  File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 412, in call
    result = fn(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 1190, in _try_remove_volume
    self.RBDProxy().remove(client.ioctx, volume_name)
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
    six.reraise(c, e, tb)
  File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
    raise value
  File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
    rv = meth(*args, **kwargs)
  File "rbd.pyx", line 767, in rbd.RBD.remove
rbd.ImageBusy: [errno 16] RBD image is busy (error removing image)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "./cinderlib-client.py", line 170, in main
    args.command(args)
  File "./cinderlib-client.py", line 218, in delete_volume
    vol.delete()
  File "/usr/local/lib/python3.6/site-packages/cinderlib/objects.py", line 494, in delete
    self._raise_with_resource()
  File "/usr/local/lib/python3.6/site-packages/cinderlib/objects.py", line 222, in _raise_with_resource
    six.reraise(*exc_info)
  File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
    raise value
  File "/usr/local/lib/python3.6/site-packages/cinderlib/objects.py", line 487, in delete
    self.backend.driver.delete_volume(self._ovo)
  File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 1205, in delete_volume
    raise exception.VolumeIsBusy(msg, volume_name=volume_name)
cinder.exception.VolumeIsBusy: ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "./cinderlib-client.py", line 390, in <module>
    sys.exit(main(sys.argv[1:]))
  File "./cinderlib-client.py", line 176, in main
    sys.stderr.write(traceback.format_exc(e))
  File "/usr/lib64/python3.6/traceback.py", line 167, in format_exc
    return "".join(format_exception(*sys.exc_info(), limit=limit, chain=chain))
  File "/usr/lib64/python3.6/traceback.py", line 121, in format_exception
    type(value), value, tb, limit=limit).format(chain=chain))
  File "/usr/lib64/python3.6/traceback.py", line 498, in __init__
    _seen=_seen)
  File "/usr/lib64/python3.6/traceback.py", line 509, in __init__
    capture_locals=capture_locals)
  File "/usr/lib64/python3.6/traceback.py", line 338, in extract
    if limit >= 0:
TypeError: '>=' not supported between instances of 'VolumeIsBusy' and 'int'
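For anyone hitting the same thing: the rbd.ImageBusy above means Ceph still sees a watcher on the image. The pod deletion only hot-unplugged the disk on the oVirt side; /dev/rbd0 was never unmapped inside the VM acting as the Kubernetes node, so the volume cannot be removed. A rough manual workaround is sketched below; this is not an official procedure, the pool name and PVC name are placeholders, and the image name is only assumed to follow cinderlib's usual volume-<volume id> naming.

    # On the Kubernetes worker node (the oVirt VM) that ran the pod: list leftover RBD mappings
    rbd showmapped
    # Unmap the stale device reported there (in this case /dev/rbd0)
    rbd unmap /dev/rbd0
    # From any Ceph client, check that no watchers remain on the image
    rbd status <pool>/volume-63a64445-1659-4d5f-8847-e7266e64b09e
    # Then retry the deletion (or let the provisioner retry it)
    kubectl delete pvc <pvc-name>

As a side note, the final TypeError is a separate problem in cinderlib-client.py's error reporting: traceback.format_exc() takes an optional limit argument, not an exception object, so the attempt to log the VolumeIsBusy failure itself blows up.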

Hey there,

The CSI driver in this repository is built with OpenShift in mind, and does not have an upstream that is intended to work with vanilla Kubernetes. Even if you get it to work now, it may break in the future.

We have had some discussions around reviving the upstream for the CSI driver, which is located here: https://github.com/oVirt/csi-driver

We are also doing a fair bit of work which will make the process easier here: https://github.com/oVirt/go-ovirt-client

However, at this time we haven't made any progress that would be useful to you. With that in mind, I would recommend reporting possible bugs either on GitHub or on Bugzilla; that way they will reach us quicker, even if we officially don't support vanilla Kubernetes.

Cheers,
Janos

On Mon, Aug 23, 2021 at 12:57 AM <ssarang520@gmail.com> wrote: