Cannot delete PVC attached to pod using ovirt-csi in Kubernetes
by ssarang520@gmail.com
Hi all,
I deployed ovirt-csi in Kubernetes by applying the YAML manifests manually, using the latest version of the container image.
(https://github.com/openshift/ovirt-csi-driver-operator/tree/master/assets)
After successfully creating a PVC and a pod, I tried to delete them.
The pod is deleted, but the PVC is not. This is because deleting the pod does not unmap /dev/rbd0 attached to the oVirt VM.
How can I delete the PVC successfully?
oVirt engine version is 4.4.7.6-1.el8.
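If it helps, this is roughly how the stale mapping could be confirmed and released by hand before retrying the delete. Only a sketch: the device name comes from above, the Ceph pool name is a placeholder, and the image name assumes cinder's default volume-<id> naming.

# On the host that still holds the mapping: list rbd mappings and check the device
rbd showmapped
lsblk /dev/rbd0
# If no pod needs the volume anymore, unmap it by hand, then retry deleting the PVC
rbd unmap /dev/rbd0
# From any Ceph client: check for leftover watchers on the image
# (<pool> is a placeholder; image name assumes cinder's volume-<id> template)
rbd status <pool>/volume-63a64445-1659-4d5f-8847-e7266e64b09e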
Here is the engine log when deleting the pod:
2021-08-20 17:40:35,385+09 INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-149) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2021-08-20 17:40:35,403+09 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-149) [68ee3182] Running command: CreateUserSessionCommand internal: false.
2021-08-20 17:40:35,517+09 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-149) [68ee3182] EVENT_ID: USER_VDC_LOGIN(30), User admin@internal-authz connecting from '192.168.7.169' using session 'XfDgNkmAGnPiZahK5itLhHQTCNHZ3JwXMMzOiZrYL3C32+1TTys3xcjrAmCIKPu02hgN1sdVpfZXWd0FznaPCQ==' logged in.
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,520+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:35,663+09 INFO [org.ovirt.engine.core.bll.storage.disk.DetachDiskFromVmCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] Running command: DetachDiskFromVmCommand internal: false. Entities affected : ID: 59a7461c-72fe-4e01-86a7-c70243f31596 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2021-08-20 17:40:35,664+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] START, HotUnPlugDiskVDSCommand(HostName = host, HotPlugDiskVDSParameters:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vmId='59a7461c-72fe-4e01-86a7-c70243f31596', diskId='63a64445-1659-4d5f-8847-e7266e64b09e'}), log id: 506ff4a4
2021-08-20 17:40:35,678+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] Disk hot-unplug: <?xml version="1.0" encoding="UTF-8"?><hotunplug>
<devices>
<disk>
<alias name="ua-63a64445-1659-4d5f-8847-e7266e64b09e"/>
</disk>
</devices>
</hotunplug>
2021-08-20 17:40:35,749+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnPlugDiskVDSCommand] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] FINISH, HotUnPlugDiskVDSCommand, return: , log id: 506ff4a4
2021-08-20 17:40:35,842+09 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-149) [198e2dc9-b908-474e-a395-0fe682c29af0] EVENT_ID: USER_DETACH_DISK_FROM_VM(2,018), Disk pvc-9845a0ff-e94c-497c-8c65-fc6a1e26db20 was successfully detached from VM centos by admin@internal-authz.
2021-08-20 17:40:35,916+09 ERROR [org.ovirt.engine.core.sso.service.SsoService] (default task-150) [] OAuthException invalid_grant: The provided authorization grant for the auth code has expired.
2021-08-20 17:40:35,917+09 ERROR [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default task-149) [] Cannot authenticate using authentication Headers: invalid_grant: The provided authorization grant for the auth code has expired.
2021-08-20 17:40:36,029+09 INFO [org.ovirt.engine.core.sso.service.AuthenticationService] (default task-149) [] User admin@internal-authz with profile [internal] successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access
2021-08-20 17:40:36,046+09 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-149) [4c4bf441] Running command: CreateUserSessionCommand internal: false.
2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:36,114+09 WARN [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-149) [] Can't find relative path for class "org.ovirt.engine.api.resource.StorageDomainVmDiskAttachmentsResource", will return null
2021-08-20 17:40:49,241+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-27) [] START, DumpXmlsVDSCommand(HostName = host, Params:{hostId='29dc5d53-7ec5-4a38-aaf1-c6eaf32b0094', vmIds='[59a7461c-72fe-4e01-86a7-c70243f31596]'}), log id: 7eb54202
2021-08-20 17:40:49,244+09 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-27) [] FINISH, DumpXmlsVDSCommand, return: {59a7461c-72fe-4e01-86a7-c70243f31596=<domain type='kvm' id='1' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
<name>centos</name>
<uuid>59a7461c-72fe-4e01-86a7-c70243f31596</uuid>
<metadata xmlns:ns1="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ns1:qos/>
<ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:balloonTarget type="int">4194304</ovirt-vm:balloonTarget>
<ovirt-vm:ballooningEnabled>true</ovirt-vm:ballooningEnabled>
<ovirt-vm:clusterVersion>4.6</ovirt-vm:clusterVersion>
<ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:memGuaranteedSize type="int">4096</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb type="int">4096</ovirt-vm:minGuaranteedMemoryMb>
<ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
<ovirt-vm:startTime type="float">1628558564.8754532</ovirt-vm:startTime>
<ovirt-vm:device alias="ua-7c9f38e9-8889-46c8-83bb-92efb9272de9" mac_address="56:6f:16:a8:00:07">
<ovirt-vm:network>ovirtmgmt</ovirt-vm:network>
<ovirt-vm:custom>
<ovirt-vm:queues>2</ovirt-vm:queues>
</ovirt-vm:custom>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID>
<ovirt-vm:guestName>/dev/sda</ovirt-vm:guestName>
<ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID>
<ovirt-vm:managed type="bool">False</ovirt-vm:managed>
<ovirt-vm:poolID>4ca6e0e8-e3a4-11eb-8830-480fcf63834f</ovirt-vm:poolID>
<ovirt-vm:volumeID>a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:volumeID>
<ovirt-vm:volumeChain>
<ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID>
<ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID>
<ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
<ovirt-vm:leasePath>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003.lease</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003</ovirt-vm:path>
<ovirt-vm:volumeID>4425404b-17f0-4519-812f-b09a952a9003</ovirt-vm:volumeID>
</ovirt-vm:volumeChainNode>
<ovirt-vm:volumeChainNode>
<ovirt-vm:domainID>6ce3b498-532e-4dc0-9e22-15d0bb24166a</ovirt-vm:domainID>
<ovirt-vm:imageID>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</ovirt-vm:imageID>
<ovirt-vm:leaseOffset type="int">0</ovirt-vm:leaseOffset>
<ovirt-vm:leasePath>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4.lease</ovirt-vm:leasePath>
<ovirt-vm:path>/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:path>
<ovirt-vm:volumeID>a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4</ovirt-vm:volumeID>
</ovirt-vm:volumeChainNode>
</ovirt-vm:volumeChain>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="sdc">
<ovirt-vm:managed type="bool">False</ovirt-vm:managed>
</ovirt-vm:device>
</ovirt-vm:vm>
</metadata>
<maxMemory slots='16' unit='KiB'>16777216</maxMemory>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static' current='2'>16</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>oVirt</entry>
<entry name='product'>RHEL</entry>
<entry name='version'>8.4-1.2105.el8</entry>
<entry name='serial'>83e66af8-0500-11e6-9c43-bc00007c0000</entry>
<entry name='uuid'>59a7461c-72fe-4e01-86a7-c70243f31596</entry>
<entry name='family'>oVirt</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-q35-rhel8.4.0'>hvm</type>
<boot dev='hd'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Skylake-Client-noTSX-IBRS</model>
<topology sockets='16' dies='1' cores='1' threads='1'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='md-clear'/>
<feature policy='disable' name='mpx'/>
<feature policy='require' name='hypervisor'/>
<numa>
<cell id='0' cpus='0-15' memory='4194304' unit='KiB'/>
</numa>
</cpu>
<clock offset='variable' adjustment='0' basis='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' error_policy='report'/>
<source startupPolicy='optional'/>
<target dev='sdc' bus='sata'/>
<readonly/>
<alias name='ua-df0ac774-3623-4868-8bd3-45c8f2aa3dc4'/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='threads'/>
<source file='/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/a93c6e11-6c7c-4efb-8c93-4ed5712ce7e4' index='8'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore type='file' index='1'>
<format type='qcow2'/>
<source file='/rhev/data-center/mnt/192.168.7.18:_home_tmax_nfs/6ce3b498-532e-4dc0-9e22-15d0bb24166a/images/bee44276-234f-4ed7-8a8a-d90a5e3cb5b3/4425404b-17f0-4519-812f-b09a952a9003'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
</backingStore>
<target dev='sda' bus='scsi'/>
<serial>bee44276-234f-4ed7-8a8a-d90a5e3cb5b3</serial>
<alias name='ua-bee44276-234f-4ed7-8a8a-d90a5e3cb5b3'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<alias name='pci.1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<alias name='pci.2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<alias name='pci.3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<alias name='pci.4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<alias name='pci.5'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0x15'/>
<alias name='pci.6'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0x16'/>
<alias name='pci.7'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='8' port='0x17'/>
<alias name='pci.8'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
</controller>
<controller type='pci' index='9' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='9' port='0x18'/>
<alias name='pci.9'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='10' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='10' port='0x19'/>
<alias name='pci.10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
</controller>
<controller type='pci' index='11' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='11' port='0x1a'/>
<alias name='pci.11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
</controller>
<controller type='pci' index='12' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='12' port='0x1b'/>
<alias name='pci.12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
</controller>
<controller type='pci' index='13' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='13' port='0x1c'/>
<alias name='pci.13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
</controller>
<controller type='pci' index='14' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='14' port='0x1d'/>
<alias name='pci.14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
</controller>
<controller type='pci' index='15' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='15' port='0x1e'/>
<alias name='pci.15'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
</controller>
<controller type='pci' index='16' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='16' port='0x1f'/>
<alias name='pci.16'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
</controller>
<controller type='pci' index='0' model='pcie-root'>
<alias name='pcie.0'/>
</controller>
<controller type='sata' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='virtio-serial' index='0' ports='16'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>
<controller type='scsi' index='0' model='virtio-scsi'>
<alias name='ua-82c49f93-c4e8-460b-bb7d-95db0e9d87a0'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</controller>
<controller type='usb' index='0' model='qemu-xhci' ports='8'>
<alias name='ua-ad56daea-edb1-45c7-a1ab-2a7db3aaeee2'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='56:6f:16:a8:00:07'/>
<source bridge='ovirtmgmt'/>
<target dev='vnet0'/>
<model type='virtio'/>
<driver name='vhost' queues='2'/>
<filterref filter='vdsm-no-mac-spoofing'/>
<link state='up'/>
<mtu size='1500'/>
<alias name='ua-7c9f38e9-8889-46c8-83bb-92efb9272de9'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channels/59a7461c-72fe-4e01-86a7-c70243f31596.ovirt-guest-agent.0'/>
<target type='virtio' name='ovirt-guest-agent.0' state='disconnected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channels/59a7461c-72fe-4e01-86a7-c70243f31596.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
<alias name='channel1'/>
<address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
<alias name='channel2'/>
<address type='virtio-serial' controller='0' bus='0' port='3'/>
</channel>
<input type='tablet' bus='usb'>
<alias name='input0'/>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'>
<alias name='input1'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input2'/>
</input>
<graphics type='vnc' port='5900' autoport='yes' listen='192.168.7.18' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'>
<listen type='network' address='192.168.7.18' network='vdsm-ovirtmgmt'/>
</graphics>
<graphics type='spice' port='5901' tlsPort='5902' autoport='yes' listen='192.168.7.18' passwdValidTo='1970-01-01T00:00:01'>
<listen type='network' address='192.168.7.18' network='vdsm-ovirtmgmt'/>
<channel name='main' mode='secure'/>
<channel name='display' mode='secure'/>
<channel name='inputs' mode='secure'/>
<channel name='cursor' mode='secure'/>
<channel name='playback' mode='secure'/>
<channel name='record' mode='secure'/>
<channel name='smartcard' mode='secure'/>
<channel name='usbredir' mode='secure'/>
</graphics>
<video>
<model type='qxl' ram='65536' vram='8192' vgamem='16384' heads='1' primary='yes'/>
<alias name='ua-799f065a-b2b9-4e37-a502-f86c7cc8dc51'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
<memballoon model='virtio'>
<stats period='5'/>
<alias name='ua-c2bfe0b9-065a-46b7-9b0b-ef7e0f699611'/>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<alias name='ua-fa110e6b-5eed-4b4b-93d8-0ac5de08aa2e'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</rng>
</devices>
<seclabel type='dynamic' model='selinux' relabel='yes'>
<label>system_u:system_r:svirt_t:s0:c437,c650</label>
<imagelabel>system_u:object_r:svirt_image_t:s0:c437,c650</imagelabel>
</seclabel>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
<qemu:capabilities>
<qemu:add capability='blockdev'/>
<qemu:add capability='incremental-backup'/>
</qemu:capabilities>
</domain>
}, log id: 7eb54202
Here is the cinderlib log on the engine when deleting the PVC:
2021-08-20 17:43:12,964 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2021-08-20 17:43:12,990 - cinderlib-client - INFO - Deleting volume '63a64445-1659-4d5f-8847-e7266e64b09e' [feefc62f-e7cb-435d-ae21-4b52b53fbdfa]
2021-08-20 17:43:28,856 - cinder.volume.drivers.rbd - WARNING - ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed.
2021-08-20 17:43:28,900 - cinderlib-client - ERROR - Failure occurred when trying to run command 'delete_volume': ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed. [feefc62f-e7cb-435d-ae21-4b52b53fbdfa]
2021-08-20 17:43:28,901 - cinder - CRITICAL - Unhandled error
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 1195, in delete_volume
_try_remove_volume(client, volume_name)
File "/usr/lib/python3.6/site-packages/cinder/utils.py", line 683, in _wrapper
return r.call(f, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 409, in call
do = self.iter(retry_state=retry_state)
File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 368, in iter
raise retry_exc.reraise()
File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 186, in reraise
raise self.last_attempt.result()
File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 425, in result
return self.__get_result()
File "/usr/lib64/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/lib/python3.6/site-packages/tenacity/__init__.py", line 412, in call
result = fn(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 1190, in _try_remove_volume
self.RBDProxy().remove(client.ioctx, volume_name)
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
rv = execute(f, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
six.reraise(c, e, tb)
File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
rv = meth(*args, **kwargs)
File "rbd.pyx", line 767, in rbd.RBD.remove
rbd.ImageBusy: [errno 16] RBD image is busy (error removing image)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./cinderlib-client.py", line 170, in main
args.command(args)
File "./cinderlib-client.py", line 218, in delete_volume
vol.delete()
File "/usr/local/lib/python3.6/site-packages/cinderlib/objects.py", line 494, in delete
self._raise_with_resource()
File "/usr/local/lib/python3.6/site-packages/cinderlib/objects.py", line 222, in _raise_with_resource
six.reraise(*exc_info)
File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/cinderlib/objects.py", line 487, in delete
self.backend.driver.delete_volume(self._ovo)
File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 1205, in delete_volume
raise exception.VolumeIsBusy(msg, volume_name=volume_name)
cinder.exception.VolumeIsBusy: ImageBusy error raised while deleting rbd volume. This may have been caused by a connection from a client that has crashed and, if so, may be resolved by retrying the delete after 30 seconds has elapsed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./cinderlib-client.py", line 390, in <module>
sys.exit(main(sys.argv[1:]))
File "./cinderlib-client.py", line 176, in main
sys.stderr.write(traceback.format_exc(e))
File "/usr/lib64/python3.6/traceback.py", line 167, in format_exc
return "".join(format_exception(*sys.exc_info(), limit=limit, chain=chain))
File "/usr/lib64/python3.6/traceback.py", line 121, in format_exception
type(value), value, tb, limit=limit).format(chain=chain))
File "/usr/lib64/python3.6/traceback.py", line 498, in __init__
_seen=_seen)
File "/usr/lib64/python3.6/traceback.py", line 509, in __init__
capture_locals=capture_locals)
File "/usr/lib64/python3.6/traceback.py", line 338, in extract
if limit >= 0:
TypeError: '>=' not supported between instances of 'VolumeIsBusy' and 'int'
3 years, 6 months
OVA Export to local storage fails
by David White
I have an unused 200GB partition that I'd like to use to copy / export / back up a few VMs onto, so I mounted it on one of my oVirt hosts as /ova-images/, and then ran "chown 36:36" on /ova-images.
From the engine, I then tried to export an OVA to that directory.
Watching the directory with "ls", I see a filename.ova.tmp eventually appear, and it grows to the size I would expect for the image... and then a few seconds later, it disappears.
What am I missing?
Here's what I see in the Event Manager inside the Engine:
Aug 21, 2021, 8:35:26 PM
Failed to export Vm server.example.org as a Virtual Appliance to path /ova-images/server.example.org.ova on Host cha2-storage.mgt.example.com
Aug 21, 2021, 8:34:33 PM
Pack OVA. Retrieving the temporary path for the OVA file.
Aug 21, 2021, 8:34:33 PM
Pack OVA. Allocating the temporary path for the OVA file.
Aug 21, 2021, 8:34:33 PM
Pack OVA. Removing the temporary file.
Aug 21, 2021, 8:34:33 PM
Pack OVA. Examine target directory.
Aug 21, 2021, 8:34:33 PM
Pack OVA. Set facts.
Aug 21, 2021, 8:34:33 PM
Pack OVA. Run import yaml on py3.
Aug 21, 2021, 8:34:21 PM
Image measure. Measure an image.
Aug 21, 2021, 8:33:56 PM
Starting to export Vm server.example.org as a Virtual Appliance
Aug 21, 2021, 8:33:52 PM
Export OVA. Examine target directory.
Aug 21, 2021, 8:33:49 PM
Export OVA. Set facts.
Aug 21, 2021, 8:33:49 PM
Export OVA. Run import yaml on py3.
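For what it's worth, these are the checks I would run next on the export host (a sketch; the log directory in the last step is an assumption on my part):

# Can vdsm (uid 36) actually create and remove a file there?
sudo -u vdsm touch /ova-images/write-test && rm /ova-images/write-test
# Ownership and SELinux context of the mount point
ls -ldZ /ova-images
# The ansible log of the pack step should contain the real error
# (directory is an assumption)
ls /var/log/ovirt-engine/ova/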
3 years, 6 months
About the network name rules on CentOS/RHEL 8 and the cloud-init network interface name
by Tommy Sway
Hi everyone!
As you all know, to use a guest's network card name in cloud-init, you must fill in the exact name of the network interface.
This was easy on version 7 and before, where the name was usually eth0.
After version 8, however, the naming conventions for network cards changed a lot. In my own test environment, for example, the name is ens3.
I'm not sure what naming convention it uses, which makes it impossible to specify NIC information using cloud-init.
Could you help me explain how to deal with this problem?
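One idea I had is to match the NIC by MAC address in a v2 network-config, so the exact kernel name never has to be known. An untested sketch (the MAC address is a placeholder):

# Untested sketch: write a cloud-init network-config (v2) that matches the
# NIC by MAC address and renames it; "aa:bb:cc:dd:ee:ff" is a placeholder.
cat > network-config <<'EOF'
version: 2
ethernets:
  nic0:
    match:
      macaddress: "aa:bb:cc:dd:ee:ff"
    set-name: nic0
    dhcp4: true
EOF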
Thank you very much!
3 years, 6 months
Backup to tape
by duparchy@esrf.fr
Hi,
As part of our Disaster Recovery Plan, we do tape backups.
Our previous infrastructure was Oracle VM, where VM disks were files (.img); we could apply filters to our tar and back up only the required files to tape.
We are now preparing the migration to the Oracle flavor of oVirt (OLVM).
VM disks are now LVM logical volumes inside iSCSI LUNs.
I've tested "dd" to back up an entire iSCSI LUN to tape. Seems OK: provided the right block size, we achieve reasonable performance.
Though I don't quite see how to get the granular backup we had in a simple manner, which is a goal too.
Although there is compression at the LTO level, the dd conv=sparse parameter may speed things up. To be tested.
I'm just wondering if conv=sparse may break things at the LVM / qcow2 layer?
3 years, 6 months
Impossible to move disk after a previous disk move failed
by James Wadsworth
This is the log from when it fails:
2021-08-23 21:24:10,667+0200 WARN (tasks/0) [storage.LVM] Command with specific filter failed or returned no data, retrying with a wider filter: LVM command failed: 'cmd=[\'/sbin/lvm\', \'lvcreate\', \'--config\', \'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/36001405299f83b19569473f9c580660c$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 use_lvmpolld=1 } backup { retain_min=50 retain_days=0 }\', \'--autobackup\', \'n\', \'--contiguous\', \'n\', \'--size\', \'40960m\', \'--wipesignatures\', \'n\', \'--addtag\', \'OVIRT_VOL_INITIALIZING\', \'--name\', \'432ceb20-efb7-4a40-8431-1b5c825a6168\', \'c23a5bef-48e0-46c7-9d5b-93c97f0240c0\'] rc=5 out=[] err=[\' Logical Volume "432ceb20-efb7-4a40-8431-1b5c825a6168" already exists in volume group "c23a5bef-48e0-46c7-9d5b-93c97f0240c0"\']' (lvm:534)
2021-08-23 21:24:10,859+0200 WARN (tasks/0) [storage.LVM] All 2 tries have failed: LVM command failed: 'cmd=[\'/sbin/lvm\', \'lvcreate\', \'--config\', \'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/36001405299f83b19569473f9c580660c$|^/dev/mapper/36001405cdf35411dd040d4121d9326d1$|^/dev/mapper/36001405df393063de6f0d4451d8a61d3$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 use_lvmpolld=1 } backup { retain_min=50 retain_days=0 }\', \'--autobackup\', \'n\', \'--contiguous\', \'n\', \'--size\', \'40960m\', \'--wipesignatures\', \'n\', \'--addtag\', \'OVIRT_VOL_INITIALIZING\', \'--name\', \'432ceb20-efb7-4a40-8431-1b5c825a6168\', \'c23a5bef-48e0-46c7-9d5b-93c97f0240c0\'] rc=5 out=[] err=[\' Logical Volume "432ceb20-efb7-4a40-8431-1b5c825a6168" already exists in volume group "c23a5bef-48e0-46c7-9d5b-93c97f0240c0"\']' (lvm:561)
2021-08-23 21:24:10,859+0200 ERROR (tasks/0) [storage.Volume] Failed to create volume /rhev/data-center/mnt/blockSD/c23a5bef-48e0-46c7-9d5b-93c97f0240c0/images/2172a4ac-6992-4cc2-be1b-6b9290bc9798/432ceb20-efb7-4a40-8431-1b5c825a6168: Cannot create Logical Volume: 'vgname=c23a5bef-48e0-46c7-9d5b-93c97f0240c0 lvname=432ceb20-efb7-4a40-8431-1b5c825a6168 err=[\' Logical Volume "432ceb20-efb7-4a40-8431-1b5c825a6168" already exists in volume group "c23a5bef-48e0-46c7-9d5b-93c97f0240c0"\']' (volume:1257)
2021-08-23 21:24:10,860+0200 ERROR (tasks/0) [storage.Volume] Unexpected error (volume:1293)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/volume.py", line 1254, in create
add_bitmaps=add_bitmaps)
File "/usr/lib/python3.6/site-packages/vdsm/storage/blockVolume.py", line 508, in _create
initialTags=(sc.TAG_VOL_UNINIT,))
File "/usr/lib/python3.6/site-packages/vdsm/storage/lvm.py", line 1633, in createLV
raise se.CannotCreateLogicalVolume(vgName, lvName, err)
vdsm.storage.exception.CannotCreateLogicalVolume: Cannot create Logical Volume: 'vgname=c23a5bef-48e0-46c7-9d5b-93c97f0240c0 lvname=432ceb20-efb7-4a40-8431-1b5c825a6168 err=[\' Logical Volume "432ceb20-efb7-4a40-8431-1b5c825a6168" already exists in volume group "c23a5bef-48e0-46c7-9d5b-93c97f0240c0"\']'
2021-08-23 21:24:10,860+0200 ERROR (tasks/0) [storage.TaskManager.Task] (Task='55a4e8dc-9408-4969-b0ba-b9a556bccba1') Unexpected error (task:877)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 884, in _run
return fn(*args, **kargs)
File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 350, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/lib/python3.6/site-packages/vdsm/storage/securable.py", line 79, in wrapper
return method(self, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 1945, in createVolume
initial_size=initialSize, add_bitmaps=addBitmaps)
File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 1216, in createVolume
initial_size=initial_size, add_bitmaps=add_bitmaps)
File "/usr/lib/python3.6/site-packages/vdsm/storage/volume.py", line 1254, in create
add_bitmaps=add_bitmaps)
File "/usr/lib/python3.6/site-packages/vdsm/storage/blockVolume.py", line 508, in _create
initialTags=(sc.TAG_VOL_UNINIT,))
File "/usr/lib/python3.6/site-packages/vdsm/storage/lvm.py", line 1633, in createLV
raise se.CannotCreateLogicalVolume(vgName, lvName, err)
vdsm.storage.exception.CannotCreateLogicalVolume: Cannot create Logical Volume: 'vgname=c23a5bef-48e0-46c7-9d5b-93c97f0240c0 lvname=432ceb20-efb7-4a40-8431-1b5c825a6168 err=[\' Logical Volume "432ceb20-efb7-4a40-8431-1b5c825a6168" already exists in volume group "c23a5bef-48e0-46c7-9d5b-93c97f0240c0"\']'
The logical volume it is trying to create already exists in the volume group. Can anyone help me get out of this situation?
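Would the right way out be to remove the stale LV by hand before retrying the move? Something like this (IDs taken from the log above; I have not run it, and on a vdsm host lvm may need a --config devices filter like the one in the log):

# Inspect the leftover LV first (tags should show OVIRT_VOL_INITIALIZING)
lvs -o +lv_tags c23a5bef-48e0-46c7-9d5b-93c97f0240c0
# Remove it so the move can recreate the volume
lvremove c23a5bef-48e0-46c7-9d5b-93c97f0240c0/432ceb20-efb7-4a40-8431-1b5c825a6168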
Thanks, James
3 years, 6 months
Re: [External] : Re: Is it possible to configure the wireless network card on the Linux host as the bridge to provide oVirt for use?
by Tommy Sway
Thank you!
From: Marcos Sungaila <marcos.sungaila(a)oracle.com>
Sent: Monday, August 23, 2021 11:39 PM
To: Tommy Sway <sz_cuitao(a)163.com>; 'wodel youchi' <wodel.youchi(a)gmail.com>
Cc: 'users' <users(a)ovirt.org>
Subject: RE: [ovirt-users] Re: [External] : Re: Is it possible to configure the wireless network card on the Linux host as the bridge to provide oVirt for use?
Tommy,
I remember I used the doc from the Debian page as a starting point and adapted it to Fedora.
I deactivated NetworkManager on the WLAN interface and prepared a number of commands to add to rc.local.
I remember I had issues with routing when using a NAT network (it was a KVM-only host, not an oVirt instance), although it worked fine with bridged connections.
I will check whether I still have a copy in my old backups. It was a long time ago.
Marcos
From: Tommy Sway <sz_cuitao(a)163.com>
Sent: Monday, 23 August 2021 12:05
To: Marcos Sungaila <marcos.sungaila(a)oracle.com>; 'wodel youchi' <wodel.youchi(a)gmail.com>
Cc: 'users' <users(a)ovirt.org>
Subject: [ovirt-users] Re: [External] : Re: Is it possible to configure the wireless network card on the Linux host as the bridge to provide oVirt for use?
Thank you!
Could you point me to a guide doc?
Moreover, my environment is only for testing and does not involve security concerns. I just want to make full use of the physical resources.
From: users-bounces(a)ovirt.org <users-bounces(a)ovirt.org> On Behalf Of Marcos Sungaila
Sent: Monday, August 23, 2021 9:13 PM
To: wodel youchi <wodel.youchi(a)gmail.com>; Tommy Sway <sz_cuitao(a)163.com>
Cc: users <users(a)ovirt.org>
Subject: [ovirt-users] Re: [External] : Re: Is it possible to configure the wireless network card on the Linux host as the bridge to provide oVirt for use?
Hi Tommy,
Two comments for your consideration:
1st: Technically, it is possible. You can configure your wireless network card as a bridge and use it as you wish. I did it on my laptop a long time ago to test KVM instances.
2nd: With security in focus, using wireless cards in servers is not recommended. Wireless connections can be attacked easily, since an attacker does not need logical access to your server.
Regards,
Marcos
From: wodel youchi <wodel.youchi(a)gmail.com>
Sent: Sunday, 22 August 2021 07:13
To: Tommy Sway <sz_cuitao(a)163.com>
Cc: users <users(a)ovirt.org>
Subject: [External] : [ovirt-users] Re: Is it possible to configure the wireless network card on the Linux host as the bridge to provide oVirt for use?
Hi,
In my limited experience, no, you can't. I tried that a while ago and it didn't work; there is an old tutorial for Debian, but it didn't work for me either.
Maybe what you can do is share your wireless connection with your wired connection (where you can create your bridge), and play with the firewall to expose the oVirt webadmin UI to your external network, along the lines of the sketch below.
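For example, with firewalld, roughly like this (the engine address is a placeholder):

# Sketch: masquerade the outgoing traffic and forward TCP 443 from the
# wireless uplink to the engine; 192.168.122.10 is a placeholder address.
firewall-cmd --zone=public --add-masquerade
firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=443:toaddr=192.168.122.10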
Regards.
On Sun, 22 Aug 2021 at 07:18, Tommy Sway <sz_cuitao(a)163.com> wrote:
Thanks!
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FRZB5SQD4NO...
3 years, 6 months