Importing VM fails with "No space left on device"

Hello List, I am facing the following issue when I try to import a VM from a KVM host to my oVirt (4.4.5.11-1.el8). I did the import through the GUI, using the KVM provider option.

-- Log1:

# cat /var/log/vdsm/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T124121.log
[ 0.0] preparing for copy
[ 0.0] Copying disk 1/1 to /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/cb63ffc9-07ee-4323-9e8a-378be31ae3f7/e7e69cbc-47bf-4557-ae02-ca1c53c8423f
Traceback (most recent call last):
  File "/usr/libexec/vdsm/kvm2ovirt", line 23, in <module>
    kvm2ovirt.main()
  File "/usr/lib/python3.6/site-packages/vdsm/kvm2ovirt.py", line 277, in main
    handle_volume(con, diskno, src, dst, options)
  File "/usr/lib/python3.6/site-packages/vdsm/kvm2ovirt.py", line 228, in handle_volume
    download_disk(sr, estimated_size, None, dst, options.bufsize)
  File "/usr/lib/python3.6/site-packages/vdsm/kvm2ovirt.py", line 169, in download_disk
    op.run()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 57, in run
    res = self._run()
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 163, in _run
    self._write_chunk(count)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/ops.py", line 188, in _write_chunk
    n = self._dst.write(v)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/backends/file.py", line 88, in write
    return util.uninterruptible(self._fio.write, buf)
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/util.py", line 20, in uninterruptible
    return func(*args)
OSError: [Errno 28] No space left on device

-- Log2:

# cat /var/log/vdsm/vdsm.log
2021-05-07 10:29:49,813-0500 DEBUG (v2v/57f84423) [root] START thread <Thread(v2v/57f84423, started daemon 140273162123008)> (func=<bound method ImportVm._run of <vdsm.v2v.ImportVm object at 0x7f946051c5c0>>, args=(), kwargs={}) (concurrent:258)
2021-05-07 10:29:49,813-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' starting import (v2v:880)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') moving from state preparing -> state preparing (task:624)
2021-05-07 10:29:49,814-0500 INFO (v2v/57f84423) [vdsm.api] START prepareImage(sdUUID='cc9fae8e-b714-44cf-9dac-3a83a15b0455', spUUID='24d9d2fa-98f9-11eb-aea7-00163e09cc71', imgUUID='226cc137-1992-4246-9484-80a1bfb5e9f7', leafUUID='847bc460-1b54-4756-8ced-4b969c399900', allowIllegal=False) from=internal, task_id=58a7bdc0-0f7e-4307-92ba-040f1a272721 (api:48)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to register resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' for lock type 'shared' (resourceManager:474)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free.
Now locking as 'shared' (1 active user) (resourceManager:531)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Request] (ResName='00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', ReqID='b2ebef3c-8b3d-4429-b9da-b6f3af2c9ac4') Granted request (resourceManager:221)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') _resourcesAcquired: 00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455 (shared) (task:856)
2021-05-07 10:29:49,814-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') ref 1 aborting False (task:1008)
2021-05-07 10:29:49,815-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cc9fae8e-b714-44cf-9dac-3a83a15b0455 (cwd None) (commands:153)
2021-05-07 10:29:49,900-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:29:49,902-0500 DEBUG (v2v/57f84423) [storage.LVM] lvs reloaded (lvm:759)
2021-05-07 10:29:49,904-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=2096 bs=512 if=/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/metadata count=1 (cwd None) (commands:211)
2021-05-07 10:29:49,916-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000228452 s, 2.2 MB/s\n'; <rc> = 0 (commands:224)
2021-05-07 10:29:49,916-0500 DEBUG (v2v/57f84423) [storage.Misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000228452 s, 2.2 MB/s'], size: 512 (misc:114)
2021-05-07 10:29:49,917-0500 INFO (v2v/57f84423) [storage.LVM] Activating lvs: vg=cc9fae8e-b714-44cf-9dac-3a83a15b0455 lvs=['847bc460-1b54-4756-8ced-4b969c399900'] (lvm:1738)
2021-05-07 10:29:49,917-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvchange --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --autobackup n --available y cc9fae8e-b714-44cf-9dac-3a83a15b0455/847bc460-1b54-4756-8ced-4b969c399900 (cwd None) (commands:153)
2021-05-07 10:29:50,034-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:29:50,035-0500 INFO (v2v/57f84423) [storage.StorageDomain] Creating image run directory '/run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7' (blockSD:1362)
2021-05-07 10:29:50,035-0500 INFO (v2v/57f84423) [storage.fileUtils] Creating directory: /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7 mode: None (fileUtils:201)
2021-05-07 10:29:50,036-0500 INFO (v2v/57f84423)
[storage.StorageDomain] Creating symlink from /dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/847bc460-1b54-4756-8ced-4b969c399900 to /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900 (blockSD:1367)
2021-05-07 10:29:50,037-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cc9fae8e-b714-44cf-9dac-3a83a15b0455 (cwd None) (commands:153)
2021-05-07 10:29:50,119-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:29:50,121-0500 DEBUG (v2v/57f84423) [storage.LVM] lvs reloaded (lvm:759)
2021-05-07 10:29:50,122-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=2096 bs=512 if=/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/metadata count=1 (cwd None) (commands:211)
2021-05-07 10:29:50,135-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000212608 s, 2.4 MB/s\n'; <rc> = 0 (commands:224)
2021-05-07 10:29:50,135-0500 DEBUG (v2v/57f84423) [storage.Misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000212608 s, 2.4 MB/s'], size: 512 (misc:114)
2021-05-07 10:29:50,136-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] /usr/bin/taskset --cpu-list 0-23 /usr/bin/dd iflag=direct skip=2096 bs=512 if=/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/metadata count=1 (cwd None) (commands:211)
2021-05-07 10:29:50,143-0500 DEBUG (v2v/57f84423) [storage.Misc.excCmd] SUCCESS: <err> = b'1+0 records in\n1+0 records out\n512 bytes copied, 0.000248951 s, 2.1 MB/s\n'; <rc> = 0 (commands:224)
2021-05-07 10:29:50,143-0500 DEBUG (v2v/57f84423) [storage.Misc] err: [b'1+0 records in', b'1+0 records out', b'512 bytes copied, 0.000248951 s, 2.1 MB/s'], size: 512 (misc:114)
2021-05-07 10:29:50,143-0500 DEBUG (v2v/57f84423) [root] /usr/bin/taskset --cpu-list 0-23 /usr/bin/qemu-img info --output json -U /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900 (cwd None) (commands:211)
2021-05-07 10:29:50,161-0500 DEBUG (v2v/57f84423) [root] SUCCESS: <err> = b''; <rc> = 0 (commands:224)
2021-05-07 10:29:50,162-0500 INFO (v2v/57f84423) [storage.StorageDomain] Creating symlink from /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7 to /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7 (blockSD:1332)
2021-05-07 10:29:50,162-0500 DEBUG (v2v/57f84423) [storage.StorageDomain] path to image directory already exists: /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7 (blockSD:1338)
2021-05-07 10:29:50,163-0500 INFO (v2v/57f84423) [vdsm.api] FINISH prepareImage return={'path':
'/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'info': {'type': 'block', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900'}, 'imgVolumesInfo': [{'domainID': 'cc9fae8e-b714-44cf-9dac-3a83a15b0455', 'imageID': '226cc137-1992-4246-9484-80a1bfb5e9f7', 'volumeID': '847bc460-1b54-4756-8ced-4b969c399900', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'leasePath': '/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/leases', 'leaseOffset': 108003328}]} from=internal, task_id=58a7bdc0-0f7e-4307-92ba-040f1a272721 (api:54)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') finished: {'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'info': {'type': 'block', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900'}, 'imgVolumesInfo': [{'domainID': 'cc9fae8e-b714-44cf-9dac-3a83a15b0455', 'imageID': '226cc137-1992-4246-9484-80a1bfb5e9f7', 'volumeID': '847bc460-1b54-4756-8ced-4b969c399900', 'path': '/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900', 'leasePath': '/dev/cc9fae8e-b714-44cf-9dac-3a83a15b0455/leases', 'leaseOffset': 108003328}]} (task:1210)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') moving from state finished -> state finished (task:624)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Owner] Owner.releaseAll resources %s (resourceManager:742)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to release resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (resourceManager:546)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Released resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (0 active users) (resourceManager:564)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free, finding out if anyone is waiting for it. (resourceManager:570)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] No one is waiting for resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', Clearing records.
(resourceManager:578)
2021-05-07 10:29:50,163-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='58a7bdc0-0f7e-4307-92ba-040f1a272721') ref 0 aborting False (task:1008)
2021-05-07 10:29:50,164-0500 INFO (v2v/57f84423) [root] Storing import log at: '/var/log/vdsm/import/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T102950.log' (v2v:436)
2021-05-07 10:29:50,170-0500 DEBUG (v2v/57f84423) [root] /usr/bin/taskset --cpu-list 0-23 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/libexec/vdsm/kvm2ovirt --uri qemu+tcp://root@172.16.0.61/system --bufsize 1048576 --source /var/lib/libvirt/images/vm_powervp-si.qcow2 --dest /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/226cc137-1992-4246-9484-80a1bfb5e9f7/847bc460-1b54-4756-8ced-4b969c399900 --storage-type volume --vm-name vm_powervp-si --allocation sparse (cwd None) (v2v:1511)
2021-05-07 10:29:50,175-0500 DEBUG (v2v/57f84423) [root] /usr/bin/taskset --cpu-list 0-23 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 tee /var/log/vdsm/import/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T102950.log (cwd None) (v2v:1511)
2021-05-07 10:29:50,274-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copying disk 1/1 (v2v:912)
2021-05-07 10:29:50,277-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 0/100 (v2v:921)
2021-05-07 10:29:51,277-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 0/100 (v2v:921)
2021-05-07 10:29:52,277-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 0/100 (v2v:921)
2021-05-07 10:30:14,283-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 10/100 (v2v:921)
2021-05-07 10:30:15,283-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 10/100 (v2v:921)
2021-05-07 10:30:16,283-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 10/100 (v2v:921)
2021-05-07 10:30:39,288-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 20/100 (v2v:921)
2021-05-07 10:30:40,288-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 20/100 (v2v:921)
2021-05-07 10:30:46,281-0500 INFO (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' copy disk 1 progress 100/100 (v2v:921)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') moving from state preparing -> state preparing (task:624)
2021-05-07 10:30:46,407-0500 INFO (v2v/57f84423) [vdsm.api] START teardownImage(sdUUID='cc9fae8e-b714-44cf-9dac-3a83a15b0455', spUUID='24d9d2fa-98f9-11eb-aea7-00163e09cc71', imgUUID='226cc137-1992-4246-9484-80a1bfb5e9f7', volUUID=None) from=internal, task_id=ecbcd983-2f33-45a4-b962-7fc4b9342822 (api:48)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Trying to register resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' for lock type 'shared' (resourceManager:474)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free.
Now locking as 'shared' (1 active user) (resourceManager:531)
2021-05-07 10:30:46,407-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Request] (ResName='00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', ReqID='1744a1a5-d543-4528-be79-c752bce08263') Granted request (resourceManager:221)
2021-05-07 10:30:46,408-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') _resourcesAcquired: 00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455 (shared) (task:856)
2021-05-07 10:30:46,408-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') ref 1 aborting False (task:1008)
2021-05-07 10:30:46,408-0500 INFO (v2v/57f84423) [storage.StorageDomain] Removing image run directory '/run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7' (blockSD:1386)
2021-05-07 10:30:46,408-0500 INFO (v2v/57f84423) [storage.fileUtils] Removing directory: /run/vdsm/storage/cc9fae8e-b714-44cf-9dac-3a83a15b0455/226cc137-1992-4246-9484-80a1bfb5e9f7 (fileUtils:182)
2021-05-07 10:30:46,408-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvs --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cc9fae8e-b714-44cf-9dac-3a83a15b0455 (cwd None) (commands:153)
2021-05-07 10:30:46,510-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:30:46,511-0500 DEBUG (v2v/57f84423) [storage.LVM] lvs reloaded (lvm:759)
2021-05-07 10:30:46,512-0500 INFO (v2v/57f84423) [storage.LVM] Deactivating lvs: vg=cc9fae8e-b714-44cf-9dac-3a83a15b0455 lvs=['847bc460-1b54-4756-8ced-4b969c399900'] (lvm:1746)
2021-05-07 10:30:46,512-0500 DEBUG (v2v/57f84423) [common.commands] /usr/bin/taskset --cpu-list 0-23 /usr/bin/sudo -n /sbin/lvm lvchange --config 'devices { preferred_names=["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter=["a|^/dev/mapper/360060e8007dfc8000030dfc80000113f$|", "r|.*|"] hints="none" obtain_device_list_from_udev=0 } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min=50 retain_days=0 }' --autobackup n --available n cc9fae8e-b714-44cf-9dac-3a83a15b0455/847bc460-1b54-4756-8ced-4b969c399900 (cwd None) (commands:153)
2021-05-07 10:30:46,629-0500 DEBUG (v2v/57f84423) [common.commands] SUCCESS: <err> = b''; <rc> = 0 (commands:185)
2021-05-07 10:30:46,630-0500 INFO (v2v/57f84423) [vdsm.api] FINISH teardownImage return=None from=internal, task_id=ecbcd983-2f33-45a4-b962-7fc4b9342822 (api:54)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') finished: None (task:1210)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') moving from state finished -> state finished (task:624)
2021-05-07 10:30:46,630-0500 DEBUG (v2v/57f84423) [storage.ResourceManager.Owner] Owner.releaseAll resources %s (resourceManager:742)
2021-05-07 10:30:46,630-0500 DEBUG
(v2v/57f84423) [storage.ResourceManager] Trying to release resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (resourceManager:546)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Released resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' (0 active users) (resourceManager:564)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] Resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455' is free, finding out if anyone is waiting for it. (resourceManager:570)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.ResourceManager] No one is waiting for resource '00_storage.cc9fae8e-b714-44cf-9dac-3a83a15b0455', Clearing records. (resourceManager:578)
2021-05-07 10:30:46,631-0500 DEBUG (v2v/57f84423) [storage.TaskManager.Task] (Task='ecbcd983-2f33-45a4-b962-7fc4b9342822') ref 0 aborting False (task:1008)
2021-05-07 10:30:46,631-0500 ERROR (v2v/57f84423) [root] Job '57f84423-56cb-4187-86e2-f4208348e1f5' failed (v2v:869)
2021-05-07 10:30:46,635-0500 DEBUG (v2v/57f84423) [root] FINISH thread <Thread(v2v/57f84423, stopped daemon 140273162123008)> (concurrent:261)

-- Details of the environment:

# df -Ph
Filesystem Size Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 4.0K 32G 1% /dev/shm
tmpfs 32G 26M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.5.1--0.20210323.0+1 584G 11G 573G 2% /
/dev/mapper/onn-home 1014M 40M 975M 4% /home
/dev/mapper/onn-tmp 1014M 40M 975M 4% /tmp
/dev/sda2 1014M 479M 536M 48% /boot
/dev/mapper/onn-var 30G 3.2G 27G 11% /var
/dev/sda1 599M 6.9M 592M 2% /boot/efi
/dev/mapper/onn-var_log 8.0G 498M 7.6G 7% /var/log
/dev/mapper/onn-var_crash 10G 105M 9.9G 2% /var/crash
/dev/mapper/onn-var_log_audit 2.0G 84M 2.0G 5% /var/log/audit
tmpfs 6.3G 0 6.3G 0% /run/user/0
/dev/mapper/da3e3aff--0bfc--42cd--944f--f6145c50134a-master 976M 1.3M 924M 1% /rhev/data-center/mnt/blockSD/da3e3aff-0bfc-42cd-944f-f6145c50134a/master
/dev/mapper/onn-lv_iso 12G 11G 1.6G 88% /rhev/data-center/mnt/_dev_mapper_onn-lv__iso
172.19.1.80:/exportdomain 584G 11G 573G 2% /rhev/data-center/mnt/172.19.1.80:_exportdomain

* Inodes available = 99%.

# qemu-img info /var/lib/libvirt/images/vm_powervp-si.qcow2
image: /var/lib/libvirt/images/vm_powervp-si.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 4.2G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true

On Sun, May 9, 2021 at 8:54 AM <j.velasco@outlook.com> wrote:
> Hello List, I am facing the following issue when I try to import a VM from a KVM host to my oVirt (4.4.5.11-1.el8). I did the import through the GUI, using the KVM provider option.
> -- Log1:
> # cat /var/log/vdsm/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T124121.log
> [ 0.0] preparing for copy
> [ 0.0] Copying disk 1/1 to /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/cb63ffc9-07ee-4323-9e8a-378be31ae3f7/e7e69cbc-47bf-4557-ae02-ca1c53c8423f
> Traceback (most recent call last):
> ...
>   File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/util.py", line 20, in uninterruptible
>     return func(*args)
> OSError: [Errno 28] No space left on device
Looks like the disk

/rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/cb63ffc9-07ee-4323-9e8a-378be31ae3f7/e7e69cbc-47bf-4557-ae02-ca1c53c8423f

was created with the wrong initial size. When importing VMs from libvirt I don't think we have a way to get the required allocation of the disk, so the disk must be created with initial_size=virtual_size, and this was probably not done in this case.

Please file a bug and include full vdsm logs from the SPM host and from the host running the import, and full engine logs. The logs should show the creation of the target disk cb63ffc9-07ee-4323-9e8a-378be31ae3f7. You can grep this UUID in the vdsm logs (/var/log/vdsm/vdsm.log*) on the SPM host and on the host running the import, and in the engine log (/var/log/ovirt-engine/engine.log) on the engine host.
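For example, the grep could look like this; the rotated-log file names and compression below are an assumption, so adjust them to what is actually present on your hosts:

# Hypothetical example; the UUID is the target disk from the import log above.
DISK_UUID=cb63ffc9-07ee-4323-9e8a-378be31ae3f7

# On the SPM host and on the host that ran the import:
grep "$DISK_UUID" /var/log/vdsm/vdsm.log
xzgrep "$DISK_UUID" /var/log/vdsm/vdsm.log.*.xz

# On the engine host:
grep "$DISK_UUID" /var/log/ovirt-engine/engine.log*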
> -- Details of the environment:
> # df -Ph
> Filesystem Size Used Avail Use% Mounted on
> devtmpfs 32G 0 32G 0% /dev
> tmpfs 32G 4.0K 32G 1% /dev/shm
> tmpfs 32G 26M 32G 1% /run
> tmpfs 32G 0 32G 0% /sys/fs/cgroup
> /dev/mapper/onn-ovirt--node--ng--4.4.5.1--0.20210323.0+1 584G 11G 573G 2% /
> /dev/mapper/onn-home 1014M 40M 975M 4% /home
> /dev/mapper/onn-tmp 1014M 40M 975M 4% /tmp
> /dev/sda2 1014M 479M 536M 48% /boot
> /dev/mapper/onn-var 30G 3.2G 27G 11% /var
> /dev/sda1 599M 6.9M 592M 2% /boot/efi
> /dev/mapper/onn-var_log 8.0G 498M 7.6G 7% /var/log
> /dev/mapper/onn-var_crash 10G 105M 9.9G 2% /var/crash
> /dev/mapper/onn-var_log_audit 2.0G 84M 2.0G 5% /var/log/audit
> tmpfs 6.3G 0 6.3G 0% /run/user/0
> /dev/mapper/da3e3aff--0bfc--42cd--944f--f6145c50134a-master 976M 1.3M 924M 1% /rhev/data-center/mnt/blockSD/da3e3aff-0bfc-42cd-944f-f6145c50134a/master
> /dev/mapper/onn-lv_iso 12G 11G 1.6G 88% /rhev/data-center/mnt/_dev_mapper_onn-lv__iso
> 172.19.1.80:/exportdomain 584G 11G 573G 2% /rhev/data-center/mnt/172.19.1.80:_exportdomain
> * Inodes available = 99%.
The available space on the host is not related; the issue is creating a big enough disk when creating a sparse volume on block storage.
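To illustrate that point: on a block storage domain each volume is a logical volume in the domain's VG, so the ENOSPC above means the target LV filled up, not the host filesystems shown by df. A rough way to compare the allocated LV size with what the source image may need during the copy (a sketch only; the VG name and image path are taken from the logs above, adjust to your setup):

# On a host that sees the storage domain (the VG is named after the domain UUID):
sudo lvs --units g -o lv_name,lv_size,lv_attr cc9fae8e-b714-44cf-9dac-3a83a15b0455

# On the KVM host, the size the target may have to grow to:
qemu-img info /var/lib/libvirt/images/vm_powervp-si.qcow2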
> # qemu-img info /var/lib/libvirt/images/vm_powervp-si.qcow2
> image: /var/lib/libvirt/images/vm_powervp-si.qcow2
> file format: qcow2
> virtual size: 20G (21474836480 bytes)
> disk size: 4.2G
> cluster_size: 65536
> Format specific information:
>     compat: 1.1
>     lazy refcounts: true
Since you have access to the disk you want to import, you can upload it to oVirt and create a new VM with the disk, instead of importing via libvirt.

Nir
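As an illustration of that suggestion (not from the original reply): one way to upload from the command line is the upload_disk.py example shipped with the oVirt Python SDK. This is only a sketch; the script path and option names differ between SDK versions (check upload_disk.py --help), and the engine URL, password file, CA file, and storage domain name below are placeholders:

# Hypothetical example; replace the placeholders with your own values.
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
    --engine-url https://engine.example.com \
    --username admin@internal \
    --password-file /path/to/engine-password \
    --cafile /path/to/engine-ca.pem \
    --sd-name my-block-domain \
    --disk-format qcow2 \
    --disk-sparse \
    /var/lib/libvirt/images/vm_powervp-si.qcow2

Uploading through the Administration Portal (Storage > Disks > Upload > Start) is an alternative if the image is reachable from the machine running the browser.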
Participants (2): j.velasco@outlook.com, Nir Soffer