
On Sun, May 9, 2021 at 8:54 AM <j.velasco@outlook.com> wrote:
Hello List, I am facing the following issue when I try to import a VM from a KVM host into my oVirt (4.4.5.11-1.el8). The import was done through the GUI, using the KVM provider option.
-- Log1:
# cat /var/log/vdsm/import-57f84423-56cb-4187-86e2-f4208348e1f5-20210507T124121.log
[ 0.0] preparing for copy
[ 0.0] Copying disk 1/1 to /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/cb63ffc9-07ee-4323-9e8a-378be31ae3f7/e7e69cbc-47bf-4557-ae02-ca1c53c8423f
Traceback (most recent call last):
...
  File "/usr/lib64/python3.6/site-packages/ovirt_imageio/_internal/util.py", line 20, in uninterruptible
    return func(*args)
OSError: [Errno 28] No space left on device
Looks like the disk:

  /rhev/data-center/mnt/blockSD/cc9fae8e-b714-44cf-9dac-3a83a15b0455/images/cb63ffc9-07ee-4323-9e8a-378be31ae3f7/e7e69cbc-47bf-4557-ae02-ca1c53c8423f

was created with the wrong initial size. When importing VMs from libvirt I don't think we have a way to get the required allocation of the disk, so the disk must be created with initial_size=virtual_size, and this was probably not done in this case.

Please file a bug and include the full vdsm logs from the SPM host and from the host running the import, and the full engine logs. The logs should show the creation of the target disk cb63ffc9-07ee-4323-9e8a-378be31ae3f7. You can grep this UUID in the vdsm logs (/var/log/vdsm/vdsm.log*) on the SPM host and on the host running the import, and in the engine log (/var/log/ovirt-engine/engine.log) on the engine host.
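For example, the log search described above can be sketched as follows (the UUID and paths are the ones quoted in this thread; run each grep on the matching host):

```shell
# UUID of the target disk, taken from the traceback above.
uuid=cb63ffc9-07ee-4323-9e8a-378be31ae3f7

# On the SPM host and on the host that ran the import:
grep "$uuid" /var/log/vdsm/vdsm.log* 2>/dev/null || true

# On the engine host:
grep "$uuid" /var/log/ovirt-engine/engine.log 2>/dev/null || true
```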
-- Details of the environment:
# df -Ph
Filesystem                                                   Size  Used  Avail Use% Mounted on
devtmpfs                                                     32G   0     32G   0%   /dev
tmpfs                                                        32G   4.0K  32G   1%   /dev/shm
tmpfs                                                        32G   26M   32G   1%   /run
tmpfs                                                        32G   0     32G   0%   /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.5.1--0.20210323.0+1     584G  11G   573G  2%   /
/dev/mapper/onn-home                                         1014M 40M   975M  4%   /home
/dev/mapper/onn-tmp                                          1014M 40M   975M  4%   /tmp
/dev/sda2                                                    1014M 479M  536M  48%  /boot
/dev/mapper/onn-var                                          30G   3.2G  27G   11%  /var
/dev/sda1                                                    599M  6.9M  592M  2%   /boot/efi
/dev/mapper/onn-var_log                                      8.0G  498M  7.6G  7%   /var/log
/dev/mapper/onn-var_crash                                    10G   105M  9.9G  2%   /var/crash
/dev/mapper/onn-var_log_audit                                2.0G  84M   2.0G  5%   /var/log/audit
tmpfs                                                        6.3G  0     6.3G  0%   /run/user/0
/dev/mapper/da3e3aff--0bfc--42cd--944f--f6145c50134a-master  976M  1.3M  924M  1%   /rhev/data-center/mnt/blockSD/da3e3aff-0bfc-42cd-944f-f6145c50134a/master
/dev/mapper/onn-lv_iso                                       12G   11G   1.6G  88%  /rhev/data-center/mnt/_dev_mapper_onn-lv__iso
172.19.1.80:/exportdomain                                    584G  11G   573G  2%   /rhev/data-center/mnt/172.19.1.80:_exportdomain
* Inodes available = 99%.
The available space on the host is not related; the issue is creating a big enough disk when creating a sparse volume on block storage.
# qemu-img info /var/lib/libvirt/images/vm_powervp-si.qcow2
image: /var/lib/libvirt/images/vm_powervp-si.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 4.2G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: true
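To make the sizing issue concrete, here is a rough sketch of the arithmetic using the numbers reported above. The 128 MiB extent size is an assumption about how oVirt's LVM-backed block domains allocate space; the point is simply that a target sized from the source's ~4.2G allocation overflows as soon as the copy writes past it, so the safe initial size is the full virtual size:

```shell
virtual_size=21474836480          # 20 GiB, from the qemu-img info output above
extent=$((128 * 1024 * 1024))     # assumed LVM extent size on oVirt block domains

# Round the virtual size up to the extent boundary; this is the initial
# size the target volume would need to be safe for any qcow2 content.
initial_size=$(( (virtual_size + extent - 1) / extent * extent ))
echo "$initial_size"              # prints 21474836480 (20 GiB is already aligned)
```

For a more precise number, `qemu-img measure -O qcow2` on the source image reports the required allocation including qcow2 metadata overhead.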
Since you have access to the disk you want to import, you can upload it to oVirt and create a new VM with the disk, instead of importing via libvirt.

Nir
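As a sketch of that upload path: the ovirt-engine-sdk ships an upload_disk.py example script that drives the imageio upload. The path, credentials, and storage-domain name below are placeholders for this environment, and the exact options vary between SDK versions (check `upload_disk.py --help`):

```shell
# Placeholder values -- adjust for your engine and storage domain.
IMAGE=/var/lib/libvirt/images/vm_powervp-si.qcow2
ENGINE_URL=https://engine.example.com/ovirt-engine/api

# upload_disk.py ships with python3-ovirt-engine-sdk4; the install
# path may differ by version and distribution.
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
    --engine-url "$ENGINE_URL" \
    --username admin@internal \
    --password-file /root/engine-password \
    --cafile /root/ca.pem \
    --sd-name my-storage-domain \
    --disk-sparse \
    "$IMAGE"
```

Once the upload finishes, the disk can be attached to a new VM from the Administration Portal.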