Has anyone had a chance to look at this problem?
It seems that it may be related to another problem we encountered when
trying to import a VM with qcow2 disks that had first been sparsified
(virt-sparsify) from a KVM/libvirt provider to an iSCSI storage domain.
In those cases, we get an error that the import failed, and in the import
log we see a similar "qemu-img: error while writing at byte xxx: No
space left on device"
Obviously, it is not a storage space problem as in both situations we are
using an iSCSI LUN with ample free space.
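(For reference, a quick way to show that this is about allocation rather
than actual free space is to compare a volume's virtual size with its
on-disk size; the path below is just a placeholder for one of our volumes:

  # prints both "virtual size" and "disk size" (the actual allocation)
  qemu-img info /rhev/data-center/mnt/<nfs-server>:<export-path>/<sd-uuid>/images/<img-uuid>/<vol-uuid>

For a sparse disk, the virtual size can be far larger than what is
actually allocated.)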
Best regards,
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
On Thu, Jul 21, 2022 at 11:49 PM Guillaume Pavese
<guillaume.pavese@interactiv-group.com> wrote:
On a 4.5.1 DC, I imported a VM and its disk from an old 4.3 DC (through
an export domain, if that's relevant). The DC/Cluster compatibility level
is 4.7 and the VM was upgraded to it:
"Original custom compatibility version 4.3 of imported VM xxx is not
supported. Changing it to the lowest supported version: 4.7."
The disk is raw and sparse:
<format>raw</format>
<sparse>true</sparse>
I initially put the VM's disks on an NFS storage domain, but I want to
move them to an iSCSI one.
However, after copying data for a while, the task fails with "User has
failed to move disk VM-TEMPLATE-COS7_Disk1 to domain iSCSI-STO-FR-301",
and in engine.log:
qemu-img: error while writing at byte xxx: No space left on device
2022-07-21 08:58:23,240+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHostJobsVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48)
[65fed1dc-e33b-471e-bc49-8b9662400e5f] FINISH, GetHostJobsVDSCommand, return:
{0aa2d519-8130-4e2f-bc4f-892e5f7b5206=HostJobInfo:{id='0aa2d519-8130-4e2f-bc4f-892e5f7b5206',
type='storage', description='copy_data', status='failed', progress='79',
error='VDSError:{code='GeneralException', message='General Exception:
("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none',
'-f', 'raw', '-O', 'qcow2', '-o', 'compat=1.1',
'/rhev/data-center/mnt/svc-int-prd-sto-fr-301.hostics.fr:_volume1_ovirt-int-2_data/1ce95c4a-2ec5-47b7-bd24-e540165c6718/images/d3c33cc7-f2c3-4613-84d0-d3c9fa3d5ebd/2c4a0041-b18b-408f-9c0d-971c19a552ea',
'/rhev/data-center/mnt/blockSD/b5dc9c01-3749-4326-99c5-f84f683190bd/images/d3c33cc7-f2c3-4613-84d0-d3c9fa3d5ebd/2c4a0041-b18b-408f-9c0d-971c19a552ea']
failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while writing at
byte 13639873536: No space left on device\\n')",)'}'}}, log id: 73f77495
2022-07-21 08:58:23,241+02 INFO
[org.ovirt.engine.core.bll.StorageJobCallback]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48)
[65fed1dc-e33b-471e-bc49-8b9662400e5f] Command CopyData id:
'521bdf57-8379-40ce-a682-af859fb0cad7': job
'0aa2d519-8130-4e2f-bc4f-892e5f7b5206' execution was completed with VDSM
job status 'failed'
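(For anyone looking into this: on a block storage domain, the destination
volume is an LV in a VG named after the storage domain UUID, so a check
along these lines, with the UUIDs taken from the log above, should show
how large the LV was when qemu-img ran out of space at byte 13639873536
(~12.7 GiB) — assuming vdsm has not already removed the failed volume:

  # hypothetical check: list the destination LV's size in bytes
  lvs --units b -o lv_name,lv_size b5dc9c01-3749-4326-99c5-f84f683190bd | grep 2c4a0041-b18b-408f-9c0d-971c19a552ea
)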
I do want the conversion from raw/sparse to qcow2/sparse to happen, as I
want to enable incremental backup.
I think it may fail because the virtual size is bigger than the initial
size, as I believe someone explained on this list earlier. Can anybody
confirm?
It seems like a pretty common use case to support, though?
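(If it helps confirm this, I believe qemu-img can report up front how much
allocation a qcow2 copy of the source would need; this is a sketch based on
the qemu-img man page, with a placeholder path for the source volume:

  # report the allocation a qcow2 copy of the raw source would require
  qemu-img measure -O qcow2 -o compat=1.1 -f raw /rhev/data-center/mnt/<nfs-domain>/<sd-uuid>/images/<img-uuid>/<vol-uuid>

If the "required size" it prints is larger than the LV created for the
destination volume, that would match the theory above.)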
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group