Libvirt ERROR cannot access backing file after importing VM from OpenStack

On Thu, May 24, 2018 at 5:05 PM Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:
Dear oVirt team,
When trying to start the imported VM, it fails with the following message:
ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-2) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 is down with error. Exit message: Cannot access backing file '/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of storage file '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba' (as uid:107, gid:107): No such file or directory.
Platform details:
oVirt SHE (self-hosted engine)
Version 4.2.2.6-1.el7.centos
GlusterFS, unmanaged by oVirt.
The VM is imported and converted from OpenStack successfully according to the log files (one WARN, related to the MAC address being outside the pool definitions):
2018-05-24 12:03:31,028+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-29) [cc5931a2-1af5-4d65-b0b3-362588db9d3f] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log id: 7f178a5e
2018-05-24 12:48:33,722+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-8) [103d56e1-7449-4853-ae50-48ee94d43d77] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log id: 3aa178c5
2018-05-24 12:48:47,291+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = aws-ovhv-01, GetVmsFromExternalProviderParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d', url='qemu+tcp://root@172.19.0.12/system', username='null', originType='KVM', namesOfVms='[instance-00000673]'}), log id: 4c445109
2018-05-24 12:48:47,318+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] FINISH, GetVmsFullInfoFromExternalProviderVDSCommand, return: [VM [instance-00000673]], log id: 4c445109
2018-05-24 12:49:20,466+02 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (default task-41) [14edb003-b4a0-4355-b3de-da2b68774fe3] Lock Acquired to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
2018-05-24 12:49:20,586+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408) [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: MAC_ADDRESS_IS_EXTERNAL(925), VM instance-00000673 has MAC address(es) fa:16:3e:74:18:50, which is/are out of its MAC pool definitions.
2018-05-24 12:49:21,021+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408) [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm instance-00000673 to Data Center AVEUNL, Cluster AWSEUOPS
2018-05-24 12:49:28,816+02 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-653407) [] Lock freed to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
2018-05-24 12:49:28,911+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConvertVmVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] START, ConvertVmVDSCommand(HostName = aws-ovhv-01, ConvertVmVDSParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d', url='qemu+tcp://root@172.19.0.12/system', username='null', vmId='1f0b608f-7cfc-4b27-a876-b5d8073011a1', vmName='instance-00000673', storageDomainId='2607c265-248c-40ad-b020-f3756454839e', storagePoolId='5a5de92c-0120-0167-03cb-00000000038a', virtioIsoPath='null', compatVersion='null', Disk0='816ac00f-ba98-4827-b5c8-42a8ba496089'}), log id: 53408517
2018-05-24 12:49:29,010+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] EVENT_ID: IMPORTEXPORT_STARTING_CONVERT_VM(1,193), Starting to convert Vm instance-00000673
2018-05-24 12:52:57,982+02 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-16) [df1d5f72-eb17-46e4-9946-20ca9809b54c] Failed to Acquire Lock to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME]', sharedLocks='[1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]'}'
2018-05-24 12:59:24,575+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [2047673e] EVENT_ID: IMPORTEXPORT_IMPORT_VM(1,152), Vm instance-00000673 was imported successfully to Data Center AVEUNL, Cluster AWSEUOPS
Then trying to start the VM fails with the following messages:
2018-05-24 13:00:32,085+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653729) [] EVENT_ID: USER_STARTED_VM(153), VM instance-00000673 was started by admin@internal-authz (Host: aws-ovhv-06).
2018-05-24 13:00:33,417+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] VM '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) moved from 'WaitForLaunch' --> 'Down'
2018-05-24 13:00:33,436+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-9) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 is down with error. Exit message: Cannot access backing file '/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of storage file '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba' (as uid:107, gid:107): No such file or directory.
2018-05-24 13:00:33,437+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] add VM '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) to rerun treatment
2018-05-24 13:00:33,455+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM instance-00000673 on Host aws-ovhv-06.
2018-05-24 13:00:33,460+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM instance-00000673 (User: admin@internal-authz).
Checking on the Gluster volume, the directory and files exist and permissions are in order:
[root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]
-rw-rw----. 1 vdsm kvm 14G May 24 12:59 8ecfcd5b-db67-4c23-9869-0e20d7553aba
-rw-rw----. 1 vdsm kvm 1.0M May 24 12:49 8ecfcd5b-db67-4c23-9869-0e20d7553aba.lease
-rw-r--r--. 1 vdsm kvm 310 May 24 12:49 8ecfcd5b-db67-4c23-9869-0e20d7553aba.meta
Then I checked the image info and noticed that the backing file entry points to a non-existent location, which does not and should not exist on the oVirt hosts:
[root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]# qemu-img info 8ecfcd5b-db67-4c23-9869-0e20d7553aba
image: 8ecfcd5b-db67-4c23-9869-0e20d7553aba
file format: qcow2
virtual size: 160G (171798691840 bytes)
disk size: 14G
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
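
For reference, qemu-img can print every layer of the chain in one go with --backing-chain; a minimal sketch, assuming the default Nova instance layout and a placeholder instance UUID, run on the source OpenStack compute node where the _base file is actually reachable:

# on the Nova compute node; <instance-uuid> is a placeholder
qemu-img info --backing-chain /var/lib/nova/instances/<instance-uuid>/disk

Run against the copy on the oVirt host, the same command fails for the same reason: only the top overlay was transferred, not its backing file.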
Can somebody advise me how to fix or address this, as I need to import 200+ VMs from OpenStack to oVirt?
Sure, this qcow2 file will not work in oVirt. I wonder how you did the import?
Nir

On Thu, May 24, 2018 at 6:06 PM Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:
Dear Nir,
Thank you for the quick reply.
OK, why will it not work?
Because the image has a backing file which is not accessible to oVirt.
I used a qemu+tcp connection, via the import method in the engine admin UI.
The images were imported and converted according to the logs, but the invalid "backing file" entry remained.
Also, I used the same method before, connecting to a plain libvirt/KVM host; import and conversion went smoothly, with no backing file left over.
The image format is qcow2, which is supported by oVirt.
What am I missing? Should I use a different method?
I guess this is not a problem on your side, but a bug on our side. Either we should block the operation that cannot work, or fix the process so we don't refer to a non-existing image. When importing we have two options:
- import the entire chain, importing all images in the chain, converting each image to an oVirt volume, and updating the backing file of each layer to point to the oVirt image;
- import the current state of the image into a new image, using either raw or qcow2, but without any backing file.
Arik, do you know why we create a qcow2 file with an invalid backing file?
Nir
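
As an illustration of the second option (a sketch only, not necessarily what the engine does), the chain can be collapsed into a single self-contained image with qemu-img convert; this assumes it is run somewhere both the overlay and the Nova _base file are readable, and the paths are hypothetical:

# read the overlay through its backing file and write one standalone qcow2
qemu-img convert -p -f qcow2 -O qcow2 \
    /var/lib/nova/instances/<instance-uuid>/disk \
    /var/tmp/instance-00000673-flat.qcow2

# confirm the result has no "backing file" line before handing it to oVirt
qemu-img info /var/tmp/instance-00000673-flat.qcow2

The first option would instead copy every layer and rewrite each layer's backing-file reference (for example with qemu-img rebase -u -b <imported-parent-path>) so that it points at the imported copy of its parent.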

Dear Nir,
I believe I understand now: the imported image is not a base image; it needs its backing file in order to work properly.
Maybe a silly move, but I tried to work around the problem by rebasing the image to remove the backing file dependency; it is clear now why I then saw "no bootable device found" when booting the imported VM.
I support your suggestion to fix the import either by importing the complete chain or by recreating the image so that it is independent of the former chain. If you decide to go this way, please let me know which issue to track and whether you need any more data from me. I still need to solve the problem of moving 200+ VMs to oVirt.
Kindly awaiting further updates.
Marko Vrgotic

Hi,
On Fri, 25 May 2018 08:11:13 +0000 "Vrgotic, Marko" <M.Vrgotic@activevideo.com> wrote:
Dear Nir, Arik and Richard,
I hope the discussion will continue somewhere where I am at least able to follow along as a watcher.
please open a bug on VDSM. This is something we need to deal with during import -- or at least prevent users from importing.
I have not seen any communication since Nir's proposal. Please, if possible, give me a way to track which direction you are leaning in.
In the meantime, as experienced engineers, do you have any suggestion for how I could work around the current problem?
Rebasing the image with qemu-img to remove the backing file did not help (the VM was able to start, but reported No Boot Device), and I now think that is because the image has a functional dependency on the base image.
What do you mean by rebasing? On which backing image did you rebase it? I'm not too familiar with OpenStack, but I'd suggest doing a 'qemu-img convert' on the disk in OpenStack to squash the backing chain into a new (and complete) image, assign this new disk to your VM and import it to oVirt.
Tomas
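For anyone hitting the same issue, a minimal sketch of what such a flattening step could look like on the OpenStack compute node; the instance disk path and output file name below are placeholders, not taken from this thread:

# Inspect the chain first; 'backing file' should point at the nova _base image.
qemu-img info --backing-chain /var/lib/nova/instances/<instance-uuid>/disk

# Read through the whole backing chain and write a single, self-contained qcow2;
# the output of 'qemu-img convert' has no backing file by default.
qemu-img convert -p -O qcow2 /var/lib/nova/instances/<instance-uuid>/disk /var/tmp/instance-00000673-flat.qcow2

# Verify that no 'backing file' line is reported before importing the result.
qemu-img info /var/tmp/instance-00000673-flat.qcow2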
Like with VMs in oVirt, where a template cannot be deleted while there are still VMs that were created from that template.
Please advise
— — — Met vriendelijke groet / Best regards,
Marko Vrgotic Sr. System Engineer ActiveVideo
Tel. +31 (0)35 677 4131 email: m.vrgotic@activevideo.com skype: av.mvrgotic.se www.activevideo.com ________________________________ From: Vrgotic, Marko Sent: Thursday, May 24, 2018 5:26:30 PM To: Nir Soffer Cc: users@ovirt.org; Richard W.M. Jones; Arik Hadas Subject: Re: [ovirt-users] Libvirt ERROR cannot access backing file after importing VM from OpenStack
Dear Nir,
I believe I understand now. The imported image is not a base image; it needs its backing file in order to work properly.
Maybe a silly move, but I tried to solve/work around the problem by rebasing the image to remove the backing-file dependency; it is clear now why I then saw "no bootable device found" when booting the imported VM.
I support your suggestion to solve the import by either importing the complete chain or recreating the image so that it is independent of the former chain.
If you decide to go this way, please let me know which issue to track and if you need any more data provided from me.
I still need to solve the problem of the 200+ VMs I want to move to oVirt.
Kindly awaiting further updates.
— — — Met vriendelijke groet / Best regards,
Marko Vrgotic Sr. System Engineer ActiveVideo
Tel. +31 (0)35 677 4131 email: m.vrgotic@activevideo.com skype: av.mvrgotic.se www.activevideo.com ________________________________ From: Nir Soffer <nsoffer@redhat.com> Sent: Thursday, May 24, 2018 5:13:47 PM To: Vrgotic, Marko Cc: users@ovirt.org; Richard W.M. Jones; Arik Hadas Subject: Re: [ovirt-users] Libvirt ERROR cannot access backing file after importing VM from OpenStack
On Thu, May 24, 2018 at 6:06 PM Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:
Dear Nir,
Thank you for quick reply.
OK, why will it not work?
Because the image has a backing file which is not accessible to oVirt.
I used qemu+tcp connection, via import method through engine admin UI.
The image was imported and converted according to the logs, yet the invalid "backing file" entry remained.
Also, I used the same method before, connecting to a plain "libvirt kvm" host; the import and conversion went smoothly, and no backing-file entry was left behind.
Image format is qcow(2) which is supported by oVirt.
What am I missing? Should I use different method?
I guess this is not a problem on your side, but a bug on our side.
Either we should block the operation that cannot work, or fix the process so we don't refer to a non-existing image.
When importing we have 2 options:
- import the entire chain, importing all images in the chain, converting each image to an oVirt volume, and updating the backing file of each layer to point to the oVirt image.
- import the current state of the image into a new image, using either raw or qcow2, but without any backing file.
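To make the first option concrete, this is roughly the qemu-img operation it would boil down to for each imported layer (a sketch only: the volume paths are invented, and in practice this would be done by VDSM itself rather than by hand):

# After copying the parent and child layers into the storage domain, repoint the
# child's backing-file reference at the imported parent. '-u' (unsafe) rewrites
# only the qcow2 header, which is acceptable here because the data was copied
# unchanged and only the recorded path needs to change.
qemu-img rebase -u -f qcow2 -F qcow2 \
    -b /rhev/data-center/mnt/.../images/<parent-image-id>/<parent-volume-id> \
    /rhev/data-center/mnt/.../images/<child-image-id>/<child-volume-id>

The second option corresponds to flattening with a plain 'qemu-img convert' into a standalone image, as sketched after Tomas's suggestion earlier in the thread.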
Arik, do you know why we create qcow2 file with invalid backing file?
Nir
Kindly awaiting your reply.
— — — Met vriendelijke groet / Best regards,
Marko Vrgotic Sr. System Engineer ActiveVideo
Tel. +31 (0)35 677 4131 email: m.vrgotic@activevideo.com skype: av.mvrgotic.se www.activevideo.com ________________________________ From: Nir Soffer <nsoffer@redhat.com> Sent: Thursday, May 24, 2018 4:09:40 PM To: Vrgotic, Marko Cc: users@ovirt.org; Richard W.M. Jones; Arik Hadas Subject: Re: [ovirt-users] Libvirt ERROR cannot access backing file after importing VM from OpenStack
On Thu, May 24, 2018 at 5:05 PM Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:
Dear oVirt team,
When trying to start imported VM, it fails with following message:
ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-2) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 is down with error. Exit message: Cannot access backing file '/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of storage file '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba' (as uid:107, gid:107): No such file or directory.
Platform details:
Ovirt SHE
Version 4.2.2.6-1.el7.centos
GlusterFS, unmanaged by oVirt.
VM is imported & converted from OpenStack, according to log files, successfully (one WARN, related to different MAC address):
2018-05-24 12:03:31,028+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-29) [cc5931a2-1af5-4d65-b0b3-362588db9d3f] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log id: 7f178a5e
2018-05-24 12:48:33,722+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-8) [103d56e1-7449-4853-ae50-48ee94d43d77] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log id: 3aa178c5
2018-05-24 12:48:47,291+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = aws-ovhv-01, GetVmsFromExternalProviderParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d', url='qemu+tcp://root@172.19.0.12/system<http://root@172.19.0.12/system>', username='null', originType='KVM', namesOfVms='[instance-00000673]'}), log id: 4c445109
2018-05-24 12:48:47,318+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] FINISH, GetVmsFullInfoFromExternalProviderVDSCommand, return: [VM [instance-00000673]], log id: 4c445109
2018-05-24 12:49:20,466+02 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (default task-41) [14edb003-b4a0-4355-b3de-da2b68774fe3] Lock Acquired to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
2018-05-24 12:49:20,586+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408) [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: MAC_ADDRESS_IS_EXTERNAL(925), VM instance-00000673 has MAC address(es) fa:16:3e:74:18:50, which is/are out of its MAC pool definitions.
2018-05-24 12:49:21,021+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408) [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm instance-00000673 to Data Center AVEUNL, Cluster AWSEUOPS
2018-05-24 12:49:28,816+02 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-653407) [] Lock freed to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
2018-05-24 12:49:28,911+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConvertVmVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] START, ConvertVmVDSCommand(HostName = aws-ovhv-01, ConvertVmVDSParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d', url='qemu+tcp://root@172.19.0.12/system<http://root@172.19.0.12/system>', username='null', vmId='1f0b608f-7cfc-4b27-a876-b5d8073011a1', vmName='instance-00000673', storageDomainId='2607c265-248c-40ad-b020-f3756454839e', storagePoolId='5a5de92c-0120-0167-03cb-00000000038a', virtioIsoPath='null', compatVersion='null', Disk0='816ac00f-ba98-4827-b5c8-42a8ba496089'}), log id: 53408517
2018-05-24 12:49:29,010+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] EVENT_ID: IMPORTEXPORT_STARTING_CONVERT_VM(1,193), Starting to convert Vm instance-00000673
2018-05-24 12:52:57,982+02 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-16) [df1d5f72-eb17-46e4-9946-20ca9809b54c] Failed to Acquire Lock to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME]', sharedLocks='[1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]'}'
2018-05-24 12:59:24,575+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [2047673e] EVENT_ID: IMPORTEXPORT_IMPORT_VM(1,152), Vm instance-00000673 was imported successfully to Data Center AVEUNL, Cluster AWSEUOPS
Than trying to start VM fails with following messages:
2018-05-24 13:00:32,085+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653729) [] EVENT_ID: USER_STARTED_VM(153), VM instance-00000673 was started by admin@internal-authz (Host: aws-ovhv-06).
2018-05-24 13:00:33,417+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] VM '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) moved from 'WaitForLaunch' --> 'Down'
2018-05-24 13:00:33,436+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-9) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 is down with error. Exit message: Cannot access backing file '/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of storage file '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba' (as uid:107, gid:107): No such file or directory.
2018-05-24 13:00:33,437+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] add VM '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) to rerun treatment
2018-05-24 13:00:33,455+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM instance-00000673 on Host aws-ovhv-06.
2018-05-24 13:00:33,460+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM instance-00000673 (User: admin@internal-authz).
Checking on the Gluster volume, directory and files exist, permissions are in order:
[root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]
-rw-rw----. 1 vdsm kvm 14G May 24 12:59 8ecfcd5b-db67-4c23-9869-0e20d7553aba
-rw-rw----. 1 vdsm kvm 1.0M May 24 12:49 8ecfcd5b-db67-4c23-9869-0e20d7553aba.lease
-rw-r--r--. 1 vdsm kvm 310 May 24 12:49 8ecfcd5b-db67-4c23-9869-0e20d7553aba.meta
Than I have checked image info, and noticed that backing file entry is pointing to non-existing location, which does and should not exist on oVirt hosts:
[root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]# qemu-img info 8ecfcd5b-db67-4c23-9869-0e20d7553aba
image: 8ecfcd5b-db67-4c23-9869-0e20d7553aba
file format: qcow2
virtual size: 160G (171798691840 bytes)
disk size: 14G
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
Can somebody advise me how to fix, address this, as I am in need of importing 200+ VMs from OpenStack to oVirt?
Sure this qcow2 file will not work in oVirt.
I wonder how you did the import?
Nir
-- Tomáš Golembiovský <tgolembi@redhat.com>

On 25/05/2018, 12:29, "Tomáš Golembiovský" <tgolembi@redhat.com> wrote:
please open a bug on VDSM. This is something we need to deal with during import -- or at least prevent users from importing.
[Marko] Where? Email to users@ovirt.org? Do you need me to provide more information than is already in this email?
If possible, go with "deal with" instead of just preventing. My team very much enjoys the oVirt platform and its functionality, and we would love to see it grow further, internally and externally.
What do you mean by rebasing? On which backing image did you rebase it?
[Marko] It was using unsafe mode - I just wanted to see what results I would get if I removed the backing file (an inexperienced move). This resulted in being able to start the VM, but ending up with No Bootable Device.
I'm not too familiar with openstack, but I'd suggest doing a 'qemu-img convert' on the disk in openstack to squash the backing chain into a new (and complete) image, assign this new disk to your VM and import it to oVirt.
[Marko] Thank you. We will test it and check the result.

On Fri, 25 May 2018 12:13:33 +0000 "Vrgotic, Marko" <M.Vrgotic@activevideo.com> wrote:
On 25/05/2018, 12:29, "Tomáš Golembiovský" <tgolembi@redhat.com> wrote:
Hi,
On Fri, 25 May 2018 08:11:13 +0000 "Vrgotic, Marko" <M.Vrgotic@activevideo.com> wrote:
> Dear Nir, Arik and Richard,
>
> I hope the discussion will continue somewhere where I am at least able to follow along as a watcher.
please open a bug on VDSM. This is something we need to deal with during import -- or at least prevent users from importing.
[Marko] Where? Email to users@ovirt.org? Do you need me to provide more information than is already in this email?
Here in Bugzilla: https://bugzilla.redhat.com/enter_bug.cgi?product=vdsm Including the info you had in your first email should be enough for now.
If possible, go with "deal with" instead of just preventing. My team very much enjoys the oVirt platform and its functionality, and we would love to see it grow further, internally and externally.
Could you please elaborate on your specific use-case here? Where did the backing image come from? Snapshots, use of templates, ...?
> I have not seen any communication since Nir's proposal. Please, if possible, give me a way to track which direction you are leaning in.
>
> In the meantime, as experienced engineers, do you have any suggestion for how I could work around the current problem?
>
> Rebasing the image with qemu-img to remove the backing file did not help (the VM was able to start, but reported No Boot Device), and I now think that is because the image has a functional dependency on the base image.
What do you mean by rebasing? On which backing image did you rebase it?
[Marko] It was using unsafe mode - I just wanted to see what results I would get if I removed the backing file (an inexperienced move). This resulted in being able to start the VM, but ending up with No Bootable Device.
I guess you ended up with a disk full of holes in it, and the boot sector was one of the missing pieces.
Tomas
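For context, a sketch of what the unsafe rebase described above amounts to (the command lines are illustrative only and assume you are in the image directory shown earlier in the thread):

# 'rebase -u -b ""' only clears the backing-file field in the qcow2 header; no data
# is copied from the old backing file. Every cluster that was never written in this
# top layer -- apparently including the boot sector here -- then reads back as zeroes.
qemu-img rebase -u -f qcow2 -b "" 8ecfcd5b-db67-4c23-9869-0e20d7553aba

# A safe rebase (without -u) onto an empty backing file would first copy the missing
# clusters out of the old backing file into the image, but that requires the old
# backing file -- the nova _base image -- to still be reachable from the host.
qemu-img rebase -f qcow2 -b "" 8ecfcd5b-db67-4c23-9869-0e20d7553aba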
I'm not too familiar with OpenStack, but I'd suggest doing a 'qemu-img convert' on the disk in OpenStack to squash the backing chain into a new (and complete) image, assign this new disk to your VM and import it to oVirt.
[Marko] Thank you. We will test it and check the result.
Tomas
-- Tomáš Golembiovský <tgolembi@redhat.com>

Apologies for the delay, I am reporting a bug at the moment and will send the bug ID afterwards.

Bug submitted: https://bugzilla.redhat.com/show_bug.cgi?id=1583176

Kind regards,
Marko

On 25/05/2018, 16:57, "Tomáš Golembiovský" <tgolembi@redhat.com> wrote:

On Fri, 25 May 2018 12:13:33 +0000 "Vrgotic, Marko" <M.Vrgotic@activevideo.com> wrote:

> > [Tomáš] Please open a bug on VDSM. This is something we need to deal with during import -- or at least prevent users from importing.
>
> [Marko] Where? Email to users@ovirt.org? Do you need me to provide more information than is already in this email?

Here in Bugzilla: https://bugzilla.redhat.com/enter_bug.cgi?product=vdsm
Including the info you had in your first email should be enough for now.

> [Marko] If possible, go with "deal with" instead of just preventing. My team very much enjoys the oVirt platform and its functionality, and we would love to see it grow further, internally and externally.

Could you please elaborate on your specific use case here? Where did the backing image come from? Snapshots, use of templates, ...?

> > > [Marko, earlier] I hope the discussion will continue somewhere where I am at least able to join as a watcher; I have not seen any communication since Nir's proposal, so please let me know which issue to track and whether you need more data from me. I believe I understand now that the imported image is not a base image but requires its backing file to work properly, and I support the suggestion to solve the import by either importing the complete chain or recreating the image so that it is independent of the former chain. In the meantime, do you have any suggestion how I could work around the current problem? Rebasing the image with qemu-img to remove the backing file did not help (the VM was able to start, but reported No Bootable Device), and I think now that is because the image has a functional dependency on the base image, much like VMs in oVirt, where a template cannot be deleted while VMs created from it still exist. I still need to move 200+ VMs from OpenStack to oVirt, so please advise.
> >
> > [Tomáš] What do you mean by rebasing? On which backing image did you rebase it?
>
> [Marko] It was done in unsafe mode - I just wanted to see what result I would get if I removed the backing file (an inexperienced move). The VM could then be started, but it ended up with No Bootable Device.

I guess you ended up with a disk full of holes in it, and the boot sector was one of the missing pieces.

> > [Tomáš] I'm not too familiar with OpenStack, but I'd suggest doing a 'qemu-img convert' on the disk in OpenStack to squash the backing chain into a new (and complete) image, assign this new disk to your VM and import it to oVirt.
>
> [Marko] Thank you. We will test it and check the result.

Tomas
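For reference, a minimal sketch of the 'qemu-img convert' approach suggested above, as it might be run on the OpenStack compute node; the paths and output file name are assumptions for illustration, not values taken from this environment:

    # Collapse the instance disk and its _base backing file into one standalone qcow2.
    # /var/lib/nova/instances/<uuid>/disk is the usual Nova instance-disk path; adjust as needed.
    qemu-img convert -p -O qcow2 \
        /var/lib/nova/instances/<uuid>/disk \
        /var/tmp/instance-00000673-standalone.qcow2

    # Verify that the result has no backing file before importing it into oVirt.
    qemu-img info /var/tmp/instance-00000673-standalone.qcow2

The resulting standalone image carries no backing-file entry, so importing it into oVirt avoids the dangling reference to /var/lib/nova/instances/_base.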

On Thu, May 24, 2018 at 6:13 PM, Nir Soffer <nsoffer@redhat.com> wrote:
> On Thu, May 24, 2018 at 6:06 PM Vrgotic, Marko <M.Vrgotic@activevideo.com> wrote:
> > Dear Nir,
> > Thank you for the quick reply.
> > OK, why will it not work?
>
> Because the image has a backing file which is not accessible to oVirt.
>
> > I used a qemu+tcp connection, via the import method in the engine admin UI.
> > The image was imported and converted according to the logs, yet the invalid "backing file" entry remained.
> > Also, I used the same method before, connecting to a plain libvirt/KVM host; import and conversion went smoothly, with no backing file left behind.
> > The image format is qcow2, which is supported by oVirt.
> > What am I missing? Should I use a different method?
>
> I guess this is not a problem on your side, but a bug on our side.
> Either we should block the operation that cannot work, or fix the process so we don't refer to a non-existing image.
> When importing we have two options:
> - import the entire chain, converting each image in the chain to an oVirt volume and updating the backing file of each layer to point to the corresponding oVirt image.
> - import the current state of the image into a new image, using either raw or qcow2, but without any backing file.
> Arik, do you know why we create a qcow2 file with an invalid backing file?

It seems to be the result of somewhat naive behavior in the kvm2ovirt module, which downloads only the top-level volume the VM uses, assuming each disk to be imported consists of a single volume. Maybe it's time to finally ask the QEMU guys to provide a way to consume the 'collapsed' form of a chain of volumes as a stream, if that's not available yet? ;) It could also boost the recently added process of exporting VMs as OVAs...
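To make the first of Nir's two options above concrete, here is a hypothetical sketch of fixing up an imported two-volume chain on the destination storage; the file names are invented and this is not what the current import code does:

    # Inspect the chain: the top layer still records the old Nova path as its backing file.
    qemu-img info --backing-chain top-volume.qcow2

    # Repoint the backing-file reference to the copied base image without rewriting data.
    # '-u' (unsafe) only rewrites the reference, so it is correct here solely because the
    # copied base has exactly the same content as the original backing file.
    qemu-img rebase -u -F qcow2 -b ./base-volume.qcow2 top-volume.qcow2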

[ Adding qemu-block ]

On 27.05.2018 at 10:36, Arik Hadas wrote:
> Maybe it's time to finally ask the QEMU guys to provide a way to consume the 'collapsed' form of a chain of volumes as a stream, if that's not available yet? ;) It could also boost the recently added process of exporting VMs as OVAs...
Not sure which operation we're talking about on the QEMU level, but generally the "collapsed" view is the normal thing because that's what guests see.

For example, if you use 'qemu-img convert', you have to pass options to specifically disable it and convert only a single layer if you want to keep using backing files instead of getting a standalone image that contains everything.

Kevin
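To illustrate the point with invented file names: collapsing is the default, and keeping a backing file is what has to be requested explicitly:

    # Default behavior: read through the entire backing chain and write one
    # standalone image that contains everything the guest sees.
    qemu-img convert -O qcow2 top.qcow2 standalone.qcow2

    # Explicit opt-out: '-B' writes only the data that differs from the given
    # backing file, producing a thin image that still depends on base.qcow2.
    qemu-img convert -O qcow2 -B base.qcow2 top.qcow2 delta.qcow2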

On Mon, May 28, 2018 at 11:25 AM, Kevin Wolf <kwolf@redhat.com> wrote:
> Not sure which operation we're talking about on the QEMU level, but generally the "collapsed" view is the normal thing because that's what guests see.
> For example, if you use 'qemu-img convert', you have to pass options to specifically disable it and convert only a single layer if you want to keep using backing files instead of getting a standalone image that contains everything.
Yeah, some context was missing, sorry about that. Let me briefly demonstrate the flow for OVA. Say we have a VM that is based on a template and has one disk and one snapshot, so its volume chain is T -> S -> V (V is the volume the VM writes to, S is the backing file of V, and T is the backing file of S). When exporting that VM to an OVA file we want the produced tar file to contain: (1) the OVF configuration, and (2) a single disk volume (preferably qcow2).

So we need to collapse T, S and V into a single volume. Sure, we can do 'qemu-img convert'; that's what we do now in oVirt 4.2:
(a) qemu-img convert produces a 'temporary' collapsed volume,
(b) we make a tar file of the OVF configuration and that 'temporary' volume,
(c) we delete the temporary volume.

But the fact that we produce that 'temporary' volume obviously slows down the entire operation. It would be much better if we could "open" a stream over the 'collapsed' form of that chain and feed it directly into the appropriate tar file entry, without extra writes to the storage device. A few months ago people from the oVirt storage team checked the qemu toolset and replied that this capability is not yet provided, so we implemented the workaround described above. Apparently the desired ability can also be useful for the flow discussed in this thread, so it is worth asking for it again :)
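For illustration, a minimal sketch of that 4.2-style workaround with invented paths and file names (it is not the actual oVirt code):

    # (a) Collapse the chain T -> S -> V into one temporary standalone volume.
    qemu-img convert -O qcow2 /path/to/V.qcow2 /tmp/ova-export/disk.qcow2

    # (b) Pack the OVF configuration and the collapsed volume into the OVA (a tar file),
    #     assuming vm.ovf has already been written to the same directory.
    tar -C /tmp/ova-export -cf /tmp/myvm.ova vm.ovf disk.qcow2

    # (c) Delete the temporary collapsed volume.
    rm /tmp/ova-export/disk.qcow2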

On 28.05.2018 at 12:27, Arik Hadas wrote:
> A few months ago people from the oVirt storage team checked the qemu toolset and replied that this capability is not yet provided, so we implemented the workaround described above. Apparently the desired ability can also be useful for the flow discussed in this thread, so it is worth asking for it again :)
I think real streaming is unlikely to happen because most image formats that QEMU supports aren't made that way. If there is a compelling reason, we can consider it, but it would work only with very few target formats and as such would have to be separate from existing commands.

As for OVA files, I think it might be useful to have a tar block driver instead, which would allow you to open a file inside a tar archive (you could then also directly run an OVA without extracting it first). We probably wouldn't be able to support resizing images there, but that should be okay.

If you can create a tar file that reserves space for the image file without actually writing it, a possible workaround today would be using the offset/size runtime options of the raw driver to convert directly into a region inside the tar archive.

Kevin
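A rough sketch of that offset/size workaround, assuming the tar archive has already been created with a placeholder entry large enough for the raw disk image; the offset and size values below are illustrative placeholders, not measured values:

    # Convert the collapsed chain directly into the reserved region inside the
    # existing tar archive, avoiding a temporary file. '-n' skips creation of the
    # target, and '--target-image-opts' lets us pass the raw driver's offset/size
    # options that select the region inside vm.ova.
    qemu-img convert -n --target-image-opts \
        top-volume.qcow2 \
        'driver=raw,offset=10240,size=171798691840,file.driver=file,file.filename=vm.ova'

The offset must point at the start of the placeholder entry's data, and the size must be at least the disk's virtual size (171798691840 bytes, i.e. 160G, in the example above).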

On Mon, 28 May 2018 13:37:59 +0200 Kevin Wolf <kwolf@redhat.com> wrote:
Am 28.05.2018 um 12:27 hat Arik Hadas geschrieben:
On Mon, May 28, 2018 at 11:25 AM, Kevin Wolf <kwolf@redhat.com> wrote:
[ Adding qemu-block ]
On Thu, May 24, 2018 at 6:13 PM, Nir Soffer <nsoffer@redhat.com> wrote:
On Thu, May 24, 2018 at 6:06 PM Vrgotic, Marko < M.Vrgotic@activevideo.com> wrote:
Dear Nir,
Thank you for quick reply.
Ok, why it will not work?
Because the image has a backing file which is not accessible to oVirt.
I used a qemu+tcp connection, via the import method in the engine admin UI.
The images were imported and converted according to the logs, but the invalid “backing file” entry remained.
Also, I used the same method before, connecting to a plain “libvirt kvm” host; the import and conversion went smoothly, with no backing file left over.
The image format is qcow(2), which is supported by oVirt.
What am I missing? Should I use a different method?
I guess this is not a problem on your side, but a bug on our side.
Either we should block the operation that cannot work, or fix the imported image so we don't refer to a non-existing image.
When importing we have 2 options:
- import the entire chain, importing all images in the chain, converting each image to oVirt volume, and updating the backing file of each layer to point to the oVirt image.
- import the current state of the image into a new image, using either raw or qcow2, but without any backing file.
Arik, do you know why we create qcow2 file with invalid backing file?
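Illustrating the second option above, this is roughly what it looks like done by hand on a host that can still reach the whole chain (paths are hypothetical):
$ qemu-img info --backing-chain /path/to/top.qcow2                         # lists every volume in the chain
$ qemu-img convert -O qcow2 /path/to/top.qcow2 /path/to/standalone.qcow2   # collapse into one self-contained image
$ qemu-img info /path/to/standalone.qcow2                                  # no 'backing file:' line any more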
I think real streaming is unlikely to happen because most image formats that QEMU supports aren't made that way. If there is a compelling reason, we can consider it, but it would work only with very few target formats and as such would have to be separate from existing commands.
As for OVA files, I think it might be useful to have a tar block driver instead which would allow you to open a file inside a tar archive (you could then also directly run an OVA without extracting it first). We probably wouldn't be able to support resizing images there, but that should be okay.
That's something you can do with the offset/size options too. In fact, that was the main reason for adding them, so that virt-v2v can convert OVAs without extracting them. Having a layer that understands the tar format and would supply offset/size for you would be neat.
If you can create a tar file that reserves space for the image file without actually writing it, a possible workaround today would be using the offset/size runtime options of the raw driver to convert directly into a region inside the tar archive.
Not easy to do for qcow2 where you don't know how much space you will actually need. Tomas
-- Tomáš Golembiovský <tgolembi@redhat.com>

Am 28.05.2018 um 16:06 hat Tomáš Golembiovský geschrieben:
Not easy to do for qcow2 where you don't know how much space you will actually need.
Shouldn't 'qemu-img measure' solve that problem these days? Kevin
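For example (the numbers are illustrative), 'qemu-img measure' reports an upper bound for the converted image without writing anything:
$ qemu-img measure -O qcow2 /path/to/top.qcow2
required size: 1964441600
fully allocated size: 10739515392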

On Mon, May 28, 2018 at 2:38 PM Kevin Wolf <kwolf@redhat.com> wrote:
I think real streaming is unlikely to happen because most image formats that QEMU supports aren't made that way. If there is a compelling reason, we can consider it, but it would work only with very few target formats and as such would have to be separate from existing commands.
Real streaming is exactly what we want, and we need it only for the qcow2 format, because it is our preferred way to pack images in an OVA.

We have 2 possible use cases:

Exporting images or OVA files:
image in any format -> qemu-img -> [qcow2 byte stream] -> imageio http server -> http client

Importing images or OVA files:
http client -> imageio http server -> [qcow2 byte stream] -> qemu-img -> image in any format

If we have this, we don't need to create temporary storage space and we can avoid several image copies in the process. This will also improve the user experience, avoiding the wait until an OVA is created before the user can start downloading it.
If you can create a tar file that reserves space for the image file without actually writing it, a possible workaround today would be using the offset/size runtime options of the raw driver to convert directly into a region inside the tar archive.
What are the offset/size runtime options? I cannot find anything about them in man qemu-img. Nir

On 05/29/2018 04:11 PM, Nir Soffer wrote:
Exporting images or ova files:
image in any format -> qemu-img -> [qcow2 byte stream] -> imageio http server -> http client
image in any format -> qemu-img measure (to learn how large to size the qcow2)
-> then create a destination qcow2 file that large and serve it over NBD as raw (perhaps using an nbdkit plugin for this part)
-> then qemu-img convert to destination format qcow2 as an NBD client

So, as long as your NBD server (via an nbdkit plugin) can talk to the "imageio http server -> http client" part, and things are sized properly according to qemu-img measure, then qemu-img can write qcow2 (rather than its more usual raw) over the NBD connection, and when the process is complete, the http client will have a fully-populated qcow2 file with no temporary files created in the meantime.
Importing images or ova files:
http client -> imageio http server -> [qcow2 byte stream] -> qemu-img -> image in any format
Same sort of thing - as long as the NBD server is serving a qcow2 file as raw data and the NBD client is interpreting that data as qcow2, then qemu-img convert should be able to convert that qcow2 stream into any format.

Or, put another way, usually you do the conversion from qcow2 to raw at the server, and the client sees raw bytes:
qemu-nbd -f qcow2 file.qcow2 # expose only the guest-visible bytes...
qemu-img convert -f raw nbd://host output # and write those bytes

but in this case, you'd be serving raw bytes at the server, and letting the client do the qcow2 conversion:
qemu-nbd -f raw file.qcow2 # expose the full qcow2 file...
qemu-img convert -f qcow2 nbd://host output # and extract the guest view

where using nbdkit instead of qemu-nbd as your point of contact with the imageio http server may make more sense.
If you can create a tar file that reserves space for the image file without actually writing it, a possible workaround today would be using the offset/size runtime options of the raw driver to convert directly into a region inside the tar archive.
What are the offset/size runtime options? I cannot find anything about them in man qemu-img.
##
# @BlockdevOptionsRaw:
#
# Driver specific block device options for the raw driver.
#
# @offset: position where the block device starts
# @size: the assumed size of the device
#
# Since: 2.9
##
{ 'struct': 'BlockdevOptionsRaw',
  'base': 'BlockdevOptionsGenericFormat',
  'data': { '*offset': 'int', '*size': 'int' } }

Yeah, it's a pity that "qemu-img create -o help -f raw" has forgotten to document them, the way "qemu-img create -o help -f qcow2" does for its options, so we should fix that.
-- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3266 Virtualization: qemu.org | libvirt.org

On Wed, May 30, 2018 at 12:35 AM Eric Blake <eblake@redhat.com> wrote:
image in any format -> qemu-img measure (to learn how large to size qcow2) then create destination qcow2 file that large
Isn't this a temporary qcow2 file we want to avoid?
and serve it over NBD as raw (perhaps using an nbdkit plugin for this part) then qemu-img convert to destination format qcow2 as NBD client
So, as long as your NBD server (via nbdkit plugin) can talk to imageio http server -> http client, and sized things properly according to qemu-img measure, then qemu-img can write qcow2 (rather than it's more usual raw) over the NBD connection, and when the process is complete, the http client will have a fully-populated qcow2 file with no temporary files created in the meantime.
OK, it may work. What I need is:
- qemu-img converting an image to qcow2 format, sent to an NBD server
- the NBD server writing the qcow2 stream to stdout
- imageio running both, sending data from the NBD server's stdout to the client

But this means the NBD server cannot handle a flow like:
- zero entire disk
- write data block 1 at offset x
- write data block 2 at offset y

We have seen this in the virt-v2v nbdkit plugin on the server side using the raw output format; maybe this is not an issue with the qcow2 output format? qemu-img must know that the transport cannot seek. When I tried to do something like this in the past it did not work, but maybe I was missing some options.

I think this should be implemented as a transport driver, like the curl driver, instead of creating these complex pipelines.
Yeah, it's a pity that "qemu-img create -o help -f raw" has forgotten to document them, the way "qemu-img create -o help -f qcow2" does for its options, so we should fix that.
Thanks for the hint, but I still don't understand for which command these options can be used, and how. Can you show an example of using the options? Nir

On Wed, May 30, 2018 at 12:11:21AM +0300, Nir Soffer wrote:
Exporting images or ova files:
image in any format -> qemu-img -> [qcow2 byte stream] -> imageio http server -> http client
You can do this with nbdkit + plugin, it's exactly what we do today for virt-v2v: https://github.com/libguestfs/libguestfs/blob/master/v2v/rhv-upload-plugin.p...
Importing images or ova files:
http client -> imageio http server -> [qcow2 byte stream] -> qemu-img -> image in any format
Also could be done with nbdkit + plugin, basically the reverse of the above.
If you can create a tar file that reserves space for the image file without actually writing it, a possible workaround today would be using the offset/size runtime options of the raw driver to convert directly into a region inside the tar archive.
What are the offset/size runtime options? I cannot find anything about them in man qemu-img.
See: https://github.com/libguestfs/libguestfs/blob/dd162d2cd56a2ecf4bcd40a7f46394...

But in any case you can just use the nbdkit tar plugin which already does all of this.

Rich.
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones

On Wed, May 30, 2018 at 11:58 AM Richard W.M. Jones <rjones@redhat.com> wrote:
On Wed, May 30, 2018 at 12:11:21AM +0300, Nir Soffer wrote:
Exporting images or ova files:
image in any format -> qemu-img -> [qcow2 byte stream] -> imageio http server -> http client
You can do this with nbdkit + plugin, it's exactly what we do today for virt-v2v:
https://github.com/libguestfs/libguestfs/blob/master/v2v/rhv-upload-plugin.p...
This is not the flow we are looking for. We need a way to read qcow2 data from a pipe.
Importing images or ova files:
http client -> imageio http server -> [qcow2 byte stream] -> qemu-img -> image in any format
Also could be done with nbdkit + plugin, basically the reverse of the above.
If you can create a tar file that reserves space for the image file without actually writing it, a possible workaround today would be using the offset/size runtime options of the raw driver to convert directly into a region inside the tar archive.
What are the offset/size runtime options? I cannot find anything about them in man qemu-img.
See:
https://github.com/libguestfs/libguestfs/blob/dd162d2cd56a2ecf4bcd40a7f46394...
But in any case you can just use the nbdkit tar plugin which already does all of this.
Can it work with a tar stream read from stdin, or does it require a tar file?
Nir

On Wed, May 30, 2018 at 03:35:11PM +0300, Nir Soffer wrote:
This is not the flow we are looking for. We need a way to read qcow2 data from a pipe.
The flow you asked for:
image in any format -> qemu-img -> [qcow2 byte stream] -> imageio http server -> http client
is exactly what rhv-upload-plugin.py does, except for "-> http client" at the end which I don't understand. Can you describe exactly what you're trying to do again? [...]
But in any case you can just use the nbdkit tar plugin which already does all of this.
Can it work with a tar stream read from stdin, or it requires a tar file?
As above, it may help to describe from the start exactly what you're trying to do. This email thread has gone on for days and it's hard to keep track of everything.

Rich.
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones

On 05/30/2018 07:35 AM, Nir Soffer wrote:
This is not the flow we are looking for. We need a way to read qcow2 data from a pipe.
Why? The qcow2 format inherently requires seeking (or a HUGE amount of free RAM) the moment you want to interpret the data as qcow2. It can't be piped when being produced or consumed for reading guest contents (it can be piped as a read-only format if you aren't inspecting guest contents, but then again so can raw).

If you are asking how to connect a SEEKABLE network connection, so that you don't need temporary storage locally, then NBD is a great format (it is network friendly, where the local qemu-img process is using the remote server for all storage; no local storage required). But even then, it still requires a seekable input if you are not visiting the file in byte order, and qcow2 as an interpreted format cannot be written or read in byte order (for example, it inherently requires dereferencing through header, L1, L2, and refcount tables, which implies seeking).
But in any case you can just use the nbdkit tar plugin which already does all of this.
Can it work with a tar stream read from stdin, or it requires a tar file?
nbdkit includes a plugin for creating a seekable layer on top of a pipe, at the expense of a huge memory cost (you have to have as much RAM available as you would ever have to seek backwards). It also makes it easy to have a plugin for a tar file (reading tar files is easy; writing is a bit harder, but should work as long as you don't need resize and don't have any compressed sparse regions that need to be rewritten to non-sparse data).

-- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3266 Virtualization: qemu.org | libvirt.org

On Mon, May 28, 2018 at 01:27:21PM +0300, Arik Hadas wrote:
Let me demonstrate briefly the flow for OVA: Let's say that we have a VM that is based on a template and has one disk and one snapshot, so its volume-chain would be: T -> S -> V (V is the volume the VM writes to, S is the backing file of V and T is the backing file of S). When exporting that VM to an OVA file we want the produced tar file to be comprised of: (1) OVF configuration (2) single disk volume (preferably qcow).
So we need to collapse T, S, V into a single volume. Sure, we can do 'qemu-img convert'. That's what we do now in oVirt 4.2: (a) qemu-img convert produces a 'temporary' collapsed volume (b) make a tar file of the OVf configuration and that 'temporary' volume (c) delete the temporary volume
But the fact that we produce that 'temporary' volume obviously slows down the entire operation. It would be much better if we could "open" a stream that we can read from the 'collapsed' form of that chain and stream it directly into the appropriate tar file entry, without extra writes to the storage device.
A custom nbdkit plugin is possible here. In fact it's almost possible using the existing nbdkit-tar-plugin[1], except that it doesn't support resizing the tarball, so you'd need a way to predict the size of the final qcow2 file.

The main difficulty for modifying nbdkit-tar-plugin is working out how to resize tar files. If you can do that then it's likely just a few lines of code.

Rich.

[1] https://manpages.debian.org/testing/nbdkit-plugin-perl/nbdkit-tar-plugin.1.e...
https://github.com/libguestfs/nbdkit/blob/master/plugins/tar/tar.pl

-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones

Am 29.05.2018 um 11:27 hat Richard W.M. Jones geschrieben:
On Mon, May 28, 2018 at 01:27:21PM +0300, Arik Hadas wrote:
Let me demonstrate briefly the flow for OVA: Let's say that we have a VM that is based on a template and has one disk and one snapshot, so its volume-chain would be: T -> S -> V (V is the volume the VM writes to, S is the backing file of V and T is the backing file of S). When exporting that VM to an OVA file we want the produced tar file to be comprised of: (1) OVF configuration (2) single disk volume (preferably qcow).
So we need to collapse T, S, V into a single volume. Sure, we can do 'qemu-img convert'. That's what we do now in oVirt 4.2: (a) qemu-img convert produces a 'temporary' collapsed volume (b) make a tar file of the OVf configuration and that 'temporary' volume (c) delete the temporary volume
But the fact that we produce that 'temporary' volume obviously slows down the entire operation. It would be much better if we could "open" a stream that we can read from the 'collapsed' form of that chain and stream it directly into the appropriate tar file entry, without extra writes to the storage device.
A custom nbdkit plugin is possible here. In fact it's almost possible using the existing nbdkit-tar-plugin[1], except that it doesn't support resizing the tarball so you'd need a way to predict the size of the final qcow2 file.
I think you can predict the size with 'qemu-img measure'. But how do you create a tar archive that contains an empty file of the right size without actually processing and writing gigabytes of zero bytes? Is there an existing tool that can do that or would you have to write your own?
The main difficulty for modifying nbdkit-tar-plugin is working out how to resize tar files. If you can do that then it's likely just a few lines of code.
This sounds impossible to do when the tar archive needs to stay consistent at all times. Kevin

On 05/28/2018 05:27 AM, Arik Hadas wrote: [Answering before reading the entire thread; apologies if I'm repeating things, or if I have to chime in again at other spots]
Let me demonstrate briefly the flow for OVA: Let's say that we have a VM that is based on a template and has one disk and one snapshot, so its volume-chain would be: T -> S -> V (V is the volume the VM writes to, S is the backing file of V and T is the backing file of S).
I tend to write backing relationships as a left arrow, as in: T <- S <- V (can be read as: S depends on T, and V depends on S)
When exporting that VM to an OVA file we want the produced tar file to be comprised of: (1) OVF configuration (2) single disk volume (preferably qcow).
So we need to collapse T, S, V into a single volume. Sure, we can do 'qemu-img convert'. That's what we do now in oVirt 4.2: (a) qemu-img convert produces a 'temporary' collapsed volume (b) make a tar file of the OVf configuration and that 'temporary' volume (c) delete the temporary volume
But the fact that we produce that 'temporary' volume obviously slows down the entire operation. It would be much better if we could "open" a stream that we can read from the 'collapsed' form of that chain and stream it directly into the appropriate tar file entry, without extra writes to the storage device.
Few months ago people from the oVirt-storage team checked the qemu toolset and replied that this capability is not yet provided, therefore we implemented the workaround described above. Apparently, the desired ability can also be useful for the flow discussed in this thread so it worth asking for it again :)
You CAN get a logically collapsed view of storage (that is, what the guest would see), by using an NBD export of volume V. Reading from that volume will then pull sectors from whichever portion of the chain you need. You can use either qemu-nbd (if no guest is writing to the chain), or within a running qemu, you can use nbd-server-start and nbd-server-add (over QMP) to get such an NBD server running.

-- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3266 Virtualization: qemu.org | libvirt.org
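One concrete spelling of the qemu-nbd variant Eric mentions, assuming no guest is writing to the chain (paths are hypothetical):
$ qemu-nbd --read-only --persistent -f qcow2 /path/to/V.qcow2          # export the guest-visible, collapsed view of T <- S <- V
$ qemu-img convert -f raw -O qcow2 nbd://localhost collapsed.qcow2     # read that view and write a standalone qcow2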

On Tue, May 29, 2018 at 10:43 PM Eric Blake <eblake@redhat.com> wrote:
On 05/28/2018 05:27 AM, Arik Hadas wrote:
...
Few months ago people from the oVirt-storage team checked the qemu toolset and replied that this capability is not yet provided, therefore we implemented the workaround described above. Apparently, the desired ability can also be useful for the flow discussed in this thread so it worth asking for it again :)
You CAN get a logically collapsed view of storage (that is, what the guest would see), by using an NBD export of volume V. Reading from that volume will then pull sectors from whichever portion of the chain you need. You can use either qemu-nbd (if no guest is writing to the chain), or within a running qemu, you can use nbd-server-start and nbd-server-add (over QMP) to get such an NBD server running.
NBD exposes the guest data, but we want the qcow2 stream - without creating a new image.
Nir

On 05/29/2018 04:18 PM, Nir Soffer wrote:
You CAN get a logically collapsed view of storage (that is, what the guest would see), by using an NBD export of volume V. Reading from that volume will then pull sectors from whichever portion of the chain you need. You can use either qemu-nbd (if no guest is writing to the chain), or within a running qemu, you can use nbd-server-start and nbd-server-add (over QMP) to get such an NBD server running.
NBD expose the guest data, but we want the qcow2 stream - without creating a new image.
NBD can do both. You choose whether it exposes the guest data or the qcow2 data, by whether the client or the server is interpreting the qcow2 data.

Visually, if everything is local, qemu normally needs only two block layer entries:

qcow2 format layer => file protocol layer

But you can also make qemu use four block layer entries, since the raw layer is normally a passthrough layer (unless you are also using it for its ability to support an offset within a larger file, such as reading from a tar file):

raw format layer => qcow2 format layer => raw format layer => file protocol layer

Then when you introduce NBD into the picture, you have the choice of WHERE in the four-layer system. The usual choice is: NBD server using -f qcow2, client using -f raw:

raw format => NBD client protocol => (raw bytes) => NBD server => qcow2 format => raw format => file protocol
(simplified to raw format => NBD client protocol => (raw bytes) => NBD server => qcow2 format => file protocol)

But an alternative choice is: NBD server using -f raw, client using -f qcow2:

raw format => qcow2 format => NBD client protocol => (qcow2 bytes) => NBD server => raw format => file protocol
(simplified to qcow2 format => NBD client protocol => (qcow2 bytes) => NBD server => raw format => file protocol)

-- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3266 Virtualization: qemu.org | libvirt.org

Am 30.05.2018 um 15:44 hat Eric Blake geschrieben:
On 05/29/2018 04:18 PM, Nir Soffer wrote:
You CAN get a logically collapsed view of storage (that is, what the guest would see), by using an NBD export of volume V. Reading from that volume will then pull sectors from whichever portion of the chain you need. You can use either qemu-nbd (if no guest is writing to the chain), or within a running qemu, you can use nbd-server-start and nbd-server-add (over QMP) to get such an NBD server running.
NBD expose the guest data, but we want the qcow2 stream - without creating a new image.
NBD can do both. You choose whether it exposes the guest data or the qcow2 data, by whether the client or the server is interpreting qcow2 data.
But if I understand correctly, it doesn't result in the image Nir wants. You would only export an existing qcow2 file, i.e. a single layer in the backing chain, this way. The question was about a collapsed image, i.e. the disk content as the guest sees it.

The problem is that qcow2 just isn't made to be streamable. Importing a qcow2 stream without saving it into a temporary file (or a memory buffer as large as the image file) simply isn't possible in the general case. Exporting to a stream is possible if we're allowed to make two passes over the source, but the existing QEMU code is useless for that because it inherently requires seeking. I think if I had to get something like this, I'd probably implement such an exporter as a script external to QEMU.

Kevin

On 05/30/2018 09:16 AM, Kevin Wolf wrote:
Am 30.05.2018 um 15:44 hat Eric Blake geschrieben:
On 05/29/2018 04:18 PM, Nir Soffer wrote:
You CAN get a logically collapsed view of storage (that is, what the guest would see), by using an NBD export of volume V. Reading from that volume will then pull sectors from whichever portion of the chain you need. You can use either qemu-nbd (if no guest is writing to the chain), or within a running qemu, you can use nbd-server-start and nbd-server-add (over QMP) to get such an NBD server running.
NBD expose the guest data, but we want the qcow2 stream - without creating a new image.
NBD can do both. You choose whether it exposes the guest data or the qcow2 data, by whether the client or the server is interpreting qcow2 data.
But if I understand correctly, it doesn't result in the image Nir wants. You would only export an existing qcow2 file, i.e. a single layer in the backing chain, this way. The question was about a collapsed image, i.e. the disk content as the guest sees it.
The problem is that qcow2 just isn't made to be streamable. Importing a qcow2 stream without saving it into a temporary file (or a memory buffer as large as the image file) simply isn't possible in the general case.
If I understood the question, we start with a local:

T (any format) <- S (qcow2) <- V (qcow2)

and want to create a remote tar file:

dest.tar == | header ... | qcow2 image |

where we write a single collapsed view of the T <- S <- V chain as a qcow2 image into a subset of the remote tar file.

So, first use qemu-img to learn how big to size the collapsed qcow2 image, and by extension, the overall tar image:

$ qemu-img measure -f qcow2 -O qcow2 V

then pre-create a large enough tar file on the destination:

$ create header
$ truncate --size=XXX dest.qcow2
$ tar cf dest.tar header dest.qcow2

(note that I explicitly did NOT use tar --sparse; dest.qcow2 is sparse and occupies practically no disk space, but dest.tar must NOT be sparse because neither tar nor NBD work well with after-the-fact resizing)

then set up an NBD server on the destination that can write to the subset of the tar file:

$ learn the offset of dest.qcow2 within dest.tar (probably a multiple of 10240, given default GNU tar options)
$ qemu-nbd --image-opts driver=raw,offset=YYY,size=XXX,file.driver=file,file.filename=dest.tar

(I'm not sure if I got the --image-opts syntax exactly correct. nbdkit has more examples of learning offsets within a tar file, and may be a better option as a server than qemu-nbd - but the point remains: serve up the subset of the dest.tar file as raw bytes)

finally set up qemu as an NBD client on the source:

$ qemu-img convert -f qcow2 V -O qcow2 nbd://remote

(now the client collapses the qcow2 chain on the source, and writes that into a qcow2 subset of the tar file on the destination, where the destination was already sized large enough to hold the qcow2 image, and where no other temporary storage was needed other than the sparse dest.qcow2 used in creating a large enough tar file)
Exporting to a stream is possible if we're allowed to make two passes over the source, but the existing QEMU code is useless for that because it inherently requires seeking. I think if I had to get something like this, I'd probably implement such an exporter as a script external to QEMU.
Wait. What are we trying to stream? A qcow2 file, or what the guest would see? If you stream just what the guest sees, then 'qemu-img map' tells you which portions of which source files to read in order to reconstruct the data in the order it would be seen by the guest.

But yeah, an external exporter that takes a raw file, learns its size and where the holes are, and then writes a trivial qcow2 header and appends L1/L2/refcount tables on the end to convert the raw file into a slightly-larger qcow2 file, might be a valid way to create a qcow2 file from a two-pass read.

-- Eric Blake, Principal Software Engineer Red Hat, Inc. +1-919-301-3266 Virtualization: qemu.org | libvirt.org
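For reference, the 'qemu-img map' view Eric mentions looks roughly like this (paths and numbers are purely illustrative); each row says which file in the chain holds the guest-visible data for that range:
$ qemu-img map /path/to/top.qcow2
Offset          Length          Mapped to       File
0               0x1f00000       0x50000         /path/to/base.qcow2
0x1f00000       0x200000        0x50000         /path/to/top.qcow2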

Am 30.05.2018 um 17:05 hat Eric Blake geschrieben:
If I understood the question, we start with a local:
T (any format) <- S (qcow2) <- V (qcow2)
and want to create a remote tar file:
dest.tar == | header ... | qcow2 image |
where we write a single collapsed view of the T<-S<-V chain as a qcow2 image in the subset of the remote tar file.
I think the problem is that we're talking about two different things in one thread. If I understand correctly, what oVirt does today is:

1. qemu-img convert to create a temporary qcow2 image that merges the whole backing chain into a single file
2. tar to create a temporary OVA archive that contains, amongst others, the temporary qcow2 image. This is a second temporary file.
3. Stream this temporary OVA archive over HTTP

Your proposal is about getting rid of the temporary file from step 1, but keeping the temporary file from step 2. I was kind of ignoring step 2 and answering how you can avoid a temporary file by creating and streaming a qcow2 file in a single step, but if you already have the code to create a qcow2 image as a stream, adding a tar header as well shouldn't be that hard...

I think Nir was talking about both. Ideally, we'd somehow get rid of HTTP, which introduces the requirement of a non-seekable stream.
So, first use qemu-img to learn how big to size the collapsed qcow2 image, and by extension, the overall tar image $ qemu-img measure -f qcow2 -O qcow2 V
then pre-create a large enough tar file on the destination $ create header $ truncate --size=XXX dest.qcow2 $ tar cf dest.tar header dest.qcow2
(note that I explicitly did NOT use tar --sparse; dest.qcow2 is sparse and occupies practically no disk space, but dest.tar must NOT be sparse because neither tar nor NBD work well with after-the-fact resizing)
then set up an NBD server on the destination that can write to the subset of the tar file:
$ learn the offset of dest.qcow2 within dest.tar (probably a multiple of 10240, given default GNU tar options) $ qemu-nbd --image-opts driver=raw,offset=YYY,size=XXX,file.driver=file,file.filename=dest.tar
(I'm not sure if I got the --image-opts syntax exactly correct. nbdkit has more examples of learning offsets within a tar file, and may be a better option as a server than qemu-nbd - but the point remains: serve up the subset of the dest.tar file as raw bytes)
finally set up qemu as an NBD client on the source: $ qemu-img convert -f qcow2 V -O qcow2 nbd://remote
(now the client collapses the qcow2 chain onto the source, and writes that into a qcow2 subset of the tar file on the destination, where the destination was already sized large enough to hold the qcow2 image, and where no other temporary storage was needed other than the sparse dest.qcow2 used in creating a large enough tar file)
You added another host into the mix, which just receives the image content via NBD and then re-exports it as HTTP. Does this host actually exist or is it the same host where the original images are located? Because if you stay local for this step, there is no need to use NBD at all:

$ ./qemu-img measure -O qcow2 ~/images/hd.img
required size: 67436544
fully allocated size: 67436544
$ ./qemu-img create -f file /tmp/test.qcow2 67436544
Formatting '/tmp/test.qcow2', fmt=file size=67436544
$ ./qemu-img convert -n --target-image-opts ~/images/hd.img driver=raw,file.driver=file,file.filename=/tmp/test.qcow2,offset=65536

hexdump verifies that this does the expected thing.
Exporting to a stream is possible if we're allowed to make two passes over the source, but the existing QEMU code is useless for that because it inherently requires seeking. I think if I had to get something like this, I'd probably implement such an exporter as a script external to QEMU.
Wait. What are we trying to stream? A qcow2 file, or what the guest would see? If you stream just what the guest sees, then 'qemu-img map' tells you which portions of which source files to read in order to reconstruct data in the order it would be seen by the guest.
I think the requirement was that the HTTP client downloads a qcow2 image. Did I get this wrong?
But yeah, an external exporter that takes a raw file, learns its size and where the holes are, and then writes a trivial qcow2 header and appends L1/L2/refcount tables on the end to convert the raw file into a slightly-larger qcow2 file, might be a valid way to create a qcow2 file from a two-pass read.
Right. It may have to calculate the size of the L1 and refcount table first so it can write the right offsets into the header, so maybe it's easiest to precreate the whole metadata. But that's an implementation detail.

Anyway, I don't think the existing QEMU code helps you with this.

Kevin

On Wed, May 30, 2018 at 6:33 PM, Kevin Wolf <kwolf@redhat.com> wrote:
I think the problem is that we're talking about two different things in one thread. If I understand correctly, what oVirt does today is:
1. qemu-img convert to create a temporary qcow2 image that merges the whole backing chain in a single file
2. tar to create an temporary OVA archive that contains, amongst others, the temporary qcow2 image. This is a second temporary file.
3. Stream this temporary OVA archive over HTTP
Well, today we suggest that users mount shared storage to multiple hosts that reside in different oVirt/RHV deployments, so they can export VMs/templates as OVAs to that shared storage and import these OVAs from the shared storage into a destination deployment. This process involves only #1 and #2.

The technique you proposed earlier for writing disks directly into an OVA, assuming that the target size can be retrieved with 'qemu-img measure', sounds like a nice approach to accelerate this process. I think we should really consider doing that if it's as easy as it sounds.

But #3 is definitely something we are interested in, because we expect the next step to be exporting the OVAs to a remote instance of Glance that serves as a shared repository for the different deployments. Being able to stream the collapsed form of a volume chain without writing anything to the storage device would be fantastic - I think even at the expense of iterating the chain twice: once to map the structure of the jump tables (right?) and once to stream the whole data.

Am 30.05.2018 um 18:14 hat Arik Hadas geschrieben:
Well, today we suggest users to mount a shared storage to multiple hosts that reside in different oVirt/RHV deployments so they could export VMs/templates as OVAs to that shared storage and import these OVAs from the shared storage to a destination deployment. This process involves only #1 and #2.
The technique you proposed earlier for writing disks directly into an OVA, assuming that the target size can be retrieved with 'qemu-img measure', sounds like a nice approach to accelerate this process. I think we should really consider doing that if that's as easy as it sounds.
Writing the image to a given offset in a file is the example that I gave further down in the mail:
You added another host into the mix, which just receives the image content via NBD and then re-exports it as HTTP. Does this host actually exist or is it the same host where the original images are located?
Because if you stay local for this step, there is no need to use NBD at all:
$ ./qemu-img measure -O qcow2 ~/images/hd.img
required size: 67436544
fully allocated size: 67436544
$ ./qemu-img create -f file /tmp/test.qcow2 67436544
Formatting '/tmp/test.qcow2', fmt=file size=67436544
$ ./qemu-img convert -n --target-image-opts ~/images/hd.img driver=raw,file.driver=file,file.filename=/tmp/test.qcow2,offset=65536
hexdump verifies that this does the expected thing.
But #3 is definitely something we are interested in because we expect the next step to be exporting the OVAs to a remote instance of Glance that serves as a shared repository for the different deployments. Being able to stream the collapsed form of a volume chain without writing anything to the storage device would be fantastic. I think that even at the expense of iterating the chain twice - once to map the structure of the jump tables (right?) and once to stream the whole data.
If the target is not a stupid web browser, but something actually virt-related like Glance, I'm sure it can offer a more suitable protocol than HTTP? If you could talk NBD to Glance, you'd get rid of the streaming requirement. I think it would make more sense to invest the effort there. Kevin

Better late than never - thank you all for the input, it was very useful!

With the ability to measure the collapsed form of a volume chain in the qcow2 format, we managed to simplify and significantly improve the process of creating an OVA. What we do now is:
1. Measure the collapsed qcow2 volumes that would be written to the OVA
2. Create the OVA file accordingly
3. Write the "metadata" of the OVA - headers and OVF - while skipping the places reserved for disks
4. Mount each reserved place for a disk as a loopback device and convert the volume-chain directly to it [1]

This saves us from allocating temporary disks and extra copies. That change will hopefully land in one of the next updates of oVirt 4.2.

[1] https://github.com/oVirt/ovirt-engine/commit/23230a131af33bfa55a3fe828660c32...
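A rough sketch of steps 1 and 4 as shell commands, with hypothetical paths, offsets and sizes (the actual implementation is in the linked ovirt-engine commit, and setting up the loop device needs the appropriate privileges):
$ qemu-img measure -O qcow2 /path/to/top-volume               # step 1: size needed for the collapsed qcow2
$ # steps 2-3: create vm.ova containing the OVF plus a reserved region of that size for the disk
$ losetup --find --show --offset <disk_offset_in_ova> --sizelimit <required_size> vm.ova
/dev/loop0
$ qemu-img convert -O qcow2 /path/to/top-volume /dev/loop0    # step 4: collapse the chain straight into the OVA
$ losetup -d /dev/loop0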

On Wed, Jul 11, 2018 at 11:30:19AM +0300, Arik Hadas wrote:
4. Mount each reserved place for a disk as a loopback device and convert the volume-chain directly to it [1]
nbdkit tar plugin can overwrite a single file inside a tarball, all in userspace and non-root: https://github.com/libguestfs/nbdkit/tree/master/plugins/tar

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch http://libguestfs.org/virt-builder.1.html
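For anyone who wants to try that route, a rough sketch of reading a disk straight out of an OVA with the tar plugin; the parameter names (tar= for the archive, file= for the member) are from memory of nbdkit-tar-plugin(1) and should be checked against the man page, and the paths are invented:

$ nbdkit -r tar tar=/shared/vm.ova file=disk1.qcow2   # read-only NBD export of the member inside the tarball
$ qemu-img info nbd://localhost                       # inspect the embedded qcow2 without unpacking the OVA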

Dear oVirt gurus,

There was a lot of discussion and ideas regarding solving this issue, which I absolutely respect. Still, as a product consumer and administrator: what is the status of it? Can we expect to see and test the solution, or part of it, and in which version? Is there an approach you could recommend we try while waiting for the solution? Would it be possible for me to track it somewhere?

Kindly awaiting your reply.

Marko Vrgotic
ActiveVideo

From: Arik Hadas <ahadas@redhat.com>
Date: Wednesday, 11 July 2018 at 10:30
To: "Wolf, Kevin" <kwolf@redhat.com>, Eric Blake <eblake@redhat.com>, "Vrgotic, Marko" <M.Vrgotic@activevideo.com>, Nir Soffer <nsoffer@redhat.com>, Richard Jones <rjones@redhat.com>, qemu-block <qemu-block@nongnu.org>
Cc: users <users@ovirt.org>
Subject: Re: [Qemu-block] [ovirt-users] Libvirt ERROR cannot access backing file after importing VM from OpenStack
participants (7)
- Arik Hadas
- Eric Blake
- Kevin Wolf
- Nir Soffer
- Richard W.M. Jones
- Tomáš Golembiovský
- Vrgotic, Marko