[ Adding qemu-block ]
On 27.05.2018 at 10:36, Arik Hadas wrote:
> On Thu, May 24, 2018 at 6:13 PM, Nir Soffer <nsoffer@redhat.com> wrote:
>
> > On Thu, May 24, 2018 at 6:06 PM Vrgotic, Marko <M.Vrgotic@activevideo.com>
> > wrote:
> >
> >> Dear Nir,
> >>
> >> Thank you for quick reply.
> >>
> >> Ok, why will it not work?
> >>
> >
> > Because the image has a backing file which is not accessible to oVirt.
> >
> >
> >> I used a qemu+tcp connection, via the import method in the engine admin UI.
> >>
> >> The image was imported and converted according to the logs, but the invalid
> >> “backing file” entry remained.
> >>
> >> Also, I used the same method before, connecting to a plain “libvirt kvm”
> >> host; the import and conversion went smoothly, with no backing file.
> >>
> >> The image format is qcow(2), which is supported by oVirt.
> >>
> >> What am I missing? Should I use a different method?
> >>
> >
> > I guess this is not a problem on your side, but a bug on our side.
> >
> > Either we should block the operation that cannot work, or fix the process
> > so we don't refer to a non-existing image.
> >
> > When importing we have 2 options:
> >
> > - import the entire chain, copying all images in the chain, converting
> > each image to an oVirt volume, and updating the backing file of each layer
> > to point to the corresponding oVirt volume.
> >
> > - import the current state of the image into a new image, using either raw
> > or qcow2, but without any backing file.
> >
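
For the first option, each copied layer would also need its backing file
pointer rewritten so that it refers to the copied parent volume instead of
the original path. A minimal sketch with qemu-img (file names are purely
illustrative, not what oVirt actually uses):

    # Rewrite only the header pointer; no data is copied. This is safe here
    # because the copied parent has the same content as the old backing file.
    qemu-img rebase -u -F qcow2 -b parent-ovirt-volume.qcow2 imported-layer.qcow2
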
> > Arik, do you know why we create a qcow2 file with an invalid backing file?
> >
>
> It seems to be a result of the somewhat naive behavior of the kvm2ovirt module,
> which downloads only the top-level volume the VM uses, assuming that each
> of the disks to be imported is comprised of a single volume.
>
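
If kvm2ovirt is to handle multi-volume disks, the chain can at least be
enumerated up front instead of assuming a single volume; for example
(assuming the source file is reachable from where the command runs):

    # Prints the information for every image in the backing chain,
    # so layers below the top one are not silently skipped.
    qemu-img info --backing-chain /path/to/top-layer.qcow2
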
> Maybe it's time to finally ask the QEMU guys to provide a way to consume the
> 'collapsed' form of a chain of volumes as a stream, if that's not available
> yet? ;) It could also boost the recently added process of exporting VMs as
> OVAs...
Not sure which operation we're talking about on the QEMU level, but
generally the "collapsed" view is the normal thing because that's what
guests see.
For example, if you use 'qemu-img convert', you have to pass options to
specifically disable it and convert only a single layer if you want to
keep using backing files instead of getting a standalone image that
contains everything.
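
A rough example of the difference (file names made up for illustration):

    # Default behaviour: the whole backing chain is merged into one
    # standalone image with no backing file.
    qemu-img convert -O qcow2 top-layer.qcow2 standalone.qcow2

    # With -B, only data that differs from the given backing file is
    # written, and the result keeps pointing at that backing file.
    qemu-img convert -O qcow2 -B base.qcow2 top-layer.qcow2 delta.qcow2

Of course, either variant has to be able to read the backing files, which
is exactly what the oVirt host in this thread cannot do.
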
Kevin
>
> >
> > Nir
> >
> >
> >>
> >> Kindly awaiting your reply.
> >>
> >> — — —
> >> Met vriendelijke groet / Best regards,
> >>
> >> Marko Vrgotic
> >> Sr. System Engineer
> >> ActiveVideo
> >>
> >> Tel. +31 (0)35 677 4131
> >> email: m.vrgotic@activevideo.com
> >> skype: av.mvrgotic.se
> >> www.activevideo.com
> >> ------------------------------
> >> *From:* Nir Soffer <nsoffer@redhat.com>
> >> *Sent:* Thursday, May 24, 2018 4:09:40 PM
> >> *To:* Vrgotic, Marko
> >> *Cc:* users@ovirt.org; Richard W.M. Jones; Arik Hadas
> >> *Subject:* Re: [ovirt-users] Libvirt ERROR cannot access backing file
> >> after importing VM from OpenStack
> >>
> >>
> >>
> >> On Thu, May 24, 2018 at 5:05 PM Vrgotic, Marko <M.Vrgotic@activevideo.com>
> >> wrote:
> >>
> >> Dear oVirt team,
> >>
> >>
> >>
> >> When trying to start the imported VM, it fails with the following message:
> >>
> >>
> >>
> >> ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> (ForkJoinPool-1-worker-2) [] EVENT_ID: VM_DOWN_ERROR(119), VM
> >> instance-00000673 is down with error. Exit message: Cannot access backing
> >> file '/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce'
> >> of storage file '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.
> >> lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/
> >> 816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba'
> >> (as uid:107, gid:107): No such file or directory.
> >>
> >>
> >>
> >> Platform details:
> >>
> >> Ovirt SHE
> >>
> >> Version 4.2.2.6-1.el7.centos
> >>
> >> GlusterFS, unmanaged by oVirt.
> >>
> >>
> >>
> >> According to the log files, the VM was imported & converted from OpenStack
> >> successfully (one WARN, related to a different MAC address):
> >>
> >> 2018-05-24 12:03:31,028+02 INFO [org.ovirt.engine.core.
> >> vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default
> >> task-29) [cc5931a2-1af5-4d65-b0b3-362588db9d3f] FINISH,
> >> GetVmsNamesFromExternalProviderVDSCommand, return: [VM
> >> [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM
> >> [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM
> >> [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log
> >> id: 7f178a5e
> >>
> >> 2018-05-24 12:48:33,722+02 INFO [org.ovirt.engine.core.
> >> vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default
> >> task-8) [103d56e1-7449-4853-ae50-48ee94d43d77] FINISH,
> >> GetVmsNamesFromExternalProviderVDSCommand, return: [VM
> >> [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM
> >> [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM
> >> [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log
> >> id: 3aa178c5
> >>
> >> 2018-05-24 12:48:47,291+02 INFO [org.ovirt.engine.core.
> >> vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand]
> >> (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] START,
> >> GetVmsFullInfoFromExternalProviderVDSCommand(HostName = aws-ovhv-01,
> >> GetVmsFromExternalProviderParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d',
> >> url='qemu+tcp://root@172.19.0.12/system', username='null',
> >> originType='KVM', namesOfVms='[instance-00000673]'}), log id: 4c445109
> >>
> >> 2018-05-24 12:48:47,318+02 INFO [org.ovirt.engine.core.
> >> vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand]
> >> (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] FINISH,
> >> GetVmsFullInfoFromExternalProviderVDSCommand, return: [VM
> >> [instance-00000673]], log id: 4c445109
> >>
> >> 2018-05-24 12:49:20,466+02 INFO [org.ovirt.engine.core.bll.exportimport.
> >> ImportVmFromExternalProviderCommand] (default task-41)
> >> [14edb003-b4a0-4355-b3de-da2b68774fe3] Lock Acquired to object
> >> 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME,
> >> 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
> >>
> >> 2018-05-24 12:49:20,586+02 WARN [org.ovirt.engine.core.dal.
> >> dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408)
> >> [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID:
> >> MAC_ADDRESS_IS_EXTERNAL(925), VM instance-00000673 has MAC address(es)
> >> fa:16:3e:74:18:50, which is/are out of its MAC pool definitions.
> >>
> >> 2018-05-24 12:49:21,021+02 INFO [org.ovirt.engine.core.dal.
> >> dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408)
> >> [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID:
> >> IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm
> >> instance-00000673 to Data Center AVEUNL, Cluster AWSEUOPS
> >>
> >> 2018-05-24 12:49:28,816+02 INFO [org.ovirt.engine.core.bll.exportimport.
> >> ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-653407)
> >> [] Lock freed to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME,
> >> 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
> >>
> >> 2018-05-24 12:49:28,911+02 INFO [org.ovirt.engine.core.
> >> vdsbroker.vdsbroker.ConvertVmVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
> >> [2047673e] START, ConvertVmVDSCommand(HostName = aws-ovhv-01,
> >> ConvertVmVDSParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d',
> >> url='qemu+tcp://root@172.19.0.12/system', username='null',
> >> vmId='1f0b608f-7cfc-4b27-a876-b5d8073011a1', vmName='instance-00000673',
> >> storageDomainId='2607c265-248c-40ad-b020-f3756454839e',
> >> storagePoolId='5a5de92c-0120-0167-03cb-00000000038a',
> >> virtioIsoPath='null', compatVersion='null', Disk0='816ac00f-ba98-4827-b5c8-42a8ba496089'}),
> >> log id: 53408517
> >>
> >> 2018-05-24 12:49:29,010+02 INFO [org.ovirt.engine.core.dal.
> >> dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
> >> [2047673e] EVENT_ID: IMPORTEXPORT_STARTING_CONVERT_VM(1,193), Starting
> >> to convert Vm instance-00000673
> >>
> >> 2018-05-24 12:52:57,982+02 INFO [org.ovirt.engine.core.bll.UpdateVmCommand]
> >> (default task-16) [df1d5f72-eb17-46e4-9946-20ca9809b54c] Failed to
> >> Acquire Lock to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME]',
> >> sharedLocks='[1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]'}'
> >>
> >> 2018-05-24 12:59:24,575+02 INFO [org.ovirt.engine.core.dal.
> >> dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-20)
> >> [2047673e] EVENT_ID: IMPORTEXPORT_IMPORT_VM(1,152), Vm instance-00000673
> >> was imported successfully to Data Center AVEUNL, Cluster AWSEUOPS
> >>
> >>
> >>
> >> Then trying to start the VM fails with the following messages:
> >>
> >> 2018-05-24 13:00:32,085+02 INFO [org.ovirt.engine.core.dal.
> >> dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653729)
> >> [] EVENT_ID: USER_STARTED_VM(153), VM instance-00000673 was started by
> >> admin@internal-authz (Host: aws-ovhv-06).
> >>
> >> 2018-05-24 13:00:33,417+02 INFO [org.ovirt.engine.core.
> >> vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] VM
> >> '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) moved from
> >> 'WaitForLaunch' --> 'Down'
> >>
> >> 2018-05-24 13:00:33,436+02 ERROR [org.ovirt.engine.core.dal.
> >> dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-9) []
> >> EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 is down with error. Exit
> >> message: Cannot access backing file '/var/lib/nova/instances/_base/
> >> 2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of storage file
> >> '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.
> >> lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/
> >> 816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba'
> >> (as uid:107, gid:107): No such file or directory.
> >>
> >> 2018-05-24 13:00:33,437+02 INFO [org.ovirt.engine.core.
> >> vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] add VM
> >> '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) to rerun
> >> treatment
> >>
> >> 2018-05-24 13:00:33,455+02 WARN [org.ovirt.engine.core.dal.
> >> dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732)
> >> [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM
> >> instance-00000673 on Host aws-ovhv-06.
> >>
> >> 2018-05-24 13:00:33,460+02 ERROR [org.ovirt.engine.core.dal.
> >> dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732)
> >> [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM instance-00000673
> >> (User: admin@internal-authz).
> >>
> >>
> >>
> >> Checking on the Gluster volume, the directory and files exist and the
> >> permissions are in order:
> >>
> >>
> >>
> >> [root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]
> >>
> >> -rw-rw----. 1 vdsm kvm 14G May 24 12:59 8ecfcd5b-db67-4c23-9869-
> >> 0e20d7553aba
> >>
> >> -rw-rw----. 1 vdsm kvm 1.0M May 24 12:49 8ecfcd5b-db67-4c23-9869-
> >> 0e20d7553aba.lease
> >>
> >> -rw-r--r--. 1 vdsm kvm 310 May 24 12:49 8ecfcd5b-db67-4c23-9869-
> >> 0e20d7553aba.meta
> >>
> >>
> >>
> >> Then I checked the image info and noticed that the backing file entry is
> >> pointing to a non-existing location, which does not and should not exist on
> >> oVirt hosts:
> >>
> >>
> >>
> >> [root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]# qemu-img info
> >> 8ecfcd5b-db67-4c23-9869-0e20d7553aba
> >>
> >> image: 8ecfcd5b-db67-4c23-9869-0e20d7553aba
> >>
> >> file format: qcow2
> >>
> >> virtual size: 160G (171798691840 bytes)
> >>
> >> disk size: 14G
> >>
> >> cluster_size: 65536
> >>
> >> backing file: /var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce
> >>
> >> Format specific information:
> >>
> >> compat: 1.1
> >>
> >> lazy refcounts: false
> >>
> >> refcount bits: 16
> >>
> >> corrupt: false
> >>
> >>
> >>
> >> Can somebody advise me how to fix or address this, as I need to import
> >> 200+ VMs from OpenStack to oVirt?
> >>
> >>
> >> Sure this qcow2 file will not work in oVirt.
> >>
> >> I wonder how you did the import?
> >>
> >> Nir
> >>
> >>
> >