Libvirt ERROR cannot access backing file after importing VM from OpenStack
by Vrgotic, Marko
Dear oVirt team,
When trying to start the imported VM, it fails with the following message:
ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-2) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 is down with error. Exit message: Cannot access backing file '/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of storage file '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba' (as uid:107, gid:107): No such file or directory.
Platform details:
oVirt SHE (self-hosted engine)
Version 4.2.2.6-1.el7.centos
GlusterFS, unmanaged by oVirt.
According to the log files, the VM is imported and converted from OpenStack successfully (with one WARN, related to a different MAC address):
2018-05-24 12:03:31,028+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-29) [cc5931a2-1af5-4d65-b0b3-362588db9d3f] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log id: 7f178a5e
2018-05-24 12:48:33,722+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand] (default task-8) [103d56e1-7449-4853-ae50-48ee94d43d77] FINISH, GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM [instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM [instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM [instance-0001ecf2], VM [instance-00078d38]], log id: 3aa178c5
2018-05-24 12:48:47,291+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] START, GetVmsFullInfoFromExternalProviderVDSCommand(HostName = aws-ovhv-01, GetVmsFromExternalProviderParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d', url='qemu+tcp://root@172.19.0.12/system', username='null', originType='KVM', namesOfVms='[instance-00000673]'}), log id: 4c445109
2018-05-24 12:48:47,318+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand] (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] FINISH, GetVmsFullInfoFromExternalProviderVDSCommand, return: [VM [instance-00000673]], log id: 4c445109
2018-05-24 12:49:20,466+02 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (default task-41) [14edb003-b4a0-4355-b3de-da2b68774fe3] Lock Acquired to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
2018-05-24 12:49:20,586+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408) [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: MAC_ADDRESS_IS_EXTERNAL(925), VM instance-00000673 has MAC address(es) fa:16:3e:74:18:50, which is/are out of its MAC pool definitions.
2018-05-24 12:49:21,021+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653408) [14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm instance-00000673 to Data Center AVEUNL, Cluster AWSEUOPS
2018-05-24 12:49:28,816+02 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-653407) [] Lock freed to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
2018-05-24 12:49:28,911+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConvertVmVDSCommand] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] START, ConvertVmVDSCommand(HostName = aws-ovhv-01, ConvertVmVDSParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d', url='qemu+tcp://root@172.19.0.12/system', username='null', vmId='1f0b608f-7cfc-4b27-a876-b5d8073011a1', vmName='instance-00000673', storageDomainId='2607c265-248c-40ad-b020-f3756454839e', storagePoolId='5a5de92c-0120-0167-03cb-00000000038a', virtioIsoPath='null', compatVersion='null', Disk0='816ac00f-ba98-4827-b5c8-42a8ba496089'}), log id: 53408517
2018-05-24 12:49:29,010+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] EVENT_ID: IMPORTEXPORT_STARTING_CONVERT_VM(1,193), Starting to convert Vm instance-00000673
2018-05-24 12:52:57,982+02 INFO [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-16) [df1d5f72-eb17-46e4-9946-20ca9809b54c] Failed to Acquire Lock to object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME]', sharedLocks='[1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]'}'
2018-05-24 12:59:24,575+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-20) [2047673e] EVENT_ID: IMPORTEXPORT_IMPORT_VM(1,152), Vm instance-00000673 was imported successfully to Data Center AVEUNL, Cluster AWSEUOPS
Then trying to start the VM fails with the following messages:
2018-05-24 13:00:32,085+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653729) [] EVENT_ID: USER_STARTED_VM(153), VM instance-00000673 was started by admin@internal-authz (Host: aws-ovhv-06).
2018-05-24 13:00:33,417+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] VM '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) moved from 'WaitForLaunch' --> 'Down'
2018-05-24 13:00:33,436+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-9) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 is down with error. Exit message: Cannot access backing file '/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of storage file '/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba' (as uid:107, gid:107): No such file or directory.
2018-05-24 13:00:33,437+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-9) [] add VM '1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) to rerun treatment
2018-05-24 13:00:33,455+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM instance-00000673 on Host aws-ovhv-06.
2018-05-24 13:00:33,460+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM instance-00000673 (User: admin@internal-authz).
Checking on the Gluster volume, the directory and files exist and the permissions are in order:
[root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]
-rw-rw----. 1 vdsm kvm 14G May 24 12:59 8ecfcd5b-db67-4c23-9869-0e20d7553aba
-rw-rw----. 1 vdsm kvm 1.0M May 24 12:49 8ecfcd5b-db67-4c23-9869-0e20d7553aba.lease
-rw-r--r--. 1 vdsm kvm 310 May 24 12:49 8ecfcd5b-db67-4c23-9869-0e20d7553aba.meta
Then I checked the image info and noticed that the backing file entry points to a non-existent location, which does not and should not exist on the oVirt hosts:
[root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]# qemu-img info 8ecfcd5b-db67-4c23-9869-0e20d7553aba
image: 8ecfcd5b-db67-4c23-9869-0e20d7553aba
file format: qcow2
virtual size: 160G (171798691840 bytes)
disk size: 14G
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
Can somebody advise me how to fix or address this, as I need to import 200+ VMs from OpenStack to oVirt?
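For a single disk I could probably work around it by hand along these lines (a rough sketch only, untested here; the OpenStack compute hostname is a placeholder, the other paths are taken from the error above):

# Make the Nova base image available at the path the qcow2 expects,
# then flatten the chain so the disk no longer needs any backing file.
BASE=/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce
DISK=/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba
# 1) copy the base image from the OpenStack compute node (hostname is a placeholder)
mkdir -p /var/lib/nova/instances/_base
scp root@openstack-compute:"$BASE" "$BASE"
# 2) a safe rebase onto an empty backing file pulls the backing data into the image
qemu-img rebase -b "" "$DISK"
# 3) verify the backing-file reference is gone
qemu-img info "$DISK"

Doing that by hand for 200+ VMs is obviously not practical, so I would prefer a fix on the import/conversion side.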
Kindly awaiting your reply.
Marko Vrgotic
Change host names/IPs
by Davide Ferrari
Hello
Is there a clean way, possibly without downtime, to change the hostnames and IP addresses of all the hosts in a running oVirt cluster?
--
Davide Ferrari
Senior Systems Engineer
ovirt 4.3.3 SELINUX issue
by Strahil Nikolov
Hello All,
I want to warn you that selinux-policy & selinux-policy-targeted version '3.13.1-229.el7_6.12' cause an issue with my HostedEngine, where I got a "Login incorrect" screen of death.
I have also raised a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1710083
If you want to test your luck, increase the default grub timeout in "/etc/default/grub" to 15 and rebuild the grub menu via 'grub2-mkconfig -o /boot/grub2/grub.cfg'.
If the issue occurs for you, just append 'enforcing=0' to your kernel command line and the issue will be over. Of course, you can always roll back, either from a rescue DVD or from the running 'enforcing=0' system.
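For reference, the whole sequence on a stock CentOS 7 host looks roughly like this (a sketch; the grub.cfg path assumes BIOS boot, on UEFI it lives under /boot/efi/EFI/centos/):

# give yourself time to edit the boot entry if the host stops letting you log in
sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=15/' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
# if the "Login incorrect" loop hits, edit the kernel line in the grub menu
# and append the following to boot with SELinux permissive:
#   enforcing=0
# afterwards the packages can be rolled back from the running system, e.g.:
yum downgrade selinux-policy selinux-policy-targeted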
Best Regards,
Strahil Nikolov
oVirt and Windows NFS
by ron@prgstudios.com
Hi Everyone - Greetings from Detroit Michigan!
Has anyone had any luck with using oVirt to connect to a Windows Server running an NFS service?
I've made several attempts so far and am having problems getting the two to work together. I was hoping someone had some hints (besides the obvious "use Linux" bit).
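For reference, here is the kind of manual check I am trying to get working from one of the hosts (a sketch; the server name and export path are placeholders, and as I understand it the Windows export needs to map anonymous access to uid/gid 36, which oVirt expects to own the storage):

# mount the Windows export manually with NFSv3, roughly as oVirt would
mkdir -p /mnt/nfstest
mount -t nfs -o vers=3,soft winserver.example.com:/ovirt_export /mnt/nfstest
# the vdsm user (uid 36, gid 36) must be able to write there
sudo -u vdsm touch /mnt/nfstest/write_test && echo "write as vdsm OK"
umount /mnt/nfstest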
Thanks!
Ron
Is teamd supported on 4.3?
by Valentin Bajrami
Hello guys,
Next week, I'm planning to deploy ovirt-node 4.3 on a few hosts. I've been running bonds for the past few years, but I'd like to know if teaming (teamd) is also supported in this version.
My current package version(s):
OS Version: | RHEL - 7 - 6.1810.2.el7.centos
OS Description: | oVirt Node 4.3.1
Kernel Version: | 3.10.0 - 957.5.1.el7.x86_64
KVM Version: | 2.12.0 - 18.el7_6.3.1
LIBVIRT Version: | libvirt-4.5.0-10.el7_6.4
VDSM Version: | vdsm-4.30.9-1.el7
SPICE Version: | 0.14.0 - 6.el7_6.1
GlusterFS Version: | glusterfs-5.3-2.el7
CEPH Version: | librbd1-10.2.5-4.el7
Open vSwitch Version: | openvswitch-2.10.1-3.el7
Kernel Features: | PTI: 1, IBRS: 0, RETP: 1
VNC Encryption: | Disabled
Is anyone running teamd on this version?
Thanks in advance
--
Kind regards / Met vriendelijke groet,
Valentin Bajrami
Target Holding
botched 3.6 -> 4.0/1/2 upgrade, how to recover
by Axel.Thimm@01lgc.com
Hello,
an old 3.6 self-hosted single-system installation was half upgraded to 4.0, and I took on the task of continuing the upgrade.
I managed to set up a new engine and upgraded step by step until 4.2, at which point the host itself needs to be upgraded. During that time the host was "non-responsive", but I thought this had to do with the upgrade process.
Upon trying to upgrade the host, the setup broke to pieces. I tried to downgrade everything and restart the old engine, to no avail. I am now trying the way forward: starting the new engine, or creating a completely new one. I don't care much about the old engine's inherited information; all I need is to resurrect the VMs again. At the end of the day the setup is supposed to migrate to a three-system hyperconverged setup anyway.
How should I proceed to get to a working state that fires up the VMs? Is it safe to install a hosted engine from scratch and reattach the storage domain? Probably the VM configuration will be lost and I will have to puzzle the disks back together.
I still have the 3.6 engine's backup file, if that is of any help. Should I perhaps recreate a 4.0 engine with that file and try to continue from there?
I guess all the information on which VMs are attached to which images etc. is in the engine's DB, so I either have to get this info from the 3.6 backup or wire them up again by inspection.
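In case it clarifies what I have in mind, the restore path I am considering would look roughly like this (a sketch only; the backup file name is a placeholder, and I have not verified the exact version hops required):

# on a freshly installed engine (or hosted-engine VM) matching the backup's version,
# restore DB, config and permissions from the old backup, then run engine-setup
engine-backup --mode=restore --file=engine-3.6-backup.tar.bz2 \
              --log=engine-restore.log --provision-db --restore-permissions
engine-setup
# on 4.2, the hosted-engine deployment can also take the backup directly:
#   hosted-engine --deploy --restore-from-file=engine-3.6-backup.tar.bz2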
BTW, this is not pure oVirt but RHV with self-evaluation support; however, I believe that any pure oVirt solution will apply.
Many thanks in advance.
Re: Is teamd supported on 4.3?
by Strahil
I'm using teaming and I don't see issues.
I just cannot control the teaming device from within oVirt.
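For what it's worth, the team device can still be inspected and driven from the host shell itself, outside oVirt, along these lines (team0 is an assumed device name):

# show the runner, ports and link state of the team device
teamdctl team0 state
# dump the runtime configuration as JSON
teamdctl team0 config dump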
Best Regards,
Strahil Nikolov

On May 13, 2019 22:26, Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Mon, 13 May 2019 15:30:11 +0200
> Valentin Bajrami <valentin.bajrami(a)target-holding.nl> wrote:
>
> > Hello guys,
> >
> > Next week, I'm planning to deploy ovirt-node 4.3 on a few hosts. I've
> > been running bonds for the past years but I'd like to know if teaming
> > (teamd) is also supported with this version.
> >
>
> No, unfortunately not.
> May I ask why you want to use teaming instead of bonding?