[ovirt-users] This VM is not managed by the engine

Nir Soffer nsoffer at redhat.com
Mon Oct 19 06:34:13 UTC 2015


On Sun, Oct 18, 2015 at 10:14 PM, Nir Soffer <nsoffer at redhat.com> wrote:
> On Sun, Oct 18, 2015 at 7:00 PM, Jaret Garcia <jaret.garcia at packet.mx> wrote:
>> Hi everyone,
>>
>> A few weeks ago we had a problem with the SPM and all hosts in the cluster got
>> stuck in contending. We restarted the hosts one by one and the issue was
>> solved. However, we didn't notice that one VM, even though it never stopped
>> running, changed its state somehow, and after that no changes could be made to
>> the VM. When we tried to add more RAM we saw the message "Cannot run VM. This
>> VM is not managed by the engine",
>
> I would open a bug about this, and attach engine and vdsm logs showing the
> timeframe of this event.
>
>> so we SSHed into the VM and rebooted it, and
>> once we did that the VM never came back.
>
> Sure, if the engine does not know about this vm, it will never restart it. The
> libvirt vm is not persistent; the engine keeps the vm configuration in its
> database, and is responsible for keeping the vm up on some host.
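
You can check this on the host that was running it - if the vm is gone from
libvirt, it will not show up in a read-only virsh listing (just a quick check,
not required for the recovery below):

# virsh -r list --all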
>
>> We still see the VM in the engine
>> administration portal, but it does not show any information regarding network,
>> disks, and so on.
>
> Please attach an engine db dump to the bug, so we can understand what "does not
> show any information" means.
>
>> We created another VM to replace the services of the one we
>> lost, but we need to recover the files from the lost VM. We believe the
>> image should still be in the storage, but we haven't found a way to recover it.
>> Some time ago we came across a similar situation, but at that time it was an
>> NFS data domain, so it was easier for us to go into the storage server and
>> search for the VM ID, scp the image and mount it somewhere else. This time
>> the storage is iSCSI, and even though we found that the hosts mount the target
>> under /rhev/data-center/mnt/blockSD/ we only see the active images for the
>> cluster there. Can anyone point us to how we can recover the lost image? We
>> know the VM ID and the disk ID from oVirt.
>
> To recover the images, you need the image id. If you don't see it in the engine
> ui, you can try searching the engine database.
> (Adding Maor to help with finding the image id in the database)
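
If you want to try the database yourself in the meantime, something like this may
work (a sketch only - I'm assuming the default "engine" database and the
vm_static/vm_device/images tables, which may differ in your version; the vm name
is a placeholder):

# su - postgres -c 'psql engine'
engine=# select i.image_group_id as image_id, i.image_guid as volume_id
           from images i
           join vm_device d on d.device_id = i.image_group_id
           join vm_static v on v.vm_guid = d.vm_id
          where v.vm_name = 'name-of-the-lost-vm';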
>
> The pool id can be found on the host in /rhev/data-center - there should be one
> directory there, and its name is the pool id. If there is more than one, use the
> one which is not empty.
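
For example, on a host attached to a single pool it looks something like this
(the uuid directory is the pool id; exact entries will differ on your host):

# ls /rhev/data-center/
591475db-6fa9-455d-9c05-7f6e30fb06d5  mnt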
>
> # Assuming these values (taken from my test setup)
>
> pool_id = 591475db-6fa9-455d-9c05-7f6e30fb06d5
> image_id = 5b10b1b9-ee82-46ee-9f3d-3659d37e4851
>
> Once you have found the image id, do:
>
> # Update lvm metadata daemon
>
> pvscan --cache
>
> # Find the volumes
>
> # lvs -o lv_name,vg_name,tags | awk '/IU_<image_id>/ {print $1,$2}'
> 2782e797-e49a-4364-99d7-d7544a42e939 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318
> 4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318
>
> Now we know that:
> domain_id = 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318
>
> # Activate the lvs
>
> lvchange -ay 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/2782e797-e49a-4364-99d7-d7544a42e939
> lvchange -ay 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
>
> # Find the top volume by running qemu-img info on all the lvs
>
> # qemu-img info /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/2782e797-e49a-4364-99d7-d7544a42e939
> image: /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/2782e797-e49a-4364-99d7-d7544a42e939
> file format: qcow2
> virtual size: 8.0G (8589934592 bytes)
> disk size: 0
> cluster_size: 65536
> Format specific information:
>     compat: 0.10
>
> # qemu-img info /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
> image: /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
> file format: qcow2
> virtual size: 8.0G (8589934592 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: ../5b10b1b9-ee82-46ee-9f3d-3659d37e4851/2782e797-e49a-4364-99d7-d7544a42e939
> (actual path: /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/../5b10b1b9-ee82-46ee-9f3d-3659d37e4851/2782e797-e49a-4364-99d7-d7544a42e939)
> backing file format: qcow2
> Format specific information:
>     compat: 0.10
>
> The top volume is the one with the largest number of items in the
> "backing file" value.

Correction: using the backing file, you can see the parent of each volume.

The volume without a backing file is the base volume. The top volume is the
volume which is not the parent of any other volume.

Here is an example with 3 volumes:

# qemu-img info /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/2782e797-e49a-4364-99d7-d7544a42e939
image: /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/2782e797-e49a-4364-99d7-d7544a42e939
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
    compat: 0.10

This is the base volume.

# qemu-img info /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
image: /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 0
cluster_size: 65536
backing file: ../5b10b1b9-ee82-46ee-9f3d-3659d37e4851/2782e797-e49a-4364-99d7-d7544a42e939
(actual path: /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/../5b10b1b9-ee82-46ee-9f3d-3659d37e4851/2782e797-e49a-4364-99d7-d7544a42e939)
backing file format: qcow2
Format specific information:
    compat: 0.10

This volume's parent is 2782e797-e49a-4364-99d7-d7544a42e939 (the base volume).

# qemu-img info /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/9de6a73e-49a6-45e6-b1aa-bc85e630bf39
image: /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/9de6a73e-49a6-45e6-b1aa-bc85e630bf39
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 0
cluster_size: 65536
backing file: ../5b10b1b9-ee82-46ee-9f3d-3659d37e4851/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
(actual path: /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/../5b10b1b9-ee82-46ee-9f3d-3659d37e4851/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12)
backing file format: qcow2
Format specific information:
    compat: 0.10

This volume's parent is 4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12 (the volume above).

So this is the top volume, which can be used to copy the volume data
with qemu-img convert.
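
If an image has many volumes, a small loop can print the backing file of each one
so you can assemble the chain quickly (a sketch, assuming the lvs were activated
as shown above; the volume that prints no "backing file" line is the base):

for lv in $(lvs --noheadings -o lv_name,tags 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318 \
            | awk '/IU_5b10b1b9-ee82-46ee-9f3d-3659d37e4851/ {print $1}'); do
    echo "== $lv"
    qemu-img info /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/$lv | grep 'backing file:'
done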

Another way to find this info is the volume metadata in
/dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/metadata, but it may be stale; the
canonical source of information is the qcow2 header reported by qemu-img.
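
For example, something like this should print the metadata block of each volume
in the image (a sketch - the metadata lv contains plain text, -a forces grep to
treat it as text, and the PUUID line is the parent volume; exact keys may vary
between versions):

grep -a -B2 -A10 'IMAGE=5b10b1b9-ee82-46ee-9f3d-3659d37e4851' \
    /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/metadata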

An easier way to get the information is using getVolumesList - but this uses vdsm
metadata, which may be stale (in a disaster recovery context).

# vdsClient -s 0 getVolumesList 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318 \
    591475db-6fa9-455d-9c05-7f6e30fb06d5 5b10b1b9-ee82-46ee-9f3d-3659d37e4851
9de6a73e-49a6-45e6-b1aa-bc85e630bf39 : Parent is 4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
2782e797-e49a-4364-99d7-d7544a42e939 : {"DiskAlias":"test_Disk1","DiskDescription":""}.
4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12 : Parent is 2782e797-e49a-4364-99d7-d7544a42e939

In oVirt 3.6, there is an even easier way, using vdsm-tool dump-volume-chains.
This is based on getVolumesList, so if the vdsm metadata is broken, you should
use the lower level qemu-img info.

# vdsm-tool dump-volume-chains 6c77adb1-74fc-4fa9-a0ac-3b5a4b789318

Images volume chains (base volume first)

   image:    55b41fbd-5e22-4f9d-b72f-aa7af9d7ccb8

             - 78f22775-916c-4e72-8c5b-9917734b26da
               status: OK, voltype: SHARED, format: COW, legality: LEGAL, type: SPARSE


   image:    5b10b1b9-ee82-46ee-9f3d-3659d37e4851

             - 2782e797-e49a-4364-99d7-d7544a42e939
               status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE

             - 4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
               status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE

             - 9de6a73e-49a6-45e6-b1aa-bc85e630bf39
               status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE


   image:    7ea9086c-c82d-405b-aabc-2d66f2106f6d

             - 3f400d56-4412-439c-af43-f379bb5160af
               status: OK, voltype: LEAF, format: RAW, legality: LEGAL, type: PREALLOCATED


   image:    8cd92346-555f-4a87-8415-ed681dc7a0a7

             - cc56eca9-26c5-4428-8797-b3e7fa7a0c89
               status: OK, voltype: LEAF, format: RAW, legality: LEGAL, type: PREALLOCATED

> In this case, it is
> /dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
>
> So:
> volume_id = 4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
>
> # Prepare the image to create the links in /rhev/data-center
>
> In a perfect world, we could use the path to the lv (/dev/vgname/lvname), but the
> relative backing file paths used by qemu are based on the directories and symbolic
> links created inside /rhev/data-center. The easiest way to create them is by
> preparing the image.
>
> # vdsClient -s 0 prepareImage 591475db-6fa9-455d-9c05-7f6e30fb06d5 \
>     6c77adb1-74fc-4fa9-a0ac-3b5a4b789318 \
>     5b10b1b9-ee82-46ee-9f3d-3659d37e4851 \
>     4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
> {'domainID': '6c77adb1-74fc-4fa9-a0ac-3b5a4b789318',
>  'imageID': '5b10b1b9-ee82-46ee-9f3d-3659d37e4851',
>  'leaseOffset': 113246208,
>  'leasePath': '/dev/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/leases',
>  'path': '/rhev/data-center/mnt/blockSD/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/images/5b10b1b9-ee82-46ee-9f3d-3659d37e4851/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12',
>  'volType': 'path',
>  'volumeID': '4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12'}
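
At this point the links should exist under /rhev/data-center; a quick sanity check
using the path returned above:

# ls -l /rhev/data-center/mnt/blockSD/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/images/5b10b1b9-ee82-46ee-9f3d-3659d37e4851/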
>
> # Copy the volume data to some file system
>
> I'm using raw; you may prefer qcow2 (see the example after the command below).
>
> cd <some mountpoint>
> qemu-img convert -p -O raw \
>     /rhev/data-center/mnt/blockSD/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/images/5b10b1b9-ee82-46ee-9f3d-3659d37e4851/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12 \
>     saved-disk.img
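
If you prefer a qcow2 copy, the same command works with -O qcow2 (add -c if you
want the copy compressed); only the output format changes:

qemu-img convert -p -O qcow2 \
    /rhev/data-center/mnt/blockSD/6c77adb1-74fc-4fa9-a0ac-3b5a4b789318/images/5b10b1b9-ee82-46ee-9f3d-3659d37e4851/4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12 \
    saved-disk.qcow2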
>
> # Teardown the image
>
> # vdsClient -s 0 teardownImage 591475db-6fa9-455d-9c05-7f6e30fb06d5 \
>     6c77adb1-74fc-4fa9-a0ac-3b5a4b789318 \
>     5b10b1b9-ee82-46ee-9f3d-3659d37e4851 \
>     4bc34865-64b8-4a6c-b2d0-0aaab3f2aa12
> OK
>
> # Check the saved image
>
> # qemu-img info saved-disk.img
> image: saved-disk.img
> file format: raw
> virtual size: 8.0G (8589934592 bytes)
> disk size: 1.2G
>
> You can mount this image and copy files, or copy the data to an empty
> disk you created for the new vm.
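
For example, to mount the saved image read-only and copy files out (a sketch,
assuming the guest used a plain partition and not LVM; device and partition names
may differ on your system):

losetup -f --show -P saved-disk.img      # prints the loop device, e.g. /dev/loop0
mkdir -p /mnt/recovered
mount -o ro /dev/loop0p1 /mnt/recovered
cp -a /mnt/recovered/path/you/need /some/safe/place/    # placeholder paths
umount /mnt/recovered
losetup -d /dev/loop0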
>
> Nir
>
>>
>> Our Setup
>> oVirt version: 3.5.4, hosted engine
>> 4 Supermicro hosts running CentOS 7.1
>> 1 iSCSI storage server running Open-E DSS v7 Lite
>>
>> Thanks in advance
>>
>> Jaret
>>
>>


