[ovirt-users] Snapshot is "Locked" - Can't do anything with the VM
Ivo Rütsche
lists.ovirt.org at direct.ruetsche.com
Wed May 18 14:50:06 UTC 2016
Hi Nathanaël
I tried it today, and the snapshot was in the "OK" state after I
executed the script. I tried to remove the snapshot again, but it never
finishes. After I click "Delete", I have this situation:
Tasks(1):
- (Hourglass) Removing Snapshot Auto-generated for Live Storage
Migration of VM xxxx
-- (Completed) Validating
-- (Hourglass) Executing
The snapshot is in the "Locked" status, and it stays that way forever.
When I restart the engine, the status is still "Locked", but the task
no longer exists.
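To see whether the lock is only a stale flag in the engine database, one can look at the snapshot rows directly. This is a minimal sketch that just prints the query; the table and column names (`snapshots`, `status`, value `'LOCKED'`) are assumptions based on the standard oVirt engine schema, so verify them against your engine version before running anything:

```shell
# Build the query for snapshots the engine still marks as locked.
# Assumption: standard oVirt engine schema (snapshots.status = 'LOCKED').
SQL="SELECT snapshot_id, vm_id, status FROM snapshots WHERE status = 'LOCKED';"
echo "$SQL"
# To actually run it on the engine host (credentials are placeholders):
#   PGPASSWORD=... psql -U engine -d engine -t -c "$SQL"
```

The supported way to clear such a stale lock is the unlock_entity.sh procedure Nathanaël describes below, not a manual UPDATE.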
engine.log:
2016-05-18 16:33:07,532 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotCommand] (default task-23)
[53281b57] Lock Acquired to object
'EngineLock:{exclusiveLocks='[2acfe3fa-be69-46da-9add-c748b2bd6034=<VM,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-05-18 16:33:08,640 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotCommand] (default task-23)
[53281b57] Running command: RemoveSnapshotCommand internal: false.
Entities affected : ID: 2acfe3fa-be69-46da-9add-c748b2bd6034 Type:
VMAction group MANIPULATE_VM_SNAPSHOTS with role type USER
2016-05-18 16:33:08,984 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotCommand] (default task-23)
[53281b57] Lock freed to object
'EngineLock:{exclusiveLocks='[2acfe3fa-be69-46da-9add-c748b2bd6034=<VM,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-05-18 16:33:10,079 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [] Correlation ID: 53281b57, Job ID:
58b18e63-2b7f-480c-84ea-260508aa8798, Call Stack: null, Custom Event ID:
-1, Message: Snapshot 'Auto-generated for Live Storage Migration'
deletion for VM 'xxxx' was initiated by admin at internal.
...but nothing else.
On the node side, I can't find anything in the logs (or I'm looking in
the wrong place).
When the VM is not running, I can remove the snapshot without any
problems. The VM images are on an NFS export from a Solaris (ZFS) filer.
Regards, Ivo
On 04.05.2016 13:40, Nathanaël Blanchet wrote:
> Unlock procedure
>
> * QUERY : -q
>
> PGPASSWORD=XXXXX /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t snapshot -u engine
> 296c010e-3c1d-4008-84b3-5cd39cff6aa1 | 525a4dda-dbbb-4872-a5f1-8ac2aed48392
>
> * REMOVE
>
> PGPASSWORD=XXXXX /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot -u engine 525a4dda-dbbb-4872-a5f1-8ac2aed48392
>
> Ref :
> http://lists.ovirt.org/pipermail/users/2015-November/035686.html
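The two-step procedure above can be wrapped in a small helper. A sketch only: it prints the commands instead of running them, assumes the default dbutils path on the engine host, and the snapshot ID is the example from this thread:

```shell
#!/bin/sh
# Print the query-then-remove commands for a locked snapshot.
# Assumes the default oVirt dbutils path; SNAP_ID comes from the -q output.
DBUTILS=/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh
SNAP_ID=525a4dda-dbbb-4872-a5f1-8ac2aed48392
echo "PGPASSWORD=... $DBUTILS -q -t snapshot -u engine"
echo "PGPASSWORD=... $DBUTILS -t snapshot -u engine $SNAP_ID"
```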
>
> On 03/05/2016 21:17, Ivo Rütsche wrote:
>>
>> I just did a full clone of the whole VM; the clone is running fine
>> and has the latest data on all disks.
>>
>> Regards, Ivo
>>
>>
>> On 03.05.2016 20:12, Markus Stockhausen wrote:
>>> Hi,
>>>
>>> we've seen a lot of trouble when deleting snapshots of multi-disk VMs.
>>> Endlessly running snapshot deletions in particular are one of our culprits.
>>> More here: https://bugzilla.redhat.com/show_bug.cgi?id=1319400
>>>
>>> Markus
>>> ________________________________________
>>> From: users-bounces at ovirt.org [users-bounces at ovirt.org] on
>>> behalf of Ivo Rütsche [lists.ovirt.org at direct.ruetsche.com]
>>> Sent: Tuesday, 3 May 2016 19:23
>>> To: users at ovirt.org
>>> Subject: [ovirt-users] Snapshot is "Locked" - Can't do anything with
>>> the VM
>>>
>>> Hi oVirt List
>>>
>>> We have the problem that two of our VMs have "Locked" snapshots and we
>>> can't go forward. Of the 10 VMs we moved from one store to another, we
>>> had problems deleting the snapshots on 5 of them. But with a lot of
>>> luck, the locked snapshot was always on the dedicated swap disk, so we
>>> could just remove the whole disk and create it again as new.
>>>
>>> Because we are replacing all the filers, we have to move the VMs
>>> around. The move itself runs fine, but deleting the snapshots
>>> sometimes doesn't work.
>>>
>>> I found a lot about our problem, but I can't find any solution that
>>> works in our environment.
>>>
>>> Our setup:
>>> - 3 Solaris filers with NFS exports to the hosts for the VM images
>>> - 8 hosts (oVirt 3.6 / CentOS 7.2.1511)
>>> - The engine host is located outside of the DC, connected via VPN
>>>
>>> At the moment, one of the "problem VMs" is in production; the other
>>> should go into production in the next few days, so we are focusing on
>>> the 2nd VM, because we can play around with it. The other one is
>>> started up, but I have no idea what happens when I shut it down...
>>>
>>> This is the status today of the VM that is not in production:
>>> - VM Status: Down
>>> - Snapshots OK: 2
>>> - Snapshots Locked: 1
>>> - In the snapshot tab, everything is grayed out (Create, Preview,
>>> Commit, Undo etc.)
>>> - No active tasks are displayed
>>>
>>> Error message when I try to start the VM:
>>> "Cannot run VM. The VM is performing an operation on a Snapshot. Please
>>> wait for the operation to finish, and try again."
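One way to cross-check what the engine thinks, independent of the web UI, is the REST API. A hedged sketch that only prints the call: in oVirt 3.6 the API is served under /ovirt-engine/api, but the engine host, credentials and certificate handling below are placeholders for your setup:

```shell
# Print a REST call that lists the VM's snapshots.
# Placeholders throughout; VM_ID is the one from the engine.log in this thread.
ENGINE=https://engine.example.com
VM_ID=2acfe3fa-be69-46da-9add-c748b2bd6034
echo "curl -k -u admin@internal:PASSWORD $ENGINE/ovirt-engine/api/vms/$VM_ID/snapshots"
```

The snapshot elements in the response carry a status field, so a snapshot the UI shows as "Locked" should appear there as well.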
>>>
>>> Some history of this VM:
>>> 1. We started the disk move from filer1 to filer3, 4 drives in total;
>>> everything was fine
>>> 2. Deleting snapshots; I think one or two ran fine
>>> 3. Deleting the 2nd or 3rd snapshot ran for two days without an end
>>> 4. Rebooted the engine
>>> 5. No task in the Tasks list anymore
>>> 6. The status of the snapshot is "Locked", the disk of this snapshot
>>> becomes "Illegal" and red
>>> 7. Shut down the VM - no luck, I can't start it again --> "Cannot run
>>> VM. The VM is performing an operation on a Snapshot. Please wait for
>>> the operation to finish, and try again."
>>> 8. We moved the disk back from filer3 to filer1
>>> 9. The status of the snapshot is "Locked", the disk of this snapshot
>>> becomes "OK" and green
>>> ...but I can't delete the snapshot or start the VM :(
>>>
>>> Today I copied the image files to a dedicated KVM host that is not
>>> involved in the oVirt environment, and I merged the image chain
>>> manually:
>>> qemu-img convert -p -f qcow2 804ffcc3-446f-4c56-ba99-ab90c7951a6f -O raw Merged1
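Before flattening, it can help to confirm what the leaf image actually points at; `qemu-img info --backing-chain` walks the chain from the image headers. A sketch with the file names from this thread, printed rather than executed:

```shell
# Inspect the chain first, then flatten leaf + backing files into one
# raw image (file names taken from the volume chain shown below).
LEAF=804ffcc3-446f-4c56-ba99-ab90c7951a6f
echo "qemu-img info --backing-chain $LEAF"
echo "qemu-img convert -p -f qcow2 $LEAF -O raw Merged1"
```

`convert` reads through the whole backing chain, which is why the resulting raw image contains the base plus every snapshot layer.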
>>>
>>> I mounted the image and everything is fine; all the changes from the
>>> last two days are now in the base image. I tried to remove the whole
>>> disk in oVirt, but I still get the error message that I have to wait.
>>> So for me the next step is to create a new VM and migrate everything
>>> manually. This will work, but it can't be the solution for the other
>>> VM, which is in production and has the same problem.
>>>
>>>
>>>
>>> Some information about:
>>>
>>> *****************
>>> [root at lin-ovirt7 images]# vdsm-tool dump-volume-chains d6e6656d-7aaf-4c5b-bb93-cd6134ed9654
>>>
>>> Images volume chains (base volume first)
>>>
>>> image: f3cfefd2-d467-41d5-a054-5b01f81a21b4
>>>
>>> - a018521a-bd3a-4322-ac79-52d15e95639e
>>>   status: OK, voltype: INTERNAL, format: RAW, legality: LEGAL, type: SPARSE
>>> - 2e2f0b4a-b015-4b50-8d10-51726de460a9
>>>   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE
>>> - 804ffcc3-446f-4c56-ba99-ab90c7951a6f
>>>   status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE
>>> *****************
>>> [root at lin-ovirt7 images]# ls -altrh f3cfefd2-d467-41d5-a054-5b01f81a21b4/
>>> total 34G
>>> -rw-rw----+  1 vdsm kvm 1.0M 2. Mai 21:03 a018521a-bd3a-4322-ac79-52d15e95639e.lease
>>> -rw-rw----+  1 vdsm kvm 1.0M 2. Mai 21:03 2e2f0b4a-b015-4b50-8d10-51726de460a9.lease
>>> -rw-r--r--+  1 vdsm kvm  260 2. Mai 21:03 804ffcc3-446f-4c56-ba99-ab90c7951a6f.meta
>>> -rw-rw----+  1 vdsm kvm 1.0M 2. Mai 21:03 804ffcc3-446f-4c56-ba99-ab90c7951a6f.lease
>>> -rw-rw----+  1 vdsm kvm 300G 2. Mai 22:26 a018521a-bd3a-4322-ac79-52d15e95639e
>>> -rw-rw----+  1 vdsm kvm  77M 2. Mai 22:27 2e2f0b4a-b015-4b50-8d10-51726de460a9
>>> -rw-rw----+  1 vdsm kvm 3.5G 2. Mai 22:34 804ffcc3-446f-4c56-ba99-ab90c7951a6f
>>> -rw-r--r--+  1 vdsm kvm  364 2. Mai 22:34 a018521a-bd3a-4322-ac79-52d15e95639e.meta
>>> -rw-r--r--+  1 vdsm kvm  264 2. Mai 22:34 2e2f0b4a-b015-4b50-8d10-51726de460a9.meta
>>> drwxr-xr-x+ 2 vdsm kvm 11 2. Mai 22:34 .
>>> drwxr-xr-x+ 50 vdsm kvm 50 3. Mai 14:30 ..
>>> [root at lin-ovirt7 images]#
>>> *****************
>>>
>>> I attach some lines from the engine.log; maybe someone can help. If
>>> you need more information, everything is here.
>>>
>>> Regards, Ivo
>>>
>>>
>>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax 33 (0)4 67 54 84 14
> blanchet at abes.fr