Snapshot is "Locked" - Can't do anything with the VM

Hi oVirt List

We have a problem: two of our VMs have "Locked" snapshots and we can't go forward. Of the 10 VMs we moved from one store to another, we had trouble deleting the snapshots on 5 of them. With a lot of luck, the stuck snapshot was always on the dedicated swap disk, so we could simply remove the whole disk and recreate it as new.

Because we are replacing all the filers, we have to move the VMs around. The move itself runs fine, but deleting the snapshots afterwards sometimes fails. I found a lot written about this problem, but no solution that works in our environment.

Our setup:
- 3 Solaris filers with NFS exports for the hosts for the VM images
- 8 hosts (oVirt 3.6 / CentOS 7.2.1511)
- The engine host is located outside of the DC, connected via VPN

At the moment, one of the "problem VMs" is in production; the other should go into production in the next few days, so we are focusing on the second VM, because we can play around with it. The other one is started up, but I have no idea what will happen when I shut it down...

This is the status today of the VM that is not yet in production:
- VM status: Down
- Snapshots OK: 2
- Snapshots Locked: 1
- In the snapshot tab, everything is grayed out (Create, Preview, Commit, Undo etc.)
- No active tasks are displayed

Error message when I try to start the VM:
"Cannot run VM. The VM is performing an operation on a Snapshot. Please wait for the operation to finish, and try again."

Some history of this VM:
1. We started the disk move from filer1 to filer3, 4 drives in total; everything was fine
2. Deleting snapshots; I think one or two ran fine
3. Deleting the 2nd or 3rd snapshot ran for two days without finishing
4. Rebooted the engine
5. No tasks left in the Tasks list
6. The status of the snapshot is "Locked"; the disk of this snapshot became "Illegal" and red
7. Shut down the VM; no luck, I can't start it again --> "Cannot run VM. The VM is performing an operation on a Snapshot. Please wait for the operation to finish, and try again."
8. We moved the disk back from filer3 to filer1
9. The status of the snapshot is still "Locked", but the disk of this snapshot became "OK" and green
...but I can't delete the snapshot or start the VM :(

Today I copied the image files to a dedicated KVM host that is not involved in the oVirt environment and merged the image chain together manually:
qemu-img convert -p -f qcow2 804ffcc3-446f-4c56-ba99-ab90c7951a6f -O raw Merged1

I mounted the image and everything is fine; all the changes from the last two days are now in a single base image. I tried to remove the whole disk in oVirt, but I still get the error message that I have to wait. So for me the next step is to create a new VM and migrate everything manually. That will work, but it can't be the solution for the other VM, which is in production and has the same problem.

Some information:

*****************
[root@lin-ovirt7 images]# vdsm-tool dump-volume-chains d6e6656d-7aaf-4c5b-bb93-cd6134ed9654

Images volume chains (base volume first)

    image: f3cfefd2-d467-41d5-a054-5b01f81a21b4

        - a018521a-bd3a-4322-ac79-52d15e95639e
          status: OK, voltype: INTERNAL, format: RAW, legality: LEGAL, type: SPARSE

        - 2e2f0b4a-b015-4b50-8d10-51726de460a9
          status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE

        - 804ffcc3-446f-4c56-ba99-ab90c7951a6f
          status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE
*****************
[root@lin-ovirt7 images]# ls -altrh f3cfefd2-d467-41d5-a054-5b01f81a21b4/
total 34G
-rw-rw----+  1 vdsm kvm 1.0M  2. Mai 21:03 a018521a-bd3a-4322-ac79-52d15e95639e.lease
-rw-rw----+  1 vdsm kvm 1.0M  2. Mai 21:03 2e2f0b4a-b015-4b50-8d10-51726de460a9.lease
-rw-r--r--+  1 vdsm kvm  260  2. Mai 21:03 804ffcc3-446f-4c56-ba99-ab90c7951a6f.meta
-rw-rw----+  1 vdsm kvm 1.0M  2. Mai 21:03 804ffcc3-446f-4c56-ba99-ab90c7951a6f.lease
-rw-rw----+  1 vdsm kvm 300G  2. Mai 22:26 a018521a-bd3a-4322-ac79-52d15e95639e
-rw-rw----+  1 vdsm kvm  77M  2. Mai 22:27 2e2f0b4a-b015-4b50-8d10-51726de460a9
-rw-rw----+  1 vdsm kvm 3.5G  2. Mai 22:34 804ffcc3-446f-4c56-ba99-ab90c7951a6f
-rw-r--r--+  1 vdsm kvm  364  2. Mai 22:34 a018521a-bd3a-4322-ac79-52d15e95639e.meta
-rw-r--r--+  1 vdsm kvm  264  2. Mai 22:34 2e2f0b4a-b015-4b50-8d10-51726de460a9.meta
drwxr-xr-x+  2 vdsm kvm   11  2. Mai 22:34 .
drwxr-xr-x+ 50 vdsm kvm   50  3. Mai 14:30 ..
[root@lin-ovirt7 images]#
*****************

I've attached some lines from the engine.log; maybe someone can help. If you need more information, everything is here.

Regards, Ivo
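For anyone who wants to script against output like the above: the chain listing can be parsed mechanically to find the LEAF volume that a flattening `qemu-img convert` should start from. A rough sketch only; the regex and helper names are my own assumptions about the `dump-volume-chains` text format, not vdsm code:

```python
import re

# Sample copied from the `vdsm-tool dump-volume-chains` output above.
DUMP = """
image: f3cfefd2-d467-41d5-a054-5b01f81a21b4

    - a018521a-bd3a-4322-ac79-52d15e95639e
      status: OK, voltype: INTERNAL, format: RAW, legality: LEGAL, type: SPARSE

    - 2e2f0b4a-b015-4b50-8d10-51726de460a9
      status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE

    - 804ffcc3-446f-4c56-ba99-ab90c7951a6f
      status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE
"""

# Each entry is "- <uuid>" followed by a "status: ..." line.
VOL_RE = re.compile(
    r"-\s+([0-9a-f-]{36})\s+status:\s*(\w+),\s*voltype:\s*(\w+),\s*format:\s*(\w+)"
)

def parse_chain(dump: str):
    """Return the chain, base first, as (uuid, status, voltype, format) tuples."""
    return VOL_RE.findall(dump)

def flatten_command(dump: str) -> str:
    """Build the qemu-img call that collapses the chain into one raw image.
    qemu-img follows the qcow2 backing-file pointers itself, so converting
    only the LEAF volume is enough -- the same manual merge done above."""
    chain = parse_chain(dump)
    leaf = next(v for v in chain if v[2] == "LEAF")
    return f"qemu-img convert -p -f qcow2 {leaf[0]} -O raw Merged1"
```

Note this only builds the command string; actually running it should happen on a host that is not writing to the chain, as in the thread.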

Hi,

we've seen a lot of trouble when deleting snapshots of multi-disk VMs. Especially endless-running snapshot deletions are one of our culprits. More here: https://bugzilla.redhat.com/show_bug.cgi?id=1319400

Markus
________________________________________
From: users-bounces@ovirt.org [users-bounces@ovirt.org] on behalf of Ivo Rütsche [lists.ovirt.org@direct.ruetsche.com]
Sent: Tuesday, 3 May 2016, 19:23
To: users@ovirt.org
Subject: [ovirt-users] Snapshot is "Locked" - Can't do anything with the VM

Salü Markus

Yes, I've also seen some postings about this, but no solution, and all of them were a bit different from our situation. The VM stays responsive as long as we don't shut it down. I need to keep an eye on it, but I think if I do it like this:

delete all snapshots
move 1st disk -> delete snapshot
move 2nd disk -> delete snapshot
move 3rd disk -> delete snapshot

I won't get into trouble. Before, I moved all the disks first and then removed all the snapshots. Maybe I can provide more logs to help solve the problem?

Regards, Ivo

On 03.05.2016 20:12, Markus Stockhausen wrote:
Hi,
we've seen a lot of trouble when deleting snapshots of multi-disk VMs. Especially endless-running snapshot deletions are one of our culprits. More here: https://bugzilla.redhat.com/show_bug.cgi?id=1319400
Markus

I just did a full clone of the whole VM; the clone runs fine and has the latest data on all disks.

Regards, Ivo

On 03.05.2016 20:12, Markus Stockhausen wrote:
Hi,
we've seen a lot of trouble when deleting snapshots of multi-disk VMs. Especially endless-running snapshot deletions are one of our culprits. More here: https://bugzilla.redhat.com/show_bug.cgi?id=1319400
Markus

Unlock procedure:

* QUERY (-q):
PGPASSWORD=XXXXX /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t snapshot -u engine
296c010e-3c1d-4008-84b3-5cd39cff6aa1 | 525a4dda-dbbb-4872-a5f1-8ac2aed48392

* REMOVE:
PGPASSWORD=XXXXX /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot -u engine 525a4dda-dbbb-4872-a5f1-8ac2aed48392

Ref: http://lists.ovirt.org/pipermail/users/2015-November/035686.html

On 03/05/2016 21:17, Ivo Rütsche wrote:
I just did a full clone of the whole VM; the clone runs fine and has the latest data on all disks.

Regards, Ivo
On 03.05.2016 20:12, Markus Stockhausen wrote:
Hi,
we've seen a lot of trouble when deleting snapshots of multi-disk VMs. Especially endless-running snapshot deletions are one of our culprits. More here: https://bugzilla.redhat.com/show_bug.cgi?id=1319400
Markus
-- 
Nathanaël Blanchet
Supervision réseau
Pôle Infrastructures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr
Mai 21:03 <br> 804ffcc3-446f-4c56-ba99-ab90c7951a6f.lease <br> -rw-rw----+=C2=A0 1 vdsm kvm 300G=C2=A0 2. Mai 22:26 <br> a018521a-bd3a-4322-ac79-52d15e95639e <br> -rw-rw----+=C2=A0 1 vdsm kvm=C2=A0 77M=C2=A0 2. Mai 22:27 <br> 2e2f0b4a-b015-4b50-8d10-51726de460a9 <br> -rw-rw----+=C2=A0 1 vdsm kvm 3.5G=C2=A0 2. Mai 22:34 <br> 804ffcc3-446f-4c56-ba99-ab90c7951a6f <br> -rw-r--r--+=C2=A0 1 vdsm kvm=C2=A0 364=C2=A0 2. Mai 22:34 <br> a018521a-bd3a-4322-ac79-52d15e95639e.meta <br> -rw-r--r--+=C2=A0 1 vdsm kvm=C2=A0 264=C2=A0 2. Mai 22:34 <br> 2e2f0b4a-b015-4b50-8d10-51726de460a9.meta <br> drwxr-xr-x+=C2=A0 2 vdsm kvm=C2=A0=C2=A0 11=C2=A0 2. Mai 22:34 . <br> drwxr-xr-x+ 50 vdsm kvm=C2=A0=C2=A0 50=C2=A0 3. Mai 14:30 .. <br> [root@lin-ovirt7 images]# <br> ***************** <br> <br> I attach some line from the engine.log, maybe someone can help, if you <br> need more information everything is here. <br> <br> gruss ivo <br> <br> <br> <br> </blockquote> <br> _______________________________________________ <br> Users mailing list <br> <a class=3D"moz-txt-link-abbreviated" href=3D"mailto:Users@ovirt.or= g">Users@ovirt.org</a> <br> <a class=3D"moz-txt-link-freetext" href=3D"http://lists.ovirt.org/m= ailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a> <br> </blockquote> <br> <pre class=3D"moz-signature" cols=3D"72">--=20 Nathana=C3=ABl Blanchet Supervision r=C3=A9seau P=C3=B4le Infrastrutures Informatiques 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 =09 T=C3=A9l. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 <a class=3D"moz-txt-link-abbreviated" href=3D"mailto:blanchet@abes.fr">bl= anchet@abes.fr</a> </pre> </body> </html> --------------050805000902040007090400--

Hi Nathanaël

I tried it today, and the snapshot was in the state "OK" after I executed the script. I tried to remove the snapshot again, but it never does anything. After I click "Delete", I have this situation:

Tasks (1):
- (Hourglass) Removing Snapshot Auto-generated for Live Storage Migration of VM xxxx
-- (Completed) Validating
-- (Hourglass) Executing

The snapshot is in the status "Locked", and this status remains forever. When I restart the engine, the snapshot is still "Locked", but the task no longer exists.

engine.log:
2016-05-18 16:33:07,532 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (default task-23) [53281b57] Lock Acquired to object 'EngineLock:{exclusiveLocks='[2acfe3fa-be69-46da-9add-c748b2bd6034=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-05-18 16:33:08,640 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (default task-23) [53281b57] Running command: RemoveSnapshotCommand internal: false. Entities affected : ID: 2acfe3fa-be69-46da-9add-c748b2bd6034 Type: VMAction group MANIPULATE_VM_SNAPSHOTS with role type USER
2016-05-18 16:33:08,984 INFO [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (default task-23) [53281b57] Lock freed to object 'EngineLock:{exclusiveLocks='[2acfe3fa-be69-46da-9add-c748b2bd6034=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-05-18 16:33:10,079 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [] Correlation ID: 53281b57, Job ID: 58b18e63-2b7f-480c-84ea-260508aa8798, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'Auto-generated for Live Storage Migration' deletion for VM 'xxxx' was initiated by admin@internal.
....but nothing else....

On the node side, I can't find anything in the logs (or I am looking in the wrong place).

When the VM is not running, I can remove the snapshot without any problems. The VM images are on an NFS export from a Solaris (ZFS) filer.

gruss Ivo

On 04.05.2016 13:40, Nathanaël Blanchet wrote:
Unlock procedure
* QUERY : -q
PGPASSWORD=XXXXX /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t snapshot -u engine
296c010e-3c1d-4008-84b3-5cd39cff6aa1 | 525a4dda-dbbb-4872-a5f1-8ac2aed48392
* REMOVE
PGPASSWORD=XXXXX /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot -u engine 525a4dda-dbbb-4872-a5f1-8ac2aed48392
Ref : http://lists.ovirt.org/pipermail/users/2015-November/035686.html
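The two steps above can be stitched together into a small sketch. The unlock_entity.sh path, flags, and query output format are the ones quoted in this thread (oVirt 3.6); the snapshot_id_from_query helper is my own addition for splitting the query output line:

```shell
#!/bin/sh
# Sketch of the query-then-unlock procedure above (oVirt 3.6 paths).
# Adjust UNLOCK and PGPASSWORD for your own engine host.
UNLOCK=/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh

# Step 1 -- list locked snapshots; prints one "<vm_id> | <snapshot_id>" per line:
#   PGPASSWORD=XXXXX "$UNLOCK" -q -t snapshot -u engine

# Helper (my addition): pull the snapshot ID (second column) out of a query line.
snapshot_id_from_query() {
    printf '%s\n' "$1" | awk -F'|' '{ gsub(/ /, "", $2); print $2 }'
}

# Step 2 -- unlock the snapshot found by the query:
#   snap=$(snapshot_id_from_query "<line from step 1>")
#   PGPASSWORD=XXXXX "$UNLOCK" -t snapshot -u engine "$snap"

# Demo with the query line from this thread:
snapshot_id_from_query "296c010e-3c1d-4008-84b3-5cd39cff6aa1 | 525a4dda-dbbb-4872-a5f1-8ac2aed48392"
# -> 525a4dda-dbbb-4872-a5f1-8ac2aed48392
```

Only the second column (the snapshot ID) is passed to the unlock step; the first column is the VM ID.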
On 03/05/2016 21:17, Ivo Rütsche wrote:
I just did a full clone of the whole VM; the clone is running fine and has the latest data on all disks.
gruss ivo
On 03.05.2016 20:12, Markus Stockhausen wrote:
Hi,
we've seen a lot of trouble when deleting snapshots of multi-disk VMs. Endlessly running snapshot deletions in particular are one of our culprits. More here: https://bugzilla.redhat.com/show_bug.cgi?id=1319400
Markus
________________________________________
From: users-bounces@ovirt.org [users-bounces@ovirt.org] on behalf of Ivo Rütsche [lists.ovirt.org@direct.ruetsche.com]
Sent: Tuesday, 3 May 2016 19:23
To: users@ovirt.org
Subject: [ovirt-users] Snapshot is "Locked" - Can't do anything with the VM
Hi oVirt List
We have the problem that two of our VMs have "Locked" snapshots and we can't go forward. Of the 10 VMs we moved from one store to another, we had problems deleting the snapshots on 5 of them. But with a lot of luck, the snapshot was always on the dedicated swap disk, so we could just remove the whole disk and create it again as new.
Because we are replacing all the filers, we have to move the VMs around. The move itself runs fine, but deleting the snapshots sometimes doesn't work.
I found a lot about our problem, but I can't find any solution that works in our environment.
Our setup:
- 3 Solaris filers with NFS exports for the hosts for the VM images
- 8 hosts (oVirt 3.6 / CentOS 7.2.1511)
- The engine host is located outside of the DC, connected via VPN
At the moment, one of the "problem VMs" is in production; the other should go into production in the next days, so we focus on the 2nd VM, because we can play around with it. The other one is started up, but I have no idea what happens when I shut it down...
This is the status today of the VM which is not in production at the moment:
- VM Status: Down
- Snapshots OK: 2
- Snapshots Locked: 1
- In the snapshot tab, everything is grayed out (Create, Preview, Commit, Undo, etc.)
- No active tasks are displayed
Error message when I try to start the VM: "Cannot run VM. The VM is performing an operation on a Snapshot. Please wait for the operation to finish, and try again."
Something about the history of this VM:
1. We started the disk move from filer1 to filer3, 4 drives in total; everything was fine
2. Deleted snapshots; I think one or two ran fine
3. Deleting the 2nd or 3rd snapshot ran for two days without finishing
4. Rebooted the engine
5. No task anymore in the tasks list
6. The status of the snapshot is "Locked"; the disk of this snapshot became "Illegal" and red
7. Shut down the VM - no luck, I can't start it again --> "Cannot run VM. The VM is performing an operation on a Snapshot. Please wait for the operation to finish, and try again."
8. We moved the disk back from filer3 to filer1
9. The status of the snapshot is "Locked"; the disk of this snapshot became "OK" and green
...but I can't delete the snapshot or start the VM :(
Today I copied the image files to a dedicated KVM host that is not involved in the oVirt environment, and I merged the image manually:
qemu-img convert -p -f qcow2 804ffcc3-446f-4c56-ba99-ab90c7951a6f -O raw Merged1
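The merge can be made a little safer by inspecting the chain first: qemu-img convert follows backing files, so the leaf must sit next to the backing images it was copied with. A sketch, assuming the volume files were copied side by side as above and qemu-img is installed on the KVM host:

```shell
# Sketch: flatten a copied qcow2 chain outside oVirt.
# Assumes the leaf and its backing files are in the current directory.
LEAF=804ffcc3-446f-4c56-ba99-ab90c7951a6f

# Show the whole chain the leaf depends on (leaf first, base last):
qemu-img info --backing-chain "$LEAF"

# Merge leaf + all backing files into one standalone raw image, with progress:
qemu-img convert -p -f qcow2 "$LEAF" -O raw Merged1

# Sanity-check the result before using it:
qemu-img info Merged1
```

If `--backing-chain` reports a missing backing file, the copy is incomplete and the convert would produce a corrupt image.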
I mounted the image and everything is fine; all the changes from the last two days are now in a base image. I tried to remove the whole disk in oVirt, but I still get the error message that I have to wait. So for me the next step is to create a new VM and migrate everything manually. This will work, but it can't be the solution for the other VM, which is in production and has the same problem.
Some information about the environment:
*****************
[root@lin-ovirt7 images]# vdsm-tool dump-volume-chains d6e6656d-7aaf-4c5b-bb93-cd6134ed9654
Images volume chains (base volume first)
image: f3cfefd2-d467-41d5-a054-5b01f81a21b4
- a018521a-bd3a-4322-ac79-52d15e95639e status: OK, voltype: INTERNAL, format: RAW, legality: LEGAL, type: SPARSE
- 2e2f0b4a-b015-4b50-8d10-51726de460a9 status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE
- 804ffcc3-446f-4c56-ba99-ab90c7951a6f status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE
*****************
[root@lin-ovirt7 images]# ls -altrh f3cfefd2-d467-41d5-a054-5b01f81a21b4/
total 34G
-rw-rw----+  1 vdsm kvm 1.0M  2. Mai 21:03 a018521a-bd3a-4322-ac79-52d15e95639e.lease
-rw-rw----+  1 vdsm kvm 1.0M  2. Mai 21:03 2e2f0b4a-b015-4b50-8d10-51726de460a9.lease
-rw-r--r--+  1 vdsm kvm  260  2. Mai 21:03 804ffcc3-446f-4c56-ba99-ab90c7951a6f.meta
-rw-rw----+  1 vdsm kvm 1.0M  2. Mai 21:03 804ffcc3-446f-4c56-ba99-ab90c7951a6f.lease
-rw-rw----+  1 vdsm kvm 300G  2. Mai 22:26 a018521a-bd3a-4322-ac79-52d15e95639e
-rw-rw----+  1 vdsm kvm  77M  2. Mai 22:27 2e2f0b4a-b015-4b50-8d10-51726de460a9
-rw-rw----+  1 vdsm kvm 3.5G  2. Mai 22:34 804ffcc3-446f-4c56-ba99-ab90c7951a6f
-rw-r--r--+  1 vdsm kvm  364  2. Mai 22:34 a018521a-bd3a-4322-ac79-52d15e95639e.meta
-rw-r--r--+  1 vdsm kvm  264  2. Mai 22:34 2e2f0b4a-b015-4b50-8d10-51726de460a9.meta
drwxr-xr-x+  2 vdsm kvm   11  2. Mai 22:34 .
drwxr-xr-x+ 50 vdsm kvm   50  3. Mai 14:30 ..
[root@lin-ovirt7 images]#
*****************
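The legality the engine shows ("Illegal"/"OK") is also recorded on storage in the per-volume .meta files listed above. A sketch of reading it directly, assuming VDSM's usual KEY=VALUE metadata format (the LEGALITY key is an assumption about that format, not something quoted in this thread):

```shell
# Sketch: read the legality recorded in a VDSM volume .meta file.
# Assumes the file contains KEY=VALUE lines with a LEGALITY key.
volume_legality() {
    # $1 = path to a <volume-uuid>.meta file
    awk -F'=' '$1 == "LEGALITY" { print $2 }' "$1"
}

# Example over every volume of the image directory shown above:
#   for m in f3cfefd2-d467-41d5-a054-5b01f81a21b4/*.meta; do
#       printf '%s: %s\n' "$m" "$(volume_legality "$m")"
#   done
```

This only reports what the metadata says; changing it by hand is a last resort and best left to the unlock_entity.sh route described earlier in the thread.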
I attached some lines from the engine.log; maybe someone can help. If you need more information, everything is here.
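One way to dig through engine.log: the lines quoted earlier in this thread all carry the correlation ID 53281b57, once as "[53281b57]" and once as "Correlation ID: 53281b57", so filtering on it pulls out every line of one flow. A small grep helper (a sketch; the default engine log path is assumed):

```shell
# Sketch: follow one operation through engine.log via its correlation ID.
# Engine log lines carry the ID either as "[<id>]" or "Correlation ID: <id>".
corr_grep() {
    # $1 = correlation ID, $2 = log file
    grep -E "\[$1\]|Correlation ID: $1" "$2"
}

# On the engine host (default log path assumed):
#   corr_grep 53281b57 /var/log/ovirt-engine/engine.log
```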
gruss ivo
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Nathanaël Blanchet
Supervision réseau Pôle Infrastrutures Informatiques 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 Tél. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet@abes.fr

On 18/05/2016 16:50, Ivo Rütsche wrote:
Error message when I try to start the VM: "Cannot run VM. The VM is performing an operation on a Snapshot. Please wait for the operation to finish, and try again."
This often happens to me too. The only way I found to resolve it is to restart the engine; you may try this, it has no impact on the hosts and VMs.
-- Nathanaël Blanchet Supervision réseau Pôle Infrastrutures Informatiques 227 avenue Professeur-Jean-Louis-Viala 34193 MONTPELLIER CEDEX 5 Tél. 33 (0)4 67 54 84 55 Fax 33 (0)4 67 54 84 14 blanchet@abes.fr
participants (3):
- Ivo Rütsche
- Markus Stockhausen
- Nathanaël Blanchet