Unlock procedure

* QUERY : -q
PGPASSWORD=XXXXX /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -q -t snapshot -u engine
296c010e-3c1d-4008-84b3-5cd39cff6aa1 | 525a4dda-dbbb-4872-a5f1-8ac2aed48392

* REMOVE
PGPASSWORD=XXXXX /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot -u engine 525a4dda-dbbb-4872-a5f1-8ac2aed48392

Ref:
http://lists.ovirt.org/pipermail/users/2015-November/035686.html
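Putting the two steps together, a minimal sketch of the whole flow as one
shell snippet (XXXXX stands for the redacted engine DB password, the UUID is
the one returned by the query above, and the default dbutils path of a
standard engine setup is assumed):

#!/bin/bash
# Minimal sketch: list locked snapshots, then unlock one by UUID.
# Assumes the default oVirt engine install path and the "engine" DB user;
# replace XXXXX with the real engine database password.
UNLOCK=/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh

# 1) Query: print the locked snapshot entries (the ID pair shown above).
PGPASSWORD=XXXXX "$UNLOCK" -q -t snapshot -u engine

# 2) Remove the lock on the snapshot found in step 1.
PGPASSWORD=XXXXX "$UNLOCK" -t snapshot -u engine 525a4dda-dbbb-4872-a5f1-8ac2aed48392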
On 03/05/2016 21:17, Ivo Rütsche wrote:

I just did a full clone of the whole VM; the clone is running fine and
has the latest data on all disks.

Regards, Ivo
On 03.05.2016 20:12, Markus Stockhausen wrote:
> Hi,
>
> we've seen a lot of trouble when deleting snapshots of multi-disk VMs.
> Endless-running snapshot deletions in particular are one of our culprits.
> More here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1319400
>
> Markus
> ________________________________________
> From: users-bounces@ovirt.org [users-bounces@ovirt.org] on behalf of Ivo Rütsche [lists.ovirt.org@direct.ruetsche.com]
> Sent: Tuesday, 3 May 2016 19:23
> To: users@ovirt.org
> Subject: [ovirt-users] Snapshot is "Locked" - Can't do anything with
> the VM
>
> Hi oVirt List
>
> We have the problem that two of our VMs have "Locked" snapshots and we
> can't move forward. Of the 10 VMs we moved from one store to another,
> we had problems deleting the snapshots on 5 of them. With a lot of
> luck, the stuck snapshot was always on the dedicated swap disk, so we
> could just remove the whole disk and recreate it.
>
> Because we are replacing all the filers, we have to move the VMs around.
> The move itself runs fine, but deleting the snapshots sometimes doesn't
> work.
>
> I found a lot written about our problem, but I can't find any solution
> that works in our environment.
>
> Our setup:
> - 3 Solaris filers with NFS exports serving the VM images to the hosts
> - 8 hosts (oVirt 3.6 / CentOS 7.2.1511)
> - The engine host is located outside the DC, connected via VPN
>
> At the moment, one of the "problem VMs" is in production, and the other
> should go into production in the next few days, so we are focusing on
> the 2nd VM because we can play around with it. The other one is started
> up, but I have no idea what happens when I shut it down...
>
> This is today's status of the VM that is not in production at the moment:
> - VM Status: Down
> - Snapshots OK: 2
> - Snapshots Locked: 1
> - In the snapshot tab, everything is grayed out (Create, Preview,
> Commit, Undo etc.)
> - No active tasks are displayed
>
> Error message when I try to start the VM:
> "Cannot run VM. The VM is performing an operation on a Snapshot. Please
> wait for the operation to finish, and try again."
>
> Something about the history of this VM:
> 1. We started the disk move from filer1 to filer3, 4 drives in total;
> everything was fine
> 2. Deleting snapshots; I think one or two deletions ran fine
> 3. Deleting the 2nd or 3rd snapshot ran for two days without ending
> 4. Rebooted the engine
> 5. No tasks left in the Tasks list
> 6. The status of the snapshot is "Locked", the disk of this snapshot
> becomes "Illegal" and red
> 7. Shut down the VM - no luck, I can't start it again --> "Cannot run
> VM. The VM is performing an operation on a Snapshot. Please wait for the
> operation to finish, and try again."
> 8. We moved the disk back from filer3 to filer1
> 9. The status of the snapshot is still "Locked", the disk of this snapshot
> becomes "OK" and green
> ...but I can't delete the snapshot or start the VM :(
>
> Today I copied the image files to a dedicated KVM host that is not
> involved in the oVirt environment, and I merged the images manually:
> qemu-img convert -p -f qcow2 804ffcc3-446f-4c56-ba99-ab90c7951a6f -O raw Merged1
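
A side note on that manual merge, as a sketch (the UUIDs are the ones from
this thread, and the backing-file paths recorded in the qcow2 headers may be
relative, so the copied files may need the original directory layout for the
chain to resolve):

# Confirm the leaf's backing chain resolves before flattening it.
# Run from the directory holding the copied volume files.
qemu-img info --backing-chain 804ffcc3-446f-4c56-ba99-ab90c7951a6f

# If the chain resolves cleanly, flatten the leaf and its backing files into raw:
qemu-img convert -p -f qcow2 804ffcc3-446f-4c56-ba99-ab90c7951a6f -O raw Merged1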
>
> I mounted the image and everything is fine; all the changes from the last
> two days are now in a single base image. I tried to remove the whole disk
> in oVirt, but I still get the error message that I have to wait. So for
> me, the next step is to create a new VM and migrate everything manually.
> This will work, but it can't be the solution for the other VM, which is
> in production and has the same problem.
>
>
>
> Some more information about it:
>
> *****************
> [root@lin-ovirt7 images]# vdsm-tool dump-volume-chains
> d6e6656d-7aaf-4c5b-bb93-cd6134ed9654
>
> Images volume chains (base volume first)
>
> image: f3cfefd2-d467-41d5-a054-5b01f81a21b4
>
> - a018521a-bd3a-4322-ac79-52d15e95639e
>   status: OK, voltype: INTERNAL, format: RAW, legality: LEGAL, type: SPARSE
>
> - 2e2f0b4a-b015-4b50-8d10-51726de460a9
>   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE
>
> - 804ffcc3-446f-4c56-ba99-ab90c7951a6f
>   status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE
> *****************
> [root@lin-ovirt7 images]# ls -altrh f3cfefd2-d467-41d5-a054-5b01f81a21b4/
> total 34G
> -rw-rw----+  1 vdsm kvm 1.0M  2. Mai 21:03 a018521a-bd3a-4322-ac79-52d15e95639e.lease
> -rw-rw----+  1 vdsm kvm 1.0M  2. Mai 21:03 2e2f0b4a-b015-4b50-8d10-51726de460a9.lease
> -rw-r--r--+  1 vdsm kvm  260  2. Mai 21:03 804ffcc3-446f-4c56-ba99-ab90c7951a6f.meta
> -rw-rw----+  1 vdsm kvm 1.0M  2. Mai 21:03 804ffcc3-446f-4c56-ba99-ab90c7951a6f.lease
> -rw-rw----+  1 vdsm kvm 300G  2. Mai 22:26 a018521a-bd3a-4322-ac79-52d15e95639e
> -rw-rw----+  1 vdsm kvm  77M  2. Mai 22:27 2e2f0b4a-b015-4b50-8d10-51726de460a9
> -rw-rw----+  1 vdsm kvm 3.5G  2. Mai 22:34 804ffcc3-446f-4c56-ba99-ab90c7951a6f
> -rw-r--r--+  1 vdsm kvm  364  2. Mai 22:34 a018521a-bd3a-4322-ac79-52d15e95639e.meta
> -rw-r--r--+  1 vdsm kvm  264  2. Mai 22:34 2e2f0b4a-b015-4b50-8d10-51726de460a9.meta
> drwxr-xr-x+  2 vdsm kvm   11  2. Mai 22:34 .
> drwxr-xr-x+ 50 vdsm kvm   50  3. Mai 14:30 ..
> [root@lin-ovirt7 images]#
> *****************
>
> I attach some lines from the engine.log; maybe someone can help. If you
> need more information, everything is here.
>
> Regards, Ivo
>
>
>
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr