Hi
We have the same issue as you, and we are also using vProtect.
I have no solution, but I'm very interested in how to address this.
For some VMs we have managed to remove the illegal snapshots after moving the
VMs' disks to other storage, but we have 3-4 VMs that refuse to drop the illegal snapshot.
For us, this issue has escalated over the last couple of months.
Is it only us who see these issues, or do people simply not back up their VMs? It feels
like more people should be running into this.
//Magnus
________________________________
From: Joseph Goldman <joseph(a)goldman.id.au>
Sent: 08 December 2020 10:57
To: users(a)ovirt.org <users(a)ovirt.org>
Subject: [ovirt-users] Another illegal disk snapshot problem!
Hi List,
oVirt 4.3
I know there have been threads about this before - but I am unable to find the exact
scenario I am facing.
I have a VM with 3 snapshots - the Active one and 2 dated ones (technically created by
vProtect).
Trying to take a fresh snapshot in the GUI failed and marked one of the old snapshot
disks as 'illegal' - then the other one followed suit.
I tried 'unlocking' the entities using the unlock_entity.sh tool, but any subsequent
action reverts them back to illegal.
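For reference, this is roughly the invocation (default engine-setup path assumed; check -h for the exact options on your version):

  # list what the engine DB thinks is locked
  /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -q

  # then unlock the affected disk by its image group ID
  /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk 23710238-07c2-46f3-96c0-9061fe1c3e0d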
Following previous advice, I can see that on the VDSM side all volumes show as LEGAL:
image: 23710238-07c2-46f3-96c0-9061fe1c3e0d
- c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1
status: OK, voltype: INTERNAL, format: RAW, legality: LEGAL, type: SPARSE,
capacity: 107374182400, truesize: 18402942976
- a6d4533b-b0b0-475d-a436-26ce99a38d94
status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE,
capacity: 107374182400, truesize: 21521768448
- 4b6f7ca1-b70d-4893-b473-d8d30138bb6b
status: OK, voltype: LEAF, format: COW, legality: LEGAL, type: SPARSE,
capacity: 107374182400, truesize: 12617457664
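(For reference, that listing came from the SPM host with something along these lines - <SD_UUID> being the storage domain that holds this disk:)

  # dump the volume chains for the whole storage domain, then pick out this image
  vdsm-tool dump-volume-chains <SD_UUID> | grep -A 20 23710238-07c2-46f3-96c0-9061fe1c3e0d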
The two entries the engine marks as 'illegal' are c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1 and
a6d4533b-b0b0-475d-a436-26ce99a38d94 - both LEGAL according to VDSM.
Looking in the DB shows:
select image_guid, parentid, imagestatus, vm_snapshot_id, volume_type, volume_format, active
from images
where image_group_id = '23710238-07c2-46f3-96c0-9061fe1c3e0d';
              image_guid              |               parentid               | imagestatus |            vm_snapshot_id            | volume_type | volume_format | active
--------------------------------------+--------------------------------------+-------------+--------------------------------------+-------------+---------------+--------
 4b6f7ca1-b70d-4893-b473-d8d30138bb6b | a6d4533b-b0b0-475d-a436-26ce99a38d94 |           1 | d5044ae5-dc48-4700-9e46-d61e676c73fc |           2 |             4 | t
 c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1 | 00000000-0000-0000-0000-000000000000 |           4 | 57337968-28da-4b03-ac40-134a347d8c11 |           2 |             5 | f
 a6d4533b-b0b0-475d-a436-26ce99a38d94 | c3dadf14-bb4e-45a7-8bee-b9a01fe29ae1 |           4 | d2d82724-9fe7-452c-a114-f8d70b555520 |           2 |             4 | f
So from here, previous advice has been to do things such as delete the snapshot/disk,
but that was for cases where the volumes actually showed an illegal status on the VDSM
side as well. Here the two rows with imagestatus 4 (ILLEGAL) are the snapshots the engine
complains about, while the active leaf is 1 (OK). I also notice the active image is not
the one whose parentid is all zeros, so I'm not sure how to go about deleting the other
snapshots and disks cleanly and/or safely.
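Is the usual fix in this state simply to force the engine's view back in line with VDSM by resetting imagestatus in the DB? Something like the below is what I have in mind - completely untested, and I'm assuming plain psql access as the postgres user works on this install:

  # stop the engine and take a backup of it + its DB first
  systemctl stop ovirt-engine
  engine-backup --mode=backup --file=engine-pre-fix.backup --log=engine-pre-fix.log

  # reset the two ILLEGAL (4) volumes of this disk back to OK (1)
  sudo -u postgres psql engine -c "UPDATE images SET imagestatus = 1 WHERE image_group_id = '23710238-07c2-46f3-96c0-9061fe1c3e0d' AND imagestatus = 4;"

  systemctl start ovirt-engine

Is that safe given the snapshot chain above, or is there a cleaner way?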
Deleting, or any other task in the GUI, fails 100% of the time, and it has got to the point
where, if I shut down the (critical) VM, it will not come back up because of these statuses.
On top of this, what is a good way to take a clean, manual backup of the disk currently in
use before I start playing with this, in case worst comes to worst and I have to rebuild it
as a new server? (At this point I can't trust my vProtect backups.)
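My current thinking is to collapse the active chain into a standalone image with qemu-img, something like the below. Paths assume a file-based storage domain (block domains keep the volumes as LVs under /dev/<sd_uuid>/ instead), and I realise the VM really wants to be shut down or at least quiesced for the copy to be consistent:

  # run from inside the image directory so the qcow2 backing chain resolves
  cd /rhev/data-center/mnt/<storage_path>/<sd_uuid>/images/23710238-07c2-46f3-96c0-9061fe1c3e0d

  # qemu-img follows the backing files, so converting the current leaf
  # gives one standalone qcow2 (destination path is just an example)
  qemu-img convert -p -O qcow2 4b6f7ca1-b70d-4893-b473-d8d30138bb6b /backup/vm-disk-manual.qcow2

Would that be sane, or is there a better way while the engine is in this state?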
Any help appreciated.
Thanks,
Joe