<div dir="ltr">Hello, <div>I have a problem deleting one snapshot.</div><div>This is the output of the vm-disk-info.py script:</div><div><br></div><div>Warning: volume 023110fa-7d24-46ec-ada8-d617d7c2adaf is in chain but illegal</div><div> Volumes:</div><div> a09bfb5d-3922-406d-b4e0-daafad96ffec <br></div><div><br></div><div>After running the md5sum command, I realized that the volume that changes is the base one:<br></div><div>a09bfb5d-3922-406d-b4e0-daafad96ffec<br></div><div><br></div><div>The volume 023110fa-7d24-46ec-ada8-d617d7c2adaf does not change.</div><div><br></div><div>Thanks.</div><div><br></div><div><br></div><div><br></div><div class="gmail_extra"><div class="gmail_quote">2016-03-18 16:50 GMT-03:00 Greg Padgett <span dir="ltr"><<a href="mailto:gpadgett@redhat.com" target="_blank">gpadgett@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span class="">On 03/18/2016 03:10 PM, Nir Soffer wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
On Fri, Mar 18, 2016 at 7:55 PM, Nathanaël Blanchet <<a href="mailto:blanchet@abes.fr" target="_blank">blanchet@abes.fr</a>> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Hello,<br>
<br>
I can create a snapshot when none exists, but I'm not able to remove it<br>
afterwards.<br>
</blockquote>
<br>
Did you try to remove it while the VM was running?<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
This affects many of my VMs, and once they are stopped, they can't boot<br>
anymore because of the illegal status of their disks. This leaves me in a<br>
critical situation:<br>
<br>
VM fedora23 is down with error. Exit message: Unable to get volume size for<br>
domain 5ef8572c-0ab5-4491-994a-e4c30230a525 volume<br>
e5969faa-97ea-41df-809b-cc62161ab1bc<br>
<br>
Since I didn't initiate any live merge, am I affected by this bug:<br>
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1306741" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1306741</a>?<br>
I'm running 3.6.2; will upgrading to 3.6.3 solve this issue?<br>
</blockquote>
<br>
If you tried to remove a snapshot while the VM was running, you did<br>
initiate a live merge, and this bug may affect you.<br>
<br>
Adding Greg, who can provide more info about this.<br>
<br>
</blockquote>
<br></span>
Hi Nathanaël,<br>
<br>
From the logs you pasted below, showing RemoveSnapshotSingleDiskCommand (not ..SingleDiskLiveCommand), it looks like a non-live snapshot removal. In that case, bug 1306741 would not affect you.<br>
<br>
To dig deeper, we'd need to know the root cause of why the image could not be deleted. You should be able to find some clues in your engine log above the snippet you pasted below, or perhaps something in the vdsm log will reveal the reason.<br>
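To illustrate what to look for, here is a minimal sketch (not an official tool; the log paths in the comments are the usual defaults, and the sample input is the log line pasted below in this thread) that pulls the failed image/snapshot UUIDs out of the engine log so they can be cross-checked against the vdsm log:

```python
import re

# Sketch: scan engine log text (normally /var/log/ovirt-engine/engine.log)
# for failed snapshot-removal commands and extract the image and snapshot
# UUIDs. Grep for the image UUID in /var/log/vdsm/vdsm.log on the host that
# ran the operation to find the underlying failure.
SAMPLE = """\
2016-03-18 18:26:57,663 ERROR [org.ovirt.engine.core.bll.RemoveSnapshotCommand] (org.ovirt.thread.pool-8-thread-39) [a1e222d] Could not delete image '46e9ecc8-e168-4f4d-926c-e769f5df1f2c' from snapshot '88fcf167-4302-405e-825f-ad7e0e9f6564'
"""

PATTERN = re.compile(
    r"Could not delete image '(?P<image>[0-9a-f-]+)' "
    r"from snapshot '(?P<snap>[0-9a-f-]+)'"
)

def failed_deletions(log_text):
    """Return (image_uuid, snapshot_uuid) pairs for failed image deletions."""
    return [(m.group("image"), m.group("snap"))
            for m in PATTERN.finditer(log_text)]
```

Running `failed_deletions(SAMPLE)` on the snippet below yields the image/snapshot pair from the error above.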
<br>
Thanks,<br>
Greg<div class=""><div class="h5"><br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
2016-03-18 18:26:57,652 ERROR<br>
[org.ovirt.engine.core.bll.RemoveSnapshotCommand]<br>
(org.ovirt.thread.pool-8-thread-39) [a1e222d] Ending command<br>
'org.ovirt.engine.core.bll.RemoveSnapshotCommand' with failure.<br>
2016-03-18 18:26:57,663 ERROR<br>
[org.ovirt.engine.core.bll.RemoveSnapshotCommand]<br>
(org.ovirt.thread.pool-8-thread-39) [a1e222d] Could not delete image<br>
'46e9ecc8-e168-4f4d-926c-e769f5df1f2c' from snapshot<br>
'88fcf167-4302-405e-825f-ad7e0e9f6564'<br>
2016-03-18 18:26:57,678 WARN<br>
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]<br>
(org.ovirt.thread.pool-8-thread-39) [a1e222d] Correlation ID: a1e222d, Job<br>
ID: 00d3e364-7e47-4022-82ff-f772cd79d4a1, Call Stack: null, Custom Event ID:<br>
-1, Message: Due to partial snapshot removal, Snapshot 'test' of VM<br>
'fedora23' now contains only the following disks: 'fedora23_Disk1'.<br>
2016-03-18 18:26:57,695 ERROR<br>
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand]<br>
(org.ovirt.thread.pool-8-thread-39) [724e99fd] Ending command<br>
'org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand' with failure.<br>
2016-03-18 18:26:57,708 ERROR<br>
[org.ovirt.engine.core.dal.dbbroker.auditloghandlin<br>
<br>
Thank you for your help.<br>
<br>
<br>
On 23/02/2016 19:51, Greg Padgett wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
On 02/22/2016 07:10 AM, Marcelo Leandro wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
Hello,<br>
<br>
Will the snapshot bug be fixed in oVirt 3.6.3?<br>
<br>
thanks.<br>
<br>
</blockquote>
<br>
Hi Marcelo,<br>
<br>
Yes, the bug below (bug 1301709) is now targeted to 3.6.3.<br>
<br>
Thanks,<br>
Greg<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
2016-02-18 11:34 GMT-03:00 Adam Litke <<a href="mailto:alitke@redhat.com" target="_blank">alitke@redhat.com</a>>:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
On 18/02/16 10:37 +0100, Rik Theys wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
<br>
Hi,<br>
<br>
On 02/17/2016 05:29 PM, Adam Litke wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
<br>
On 17/02/16 11:14 -0500, Greg Padgett wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
<br>
On 02/17/2016 03:42 AM, Rik Theys wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
<br>
Hi,<br>
<br>
On 02/16/2016 10:52 PM, Greg Padgett wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
<br>
On 02/16/2016 08:50 AM, Rik Theys wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
<br>
From the above I conclude that the disk with id that ends with<br>
</blockquote>
<br>
<br>
Similar to what I wrote to Marcelo above in the thread, I'd<br>
recommend<br>
running the "VM disk info gathering tool" attached to [1]. It's<br>
the<br>
best way to ensure the merge was completed and determine which<br>
image<br>
is<br>
the "bad" one that is no longer in use by any volume chains.<br>
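Conceptually, what the tool determines can be sketched like this (illustrative Python, not the tool's actual code): given each volume's parent (backing) link and the set of active chain tips, any volume not reachable by walking a chain from a tip is no longer in use.

```python
# Sketch of the "which image is unused" check: walk each active chain from
# its leaf volume down through the parent links; whatever is never visited
# is not part of any volume chain and is the "bad" leftover image.
def unused_volumes(parents, leaves):
    """parents: {volume: parent or None}; leaves: iterable of chain tips."""
    in_use = set()
    for leaf in leaves:
        vol = leaf
        while vol is not None and vol not in in_use:
            in_use.add(vol)
            vol = parents.get(vol)
    return set(parents) - in_use
```

For example, with a base volume, one snapshot volume chained on it, and a leftover volume whose chain tip is no longer active, only the leftover is reported.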
</blockquote>
<br>
<br>
<br>
I've run the disk info gathering tool, and this is its output (for the<br>
affected VM):<br>
<br>
VM lena<br>
Disk b2390535-744f-4c02-bdc8-5a897226554b<br>
(sd:a7ba2db3-517c-408a-8b27-ea45989d6416)<br>
Volumes:<br>
24d78600-22f4-44f7-987b-fbd866736249<br>
<br>
The ID of the volume is the ID of the snapshot that is marked<br>
"illegal". So the "bad" image would be the dc39 one, which according<br>
to the UI is in use by the "Active VM" snapshot. Does this make sense?<br>
</blockquote>
<br>
<br>
<br>
It looks accurate. Live merges are "backwards" merges, so the merge<br>
would have pushed data from the volume associated with "Active VM"<br>
into the volume associated with the snapshot you're trying to remove.<br>
<br>
Upon completion, we "pivot" so that the VM uses that older volume,<br>
and<br>
we update the engine database to reflect this (basically we<br>
re-associate that older volume with, in your case, "Active VM").<br>
<br>
In your case, it seems the pivot operation was done, but the database<br>
wasn't updated to reflect it. Given snapshot/image associations<br>
e.g.:<br>
<br>
VM Name Snapshot Name Volume<br>
------- ------------- ------<br>
My-VM Active VM 123-abc<br>
My-VM My-Snapshot 789-def<br>
<br>
My-VM in your case is actually running on volume 789-def. If you run<br>
the db fixup script and supply ("My-VM", "My-Snapshot", "123-abc")<br>
(note the volume is the newer, "bad" one), then it will switch the<br>
volume association for you and remove the invalid entries.<br>
<br>
Of course, I'd shut down the VM, and back up the db beforehand.<br>
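Illustratively, the re-association the fixup script performs amounts to the following (hypothetical structures and names, not the real engine database schema):

```python
# Sketch of the post-merge pivot fix: the merge preserved the older volume,
# so "Active VM" must be re-pointed at it, and the newer ("bad") volume's
# entry removed. assoc maps (vm, snapshot_name) -> volume.
def fix_pivot(assoc, vm, snapshot, bad_volume):
    """Return a corrected copy of the snapshot/volume associations."""
    # Sanity check: the newer, "bad" volume is the one currently recorded
    # against "Active VM".
    assert assoc[(vm, "Active VM")] == bad_volume
    fixed = dict(assoc)
    merged_volume = fixed.pop((vm, snapshot))  # older volume the merge kept
    fixed[(vm, "Active VM")] = merged_volume   # the VM actually runs on it
    return fixed
```

With the example table above, supplying ("My-VM", "My-Snapshot", "123-abc") leaves a single association: "Active VM" on volume 789-def.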
</blockquote></blockquote>
<br>
<br>
<br>
I've executed the SQL script and it seems to have worked. Thanks!<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
"Active VM" should now be unused; it previously (pre-merge) held the<br>
data written since the snapshot was taken. Normally the larger actual<br>
size might come from qcow format overhead. If your listing above is<br>
complete (i.e. one volume for the VM), then I'm not sure why the base<br>
volume would have a larger actual size than virtual size.<br>
<br>
Adam, Nir--any thoughts on this?<br>
</blockquote>
<br>
<br>
<br>
There is a bug which has caused inflation of the snapshot volumes when<br>
performing a live merge. We are submitting fixes for 3.5, 3.6, and<br>
master right at this moment.<br>
</blockquote>
<br>
<br>
<br>
Which bug number is assigned to this bug? Will upgrading to a release<br>
with a fix reduce the disk usage again?<br>
</blockquote>
<br>
<br>
<br>
See <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1301709" rel="noreferrer" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1301709</a> for the bug.<br>
It's about a clone disk failure after the problem occurs.<br>
Unfortunately, there is not an automatic way to repair the raw base<br>
volumes if they were affected by this bug. They will need to be<br>
manually shrunk using lvreduce if you are certain that they are<br>
inflated.<br>
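As a sketch of the sizing arithmetic (assuming the 128 MiB physical-extent size oVirt uses for storage-domain volume groups; verify yours with vgs before touching anything): a raw volume needs exactly its virtual size, rounded up to whole extents, so that is the lvreduce target.

```python
import math

# Sketch: compute the lvreduce target (in MiB) for an inflated raw base
# volume. ASSUMPTION: 128 MiB extents, the usual oVirt storage-domain VG
# setup; a raw volume needs its virtual size rounded up to whole extents.
EXTENT_MIB = 128

def lvreduce_target_mib(virtual_size_bytes):
    """Smallest extent-aligned size (MiB) that still holds the raw image."""
    extents = math.ceil(virtual_size_bytes / (EXTENT_MIB * 1024 * 1024))
    return extents * EXTENT_MIB
```

For a 20 GiB raw disk this gives 20480 MiB (160 extents); shrinking below this value destroys data, so double-check the virtual size (e.g. with qemu-img info) first.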
<br>
<br>
--<br>
Adam Litke<br>
<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</blockquote>
<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
<br>
--<br>
Nathanaël Blanchet<br>
<br>
Supervision réseau<br>
Pôle Infrastrutures Informatiques<br>
227 avenue Professeur-Jean-Louis-Viala<br>
34193 MONTPELLIER CEDEX 5<br>
Tél. 33 (0)4 67 54 84 55<br>
Fax 33 (0)4 67 54 84 14<br>
<a href="mailto:blanchet@abes.fr" target="_blank">blanchet@abes.fr</a><br>
<br>
<br>
</blockquote></blockquote>
<br>
</div></div></blockquote></div><br></div></div>