Hi Marcelo,
Yes, the bug below (bug 1301709) is now targeted for 3.6.3.
Thanks,
Greg
2016-02-18 11:34 GMT-03:00 Adam Litke <alitke(a)redhat.com>:
> On 18/02/16 10:37 +0100, Rik Theys wrote:
>>
>> Hi,
>>
>> On 02/17/2016 05:29 PM, Adam Litke wrote:
>>>
>>> On 17/02/16 11:14 -0500, Greg Padgett wrote:
>>>>
>>>> On 02/17/2016 03:42 AM, Rik Theys wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> On 02/16/2016 10:52 PM, Greg Padgett wrote:
>>>>>>
>>>>>> On 02/16/2016 08:50 AM, Rik Theys wrote:
>>>>>>>
>>>>>>> From the above I conclude that the disk with id that ends with
>>>>>>
>>>>>> Similar to what I wrote to Marcelo above in the thread, I'd
>>>>>> recommend running the "VM disk info gathering tool" attached to
>>>>>> [1]. It's the best way to ensure the merge was completed and to
>>>>>> determine which image is the "bad" one that is no longer in use
>>>>>> by any volume chain.
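
For anyone who wants to cross-check that result by hand, a minimal
sketch along these lines can list the volumes still referenced by a
disk's backing chain; note this is not the tool attached to [1], and
the qemu-img invocation and the path are assumptions about a typical
setup:

    # Minimal sketch (not the oVirt tool from [1]): list every volume still
    # referenced by the active layer's backing chain. Anything on storage
    # that is absent from this list is a candidate unused ("bad") image.
    import json
    import subprocess

    def backing_chain(active_volume_path):
        """Return the volume paths in the chain, active layer first."""
        out = subprocess.check_output(
            ["qemu-img", "info", "--backing-chain", "--output=json",
             active_volume_path])
        return [entry["filename"] for entry in json.loads(out)]

    # Hypothetical path; use the active volume of the disk being checked.
    for path in backing_chain("/path/to/active/volume"):
        print(path)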
>>>>>
>>>>>
>>>>> I've run the disk info gathering tool and it gives this output
>>>>> (for the affected VM):
>>>>>
>>>>> VM lena
>>>>> Disk b2390535-744f-4c02-bdc8-5a897226554b
>>>>> (sd:a7ba2db3-517c-408a-8b27-ea45989d6416)
>>>>> Volumes:
>>>>> 24d78600-22f4-44f7-987b-fbd866736249
>>>>>
>>>>> The ID of the volume is the ID of the snapshot that is marked
>>>>> "illegal". So the "bad" image would be the dc39 one, which,
>>>>> according to the UI, is in use by the "Active VM" snapshot. Does
>>>>> this make sense?
>>>>
>>>>
>>>> It looks accurate. Live merges are "backwards" merges, so the
>>>> merge would have pushed data from the volume associated with
>>>> "Active VM" into the volume associated with the snapshot you're
>>>> trying to remove.
>>>>
>>>> Upon completion, we "pivot" so that the VM uses that older
>>>> volume, and we update the engine database to reflect this
>>>> (basically we re-associate that older volume with, in your case,
>>>> "Active VM").
>>>>
>>>> In your case, it seems the pivot operation was done, but the
>>>> database wasn't updated to reflect it. Given snapshot/image
>>>> associations, e.g.:
>>>>
>>>> VM Name Snapshot Name Volume
>>>> ------- ------------- ------
>>>> My-VM Active VM 123-abc
>>>> My-VM My-Snapshot 789-def
>>>>
>>>> My-VM in your case is actually running on volume 789-def. If you
>>>> run the db fixup script and supply ("My-VM", "My-Snapshot",
>>>> "123-abc") (note the volume is the newer, "bad" one), then it
>>>> will switch the volume association for you and remove the invalid
>>>> entries.
>>>>
>>>> Of course, I'd shut down the VM, and back up the db beforehand.
>>
>>
>> I've executed the sql script and it seems to have worked. Thanks!
>>
>>>> "Active VM" should now be unused; it previously (pre-merge) was
the
>>>> data written since the snapshot was taken. Normally the larger actual
>>>> size might be from qcow format overhead. If your listing above is
>>>> complete (ie one volume for the vm), then I'm not sure why the base
>>>> volume would have a larger actual size than virtual size.
>>>>
>>>> Adam, Nir--any thoughts on this?
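
For what it's worth, the two numbers are easy to compare by hand; the
sketch below is a generic check rather than anything oVirt-specific,
and the device path plus the use of lvs for the actual size are
assumptions for a block storage domain:

    # Generic check (not oVirt-specific): compare a volume's virtual size,
    # as reported by qemu-img, with the size of the logical volume backing
    # it, as reported by lvs. On block storage the LV size is the "actual"
    # size, so an LV much larger than the virtual size points at the
    # inflation discussed here. The device path is a hypothetical example.
    import json
    import subprocess

    def virtual_size(path):
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", "--output=json", path]))
        return info["virtual-size"]                     # bytes

    def lv_size(path):
        out = subprocess.check_output(
            ["lvs", "--noheadings", "--nosuffix", "--units", "b",
             "-o", "lv_size", path])
        return int(float(out.decode().strip()))         # bytes

    dev = "/dev/my_vg/my_base_lv"                       # hypothetical path
    print("virtual:", virtual_size(dev), "actual (LV):", lv_size(dev))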
>>>
>>>
>>> There is a bug which has caused inflation of the snapshot volumes when
>>> performing a live merge. We are submitting fixes for 3.5, 3.6, and
>>> master right at this moment.
>>
>>
>> Which bug number has been assigned to this? Will upgrading to a
>> release with the fix reduce the disk usage again?
>
>
> See https://bugzilla.redhat.com/show_bug.cgi?id=1301709 for the bug.
> It's about a clone disk failure after the problem occurs.
> Unfortunately, there is no automatic way to repair the raw base
> volumes if they were affected by this bug. They will need to be
> manually shrunk using lvreduce if you are certain that they are
> inflated.
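
If someone does decide to shrink an inflated raw base volume, one
cautious approach is to derive the target size from the image's
virtual size and only print the lvreduce command for review; the
sketch below does that, the device path is hypothetical, and it
assumes the VM is down and the storage is backed up:

    # Cautious sketch, not a supported oVirt procedure: derive the target
    # size for an inflated raw base volume from its virtual size and print
    # the lvreduce command for manual review instead of running it. Only
    # consider this if you are certain the volume is inflated, the VM is
    # down, and you have backups. The device path is hypothetical.
    import json
    import subprocess

    dev = "/dev/my_vg/my_raw_base_lv"

    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--output=json", dev]))
    virtual_bytes = info["virtual-size"]

    # LVM works in whole extents, so double-check that any rounding keeps
    # the LV at or above the virtual size before running the command.
    print("Review before running:")
    print("  lvreduce -L %db %s" % (virtual_bytes, dev))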
>
>
> --
> Adam Litke
>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users