Hi Ala,
that did not help. The VDSM log tells me that the delta qcow2 file is missing:
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
    volUUID=volUUID).getInfo()
  File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
    volUUID)
  File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
    volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
    self.validate()
  File "/usr/share/vdsm/storage/volume.py", line 194, in validate
    self.validateVolumePath()
  File "/usr/share/vdsm/storage/fileVolume.py", line 540, in validateVolumePath
    raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist: (u'c277351d-e2b1-4057-aafb-55d4b607ebae',)
...
Thread-196::ERROR::2016-10-09 19:31:07,037::utils::739::root::(wrapper) Unhandled exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 736, in wrapper
    return f(*a, **kw)
  File "/usr/share/vdsm/virt/vm.py", line 5264, in run
    self.update_base_size()
  File "/usr/share/vdsm/virt/vm.py", line 5257, in update_base_size
    self.drive.imageID, topVolUUID)
  File "/usr/share/vdsm/virt/vm.py", line 5191, in _getVolumeInfo
    (domainID, volumeID))
StorageUnavailableError: Unable to get volume info for domain 47202573-6e83-42fd-a274-d11f05eca2dd volume c277351d-e2b1-4057-aafb-55d4b607ebae
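For context, on a file-based storage domain VDSM looks each volume up under a path of the shape `<mount>/<sdUUID>/images/<imgUUID>/<volUUID>`, and `validateVolumePath` raises `VolumeDoesNotExist` when that file is absent. A minimal sketch of that existence check, simulated in a temp directory (the layout is an assumption about VDSM's file domains; the imgUUID below is a made-up placeholder, only the sdUUID and volUUID come from the log above):

```shell
# Simulate the file storage domain layout in a temp dir (assumed layout:
# <mount>/<sdUUID>/images/<imgUUID>/<volUUID>, per VDSM's fileVolume).
mount=$(mktemp -d)
sd=47202573-6e83-42fd-a274-d11f05eca2dd      # storage domain UUID from the log
img=00000000-0000-0000-0000-000000000000     # placeholder imgUUID (not in the log)
vol=c277351d-e2b1-4057-aafb-55d4b607ebae     # the delta volume the log reports missing
mkdir -p "$mount/$sd/images/$img"
touch "$mount/$sd/images/$img/base-volume"   # only the base image exists on disk

# The existence check that validateVolumePath effectively performs:
if [ -e "$mount/$sd/images/$img/$vol" ]; then
    result="present"
else
    result="missing"
fi
echo "volume $vol: $result"
rm -rf "$mount"
```

If the delta file really is gone from the domain, this is the "qemu merged it but oVirt did not notice" situation discussed below.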
Do you have any idea?
Markus
________________________
From: Ala Hino [ahino@redhat.com]
Sent: Thursday, 6 October 2016 12:29
To: Markus Stockhausen
Subject: Re: [ovirt-users] Cleanup illegal snapshot
Indeed, retry the live merge. There is no harm in retrying a live merge. As mentioned, if the image was deleted on the storage side, retrying the live merge should clean up the engine side.
On Thu, Oct 6, 2016 at 1:06 PM, Markus Stockhausen <stockhausen@collogia.de> wrote:
Hi,
we are on oVirt 4.0.4. As explained, the situation is as follows:
- On disk we have the base image and the delta qcow2 file
- Qemu runs only on the base image
- The snapshot in Qemu is tagged as illegal
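The second point can be checked mechanically by looking at which disk files the qemu process still holds open. A sketch against a fabricated sample lsof line (on the real host you would run `lsof -p <qemu-pid>`; the sample line and the path in it are invented for illustration):

```shell
# Fabricated sample of one lsof line for the VM's qemu process; on a
# real host you would use: lsof -p <qemu-pid> | grep /images/
lsof_sample='qemu-kvm 12345 qemu 22u REG 253,0 21474836480 /rhev/data-center/mnt/srv/sd/images/img/base-volume'

# If the delta volume UUID still appears, qemu is using it (situation 1
# from the earlier mail); if only the base is open, it is situation 2.
vol=c277351d-e2b1-4057-aafb-55d4b607ebae
if printf '%s\n' "$lsof_sample" | grep -q "$vol"; then
    situation=1
else
    situation=2
fi
echo "only the base is open -> situation $situation"
```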
So you are saying: "Just retry a live merge and everything will clean up." Did I get that right?
Markus
-----------------------------------------------
From: Ala Hino [ahino@redhat.com]
Sent: Thursday, 6 October 2016 11:21
To: Markus Stockhausen
Cc: Ovirt Users; Nir Soffer; Adam Litke
Subject: Re: [ovirt-users] Cleanup illegal snapshot
Hi Markus,
What's the version you are using? In oVirt 3.6.6, illegal snapshots can be removed by retrying the live merge. Assuming the previous live merge of the snapshot completed successfully but the engine failed to get the result, the second live merge should do the necessary cleanup on the engine side. See https://bugzilla.redhat.com/1323629
Hope this helps,
Ala
On Thu, Oct 6, 2016 at 11:53 AM, Markus Stockhausen <stockhausen@collogia.de> wrote:
Hi Ala,
> From: Adam Litke [alitke@redhat.com]
> Sent: Friday, 30 September 2016 15:54
> To: Markus Stockhausen
> Cc: Ovirt Users; Ala Hino; Nir Soffer
> Subject: Re: [ovirt-users] Cleanup illegal snapshot
>
> On 30/09/16 05:47 +0000, Markus Stockhausen wrote:
> >Hi,
> >
> >if an oVirt snapshot is illegal we might have 2 situations.
> >
> >1) qemu is still using it - lsof shows qemu access to the base raw and the
> >delta qcow2 file. -> E.g. a previous live merge failed. In the past we
> >successfully solved that situation by setting the status of the delta image
> >in the database to OK.
> >
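(For reference, the "set the status of the delta image in the database to OK" fix described above is usually a single-row update of this shape. This is an assumption based on the oVirt engine schema, where `images.imagestatus` is 1 for OK and 4 for ILLEGAL; verify the column and values against your engine version and take an engine DB backup before touching anything:)

```sql
-- Assumption: oVirt engine DB, images table, imagestatus 1 = OK, 4 = ILLEGAL.
-- Run against the engine database only after a backup.
UPDATE images
   SET imagestatus = 1
 WHERE image_guid = 'c277351d-e2b1-4057-aafb-55d4b607ebae';
```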
> >2) qemu is no longer using it. lsof shows qemu access only to the base
> >raw file -> E.g. a previous live merge succeeded in qemu but oVirt did
> >not recognize it.
> >
> >How to clean up the 2nd situation?
>
> It seems that you will have to first clean up the engine database to
> remove references to the snapshot that no longer exists. Then you
> will need to remove the unused qcow2 volume.
>
> Unfortunately I cannot provide safe instructions for modifying the
> database but maybe Ala Hino (added to CC:) will be able to help with
> that.
Do you have some tip for me?
>
> Once you have fixed the DB you should be able to delete the volume
> using a vdsm verb on the SPM host:
>
> # vdsClient -s 0 deleteVolume <sdUUID> <spUUID> <imgUUID> <volUUID>