Due to the urgency of the case, I fetched the backup copy from the weekend and pushed the missing data to the VM (the VM hosts a git repo). I lost a few notes, but not much damage was done...
I'm starting to feel uncomfortable with this solution, though, and might switch (at least the production VMs) to plain KVM, where I have never experienced such issues.

Alex

On Wed, Jul 11, 2018 at 7:27 AM, Yedidyah Bar David <didi@redhat.com> wrote:
(Changing subject, adding Freddy)

On Tue, Jul 10, 2018 at 8:06 PM, Alex K <rightkicktech@gmail.com> wrote:
Hi all,

I did routine maintenance today (updating the hosts) on an oVirt cluster (4.2), and one VM was complaining about an invalid snapshot. After shutdown, the VM is not able to start again, giving the error:

VM Gitlab is down with error. Exit message: Bad volume specification {'serial': 'b6af2856-a164-484a-afe5-9836bbdd14e8', 'index': 0, 'iface': 'virtio', 'apparentsize': '51838976', 'specParams': {}, 'cache': 'none', 'imageID': 'b6af2856-a164-484a-afe5-9836bbdd14e8', 'truesize': '52011008', 'type': 'disk', 'domainID': '142bbde6-ef9d-4a52-b9da-2de533c1f1bd', 'reqsize': '0', 'format': 'cow', 'poolID': '00000001-0001-0001-0001-000000000311', 'device': 'disk', 'path': '/rhev/data-center/00000001-0001-0001-0001-000000000311/142bbde6-ef9d-4a52-b9da-2de533c1f1bd/images/b6af2856-a164-484a-afe5-9836bbdd14e8/f3125f62-c909-472f-919c-844e0b8c156d', 'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID': 'f3125f62-c909-472f-919c-844e0b8c156d', 'diskType': 'file', 'alias': 'ua-b6af2856-a164-484a-afe5-9836bbdd14e8', 'discard': False}.

I see also the following error:

VDSM command CopyImageVDS failed: Image is not a legal chain: (u'b6af2856-a164-484a-afe5-9836bbdd14e8',)
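Assuming file-based storage (the error above shows 'diskType': 'file' and a path under /rhev/data-center), the snapshot chain can be inspected by hand before attempting any repair. A minimal sketch; `print_chain` is a hypothetical helper, and the image directory path comes from the error message, so substitute your own:

```shell
#!/bin/bash
# Sketch: reconstruct a snapshot chain from oVirt volume metadata.
# Each volume in an image directory has a .meta file whose PUUID line
# names its parent volume (all zeros for the base volume).
print_chain() {
  local dir="$1"
  for meta in "$dir"/*.meta; do
    local vol parent
    vol=$(basename "$meta" .meta)
    parent=$(grep '^PUUID=' "$meta" | cut -d= -f2)
    echo "$vol <- parent: $parent"
  done
}

# Example (path taken from the error message above):
# print_chain /rhev/data-center/00000001-0001-0001-0001-000000000311/142bbde6-ef9d-4a52-b9da-2de533c1f1bd/images/b6af2856-a164-484a-afe5-9836bbdd14e8
```

The same chain can be cross-checked against qemu's own view with `qemu-img info --backing-chain <volume>` (read-only, safe to run while debugging); a mismatch between the .meta parents and the qcow2 backing files is one way the "not a legal chain" error can arise.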

This error appears a few more times in the list's archive, all of which seem to be related to rather old bugs (3.5/3.6 times) or storage problems. I assume you use 4.2. Are you sure the corruption happened only now? Did working with snapshots work well before the upgrade?
 

Seems like a corrupt VM disk?

Seems so to me, but I am not a storage expert.
 

The VM had 3 snapshots. I was able to delete one from the GUI, but I am not able to delete the other two, as the task fails. Generally I am not able to clone, export, or do anything else with the VM.
 

Have you encountered something similar? Any advice?

The latest post, from 2016, included a workaround; you might (very carefully!) try that.

I also suggest opening a bug and attaching all relevant logs (engine, vdsm from all relevant hosts, including the SPMs at the time of the snapshot operations and any other host that ran the VM), and trying to give accurate reproduction steps.
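For attaching the logs, one approach is to bundle them before they rotate. A hedged sketch; `collect_logs` is a hypothetical helper, and the paths are the usual oVirt defaults, so adjust to your setup:

```shell
#!/bin/bash
# Sketch: gather the relevant logs into one archive for the bug report.
# Archives only the paths that actually exist on this machine, so the
# same call works on the engine host and on a hypervisor.
collect_logs() {
  local out="$1"; shift
  local existing=()
  for p in "$@"; do
    [ -e "$p" ] && existing+=("$p")
  done
  tar czf "$out" "${existing[@]}"
}

# Typical locations (engine.log on the engine host, vdsm.log on each host):
# collect_logs /tmp/ovirt-bug-logs.tar.gz \
#   /var/log/ovirt-engine/engine.log \
#   /var/log/vdsm/vdsm.log
```

oVirt also ships the `ovirt-log-collector` tool, which automates this from the engine host and is usually what developers ask for on a bug.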

Best regards,
--
Didi