That cleaned up the qcow image and qemu-img now reports it's OK, but I still cannot start the VM; I get "Cannot prepare illegal volume". Is there some metadata somewhere that needs to be cleaned/reset?

---- On Tue, 26 Feb 2019 16:25:22 +0000 Benny Zlotnik <bzlotnik@redhat.com> wrote ----

It's because the VM is down; you can manually activate it using

$ lvchange -a y vgname/lvname

Remember to deactivate it afterwards.

On Tue, Feb 26, 2019 at 6:15 PM Alan G <alan+ovirt@griff.me.uk> wrote:

I tried that initially, but I'm not sure how to access the image on block storage. The LV is marked as NOT available in lvdisplay.

  --- Logical volume ---
  LV Path                /dev/70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc
  LV Name                74d27dd2-3887-4833-9ce3-5925dbd551cc
  VG Name                70205101-c6b1-4034-a9a2-e559897273bc
  LV UUID                svAB48-Rgnd-0V2A-2O07-Z2Ic-4zfO-XyJiFo
  LV Write Access        read/write
  LV Creation host, time nyc-ovirt-01.redacted.com, 2018-05-15 12:02:41 +0000
  LV Status              NOT available
  LV Size                14.00 GiB
  Current LE             112
  Segments               9
  Allocation             inherit
  Read ahead sectors     auto

---- On Tue, 26 Feb 2019 15:57:47 +0000 Benny Zlotnik <bzlotnik@redhat.com> wrote ----

I haven't found anything other than the leaks issue. You can try to run

$ qemu-img check -r leaks <img>

(make sure to have it backed up)

On Tue, Feb 26, 2019 at 5:40 PM Alan G <alan+ovirt@griff.me.uk> wrote:

Logs are attached. The first error from snapshot deletion is at 2019-02-26 13:27:11,877Z in the engine log.

---- On Tue, 26 Feb 2019 15:11:39 +0000 Benny Zlotnik <bzlotnik@redhat.com> wrote ----

Can you provide full vdsm & engine logs?

On Tue, Feb 26, 2019 at 5:10 PM Alan G <alan+ovirt@griff.me.uk> wrote:

Hi,

I performed the following:

1. Shut down the VM.
2. Take a snapshot.
3. Create a clone from the snapshot.
4. Start the clone. The clone starts fine.
5. Attempt to delete the snapshot from the original VM; this fails.
6. Attempt to start the original VM; this fails with "Bad volume specification".
This was logged in VDSM during the snapshot deletion attempt:

2019-02-26 13:27:10,907+0000 ERROR (tasks/3) [storage.TaskManager.Task] (Task='67577e64-f29d-4c47-a38f-e54b905cae03') Unexpected error (task:872)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 879, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 333, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1892, in finalizeMerge
    merge.finalize(subchainInfo)
  File "/usr/share/vdsm/storage/merge.py", line 271, in finalize
    optimal_size = subchain.base_vol.optimal_size()
  File "/usr/share/vdsm/storage/blockVolume.py", line 440, in optimal_size
    check = qemuimg.check(self.getVolumePath(), qemuimg.FORMAT.QCOW2)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 157, in check
    out = _run_cmd(cmd)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 426, in _run_cmd
    raise QImgError(cmd, rc, out, err)
QImgError: cmd=['/usr/bin/qemu-img', 'check', '--output', 'json', '-f', 'qcow2', '/rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f'], ecode=3, stdout={"image-end-offset": 52210892800, "total-clusters": 1638400, "check-errors": 0, "leaks": 323, "leaks-fixed": 0, "allocated-clusters": 795890, "filename": "/rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f", "format": "qcow2", "fragmented-clusters": 692941}, stderr=Leaked cluster 81919 refcount=1 reference=0
Leaked cluster 81920 refcount=1 reference=0
Leaked cluster 81921 refcount=1 reference=0
etc.

Is there any way to fix these leaked clusters?

Running oVirt 4.1.9 with FC block storage.

Thanks,

Alan
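For anyone hitting the same problem on FC block storage, here is a minimal sketch of the repair sequence discussed above, assuming the VM stays down the whole time. The VG/LV names are taken from the lvdisplay output earlier in the thread and the backup destination is only an example; substitute your own storage-domain (VG) and volume (LV) UUIDs.

# 1. With the VM down, activate the volume so qemu-img can open it
#    (oVirt keeps LVs of stopped VMs deactivated):
$ lvchange -a y 70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc

# 2. Back up the volume before repairing it (destination path is illustrative):
$ dd if=/dev/70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc \
     of=/backup/74d27dd2.qcow2.bak bs=1M

# 3. Repair the leaked clusters reported by qemu-img check:
$ qemu-img check -r leaks -f qcow2 /dev/70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc

# 4. Deactivate the volume again when finished:
$ lvchange -a n 70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc

Note that this only addresses the leaked clusters; the later "Cannot prepare illegal volume" error suggests the volume is still flagged ILLEGAL in the storage metadata after the failed merge, which is a separate issue not covered by this sketch.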