Cannot start VM: Bad volume specification

Hi,

I performed the following:

1. Shut down the VM.
2. Take a snapshot.
3. Create a clone from the snapshot.
4. Start the clone. The clone starts fine.
5. Attempt to delete the snapshot from the original VM; this fails.
6. Attempt to start the original VM; this fails with "Bad volume specification".

This was logged in VDSM during the snapshot deletion attempt:

2019-02-26 13:27:10,907+0000 ERROR (tasks/3) [storage.TaskManager.Task] (Task='67577e64-f29d-4c47-a38f-e54b905cae03') Unexpected error (task:872)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 879, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 333, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1892, in finalizeMerge
    merge.finalize(subchainInfo)
  File "/usr/share/vdsm/storage/merge.py", line 271, in finalize
    optimal_size = subchain.base_vol.optimal_size()
  File "/usr/share/vdsm/storage/blockVolume.py", line 440, in optimal_size
    check = qemuimg.check(self.getVolumePath(), qemuimg.FORMAT.QCOW2)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 157, in check
    out = _run_cmd(cmd)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 426, in _run_cmd
    raise QImgError(cmd, rc, out, err)
QImgError: cmd=['/usr/bin/qemu-img', 'check', '--output', 'json', '-f', 'qcow2', '/rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f'], ecode=3, stdout={
    "image-end-offset": 52210892800,
    "total-clusters": 1638400,
    "check-errors": 0,
    "leaks": 323,
    "leaks-fixed": 0,
    "allocated-clusters": 795890,
    "filename": "/rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f",
    "format": "qcow2",
    "fragmented-clusters": 692941
}
, stderr=Leaked cluster 81919 refcount=1 reference=0
Leaked cluster 81920 refcount=1 reference=0
Leaked cluster 81921 refcount=1 reference=0
etc..

Is there any way to fix these leaked clusters?

Running oVirt 4.1.9 with FC block storage.

Thanks,
Alan

---- On Tue, 26 Feb 2019 15:11:39 +0000 Benny Zlotnik <bzlotnik@redhat.com> wrote ----

Can you provide full vdsm & engine logs?
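For context, these logs live at the stock default locations below; the paths are the standard oVirt defaults rather than anything confirmed in this thread, so adjust if the installation was customized:

    # On the host that ran the merge (check the SPM host in particular):
    /var/log/vdsm/vdsm.log
    # On the engine machine:
    /var/log/ovirt-engine/engine.log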

---- On Tue, Feb 26, 2019 at 5:40 PM Alan G <alan+ovirt@griff.me.uk> wrote ----

Logs are attached. The first error from snapshot deletion is at 2019-02-26 13:27:11,877Z in the engine log.

---- On Tue, 26 Feb 2019 15:57:47 +0000 Benny Zlotnik <bzlotnik@redhat.com> wrote ----

I haven't found anything other than the leaks issue. You can try to run

    $ qemu-img check -r leaks <img>

(make sure to have it backed up first).
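As general qemu-img background (not specific to this setup; the path below is a placeholder): running the check without -r only reports problems, and the exit status separates harmless leaks from real corruption, while -r leaks rewrites the refcount metadata so the leaked clusters are reclaimed. A minimal sketch:

    # Report only: exit 0 = clean, 2 = image corrupted, 3 = leaked clusters but no corruption
    $ qemu-img check -f qcow2 /path/to/volume

    # Repair only the leaks (anything reported as actual corruption is left untouched)
    $ qemu-img check -r leaks -f qcow2 /path/to/volume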

---- On Tue, 26 Feb 2019 16:15:32 +0000 Alan G <alan+ovirt@griff.me.uk> wrote ----

I tried that initially, but I'm not sure how to access the image on block storage. The LV is marked as NOT available in lvdisplay:

  --- Logical volume ---
  LV Path                /dev/70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc
  LV Name                74d27dd2-3887-4833-9ce3-5925dbd551cc
  VG Name                70205101-c6b1-4034-a9a2-e559897273bc
  LV UUID                svAB48-Rgnd-0V2A-2O07-Z2Ic-4zfO-XyJiFo
  LV Write Access        read/write
  LV Creation host, time nyc-ovirt-01.redacted.com, 2018-05-15 12:02:41 +0000
  LV Status              NOT available
  LV Size                14.00 GiB
  Current LE             112
  Segments               9
  Allocation             inherit
  Read ahead sectors     auto

---- Alan G <alan+ovirt@griff.me.uk> wrote ----

Is it as simple as doing:

    $ lvchange --activate y <LV-name>

---- On Tue, 26 Feb 2019 16:25:22 +0000 Benny Zlotnik <bzlotnik@redhat.com> wrote ----

It's because the VM is down. You can manually activate it using

    $ lvchange -a y vgname/lvname

Remember to deactivate it afterwards.
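Putting the two suggestions together, a minimal end-to-end sketch for a volume on block storage (the VG/LV names and the backup path below are placeholders, not values from this thread, and the VM should stay down throughout):

    # Activate the volume LV so a device node appears under /dev/<sd_vg>/
    lvchange -a y <sd_vg>/<volume_lv>

    # Take a raw backup of the LV contents before touching it
    dd if=/dev/<sd_vg>/<volume_lv> of=/root/<volume_lv>.backup bs=1M

    # Repair the leaked clusters, then re-check (the second check should exit 0)
    qemu-img check -r leaks -f qcow2 /dev/<sd_vg>/<volume_lv>
    qemu-img check -f qcow2 /dev/<sd_vg>/<volume_lv>

    # Deactivate again so the LV is back in the state vdsm expects
    lvchange -a n <sd_vg>/<volume_lv>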

---- On Tue, Feb 26, 2019 at 7:06 PM Alan G <alan+ovirt@griff.me.uk> wrote ----

That cleaned up the qcow image and qemu-img now reports it's OK, but I still cannot start the VM; I get "Cannot prepare illegal volume".

Is there some metadata somewhere that needs to be cleaned/reset?

---- On Tue, 26 Feb 2019 18:28:02 +0000 Benny Zlotnik <bzlotnik@redhat.com> wrote ----

Can you remove the snapshot now?

---- On Wed, Feb 27, 2019 at 11:45 AM Alan G <alan+ovirt@griff.me.uk> wrote ----

Still cannot remove the snapshot.

2019-02-27 08:56:58,781+0000 ERROR (tasks/6) [storage.TaskManager.Task] (Task='72570267-d86a-4a14-a8bb-fc925a717753') Unexpected error (task:872)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 879, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 333, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1884, in prepareMerge
    merge.prepare(subchainInfo)
  File "/usr/share/vdsm/storage/merge.py", line 181, in prepare
    with subchain.prepare():
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/usr/share/vdsm/storage/merge.py", line 160, in prepare
    vol.prepare(rw=rw, justme=True)
  File "/usr/share/vdsm/storage/volume.py", line 562, in prepare
    raise se.prepareIllegalVolumeError(self.volUUID)
prepareIllegalVolumeError: Cannot prepare illegal volume: ('5f5b436d-6c48-4b9f-a68c-f67d666741ab',)

---- On Wed, 27 Feb 2019 17:43:35 +0000 Benny Zlotnik <bzlotnik@redhat.com> wrote ----

Can you provide the output of vdsm-tool dump-volume-chains <sd_id>?
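As an invocation sketch (the storage-domain UUID below is simply the one visible in the blockSD path of the first traceback; it may or may not be the domain holding this particular disk):

    $ vdsm-tool dump-volume-chains 024109d5-ea84-47ed-87e5-1c8681fdd177

It prints every image in that domain with its chain of volumes and each volume's status, voltype, format and legality, which is the output format quoted in the next reply.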

---- Alan G <alan+ovirt@griff.me.uk> wrote ----

Hi,

Sorry for the delay in replying; I've been away the last few days.

I believe this is the relevant section from the dump:

  image: f7dea7bd-046c-4923-b5a5-d0c1201607fc
    - ac540314-989d-42c2-9e7e-3907eedbe27f
      status: OK, voltype: INTERNAL, format: COW, legality: LEGAL, type: SPARSE
    - 5f5b436d-6c48-4b9f-a68c-f67d666741ab
      status: ILLEGAL, voltype: LEAF, format: COW, legality: ILLEGAL, type: SPARSE

Alan

---- Nir Soffer wrote ----

On Tue, Feb 26, 2019 at 5:10 PM Alan G <alan+ovirt@griff.me.uk> wrote:
> ...
> I performed the following:
> ...
> 5. Attempt to delete snapshot from original VM, fails. 6. Attempt to start original VM, fails with "Bad volume specification".
> This was logged in VDSM during the snapshot deletion attempt.
> ...
> QImgError: cmd=['/usr/bin/qemu-img', 'check', '--output', 'json', '-f', 'qcow2', '/rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f'], ecode=3, stdout={ ... "check-errors": 0, "leaks": 323, ... }, stderr=Leaked cluster 81919 refcount=1 reference=0 Leaked cluster 81920 refcount=1 reference=0 Leaked cluster 81921 refcount=1 reference=0 etc..

This means your image may waste some disk space (about 20 MiB), but there is no harm to your data. Vdsm was fixed to handle this case since ovirt-4.2.0. See https://bugzilla.redhat.com/1502488
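The "about 20 MiB" figure matches the numbers in the log if one assumes the qcow2 default cluster size of 64 KiB, which the log itself does not state: 323 leaked clusters × 64 KiB = 20,672 KiB, roughly 20.2 MiB of space that stays allocated but unreferenced.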
> Running oVirt 4.1.9 with FC block storage.

4.1 is not supported now. You should upgrade to 4.2.

Nir
participants (4):
- Alan
- Alan G
- Benny Zlotnik
- Nir Soffer