[ovirt-users] moving disk failed.. remained locked
Gianluca Cecchi
gianluca.cecchi at gmail.com
Tue Feb 21 08:56:10 UTC 2017
On Tue, Feb 21, 2017 at 7:01 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
> On Mon, Feb 20, 2017 at 10:51 PM, Gianluca Cecchi <
> gianluca.cecchi at gmail.com> wrote:
>
>> On Mon, Feb 20, 2017 at 8:46 PM, Fred Rolland <frolland at redhat.com>
>> wrote:
>>
>>> Can you please send the whole logs ? (Engine, vdsm and sanlock)
>>>
>>>
>> vdsm.log.1.xz:
>> https://drive.google.com/file/d/0BwoPbcrMv8mvWTViWEUtNjRtLTg
>> /view?usp=sharing
>>
>> sanlock.log
>> https://drive.google.com/file/d/0BwoPbcrMv8mvcVM4YzZ4aUZLYVU
>> /view?usp=sharing
>>
>> engine.log (gzip format):
>> https://drive.google.com/file/d/0BwoPbcrMv8mvdW80RlFIYkpzenc
>> /view?usp=sharing
>>
>> Thanks,
>> Gianluca
>>
>>
> I didn't mention that the size of the disk is 430 GB and the target
> storage domain is 1 TB, almost empty (950 GB free).
> I received a message about problems from the storage where the disk
> resides, so I'm trying to move it in order to put the original storage
> domain into maintenance and investigate.
> The errors seem to be about creating the destination volume, not the source...
> thanks,
> Gianluca
>
>
Info on disk:
[g.cecchi at ovmsrv07 ~]$ sudo qemu-img info
/rhev/data-center/588237b8-0031-02f6-035d-000000000136/900b1853-e192-4661-a0f9-7c7c396f6f49/images/f0b5a0e4-ee5d-44a7-ba07-08285791368a/7ed43974-1039-4a68-a8b3-321e7594fe4c
image:
/rhev/data-center/588237b8-0031-02f6-035d-000000000136/900b1853-e192-4661-a0f9-7c7c396f6f49/images/f0b5a0e4-ee5d-44a7-ba07-08285791368a/7ed43974-1039-4a68-a8b3-321e7594fe4c
file format: qcow2
virtual size: 430G (461708984320 bytes)
disk size: 0
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
[g.cecchi at ovmsrv07 ~]$
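As a quick sanity check on the info output above (my own arithmetic, not
anything printed by qemu-img): the byte count reported next to "virtual
size" is exactly 430 GiB:

```shell
# 430 GiB expressed in bytes; should match the value qemu-img reports.
echo $((430 * 1024 * 1024 * 1024))
# 461708984320
```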
Based on a command I learned in another thread, this is what I get
when I check the disk:
[g.cecchi at ovmsrv07 ~]$ sudo qemu-img check
/rhev/data-center/588237b8-0031-02f6-035d-000000000136/900b1853-e192-4661-a0f9-7c7c396f6f49/images/f0b5a0e4-ee5d-44a7-ba07-08285791368a/7ed43974-1039-4a68-a8b3-321e7594fe4c
Leaked cluster 4013995 refcount=1 reference=0
Leaked cluster 4013996 refcount=1 reference=0
Leaked cluster 4013997 refcount=1 reference=0
... many lines of this type ...
Leaked cluster 6275183 refcount=1 reference=0
Leaked cluster 6275184 refcount=1 reference=0
Leaked cluster 6275185 refcount=1 reference=0
57506 leaked clusters were found on the image.
This means waste of disk space, but no harm to data.
6599964/7045120 = 93.68% allocated, 6.30% fragmented, 0.00% compressed clusters
Image end offset: 436986380288
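To put the leak report above in perspective (my own back-of-the-envelope
arithmetic, using the cluster count and cluster_size from the outputs
above): each leaked cluster wastes one cluster_size of space, so

```shell
# Space wasted by the leaked clusters reported by qemu-img check:
leaked=57506          # "57506 leaked clusters were found on the image."
cluster_size=65536    # cluster_size from qemu-img info
echo $((leaked * cluster_size))
# 3768713216  -> roughly 3.5 GiB of wasted space, no data at risk
```

As I understand it, leaked clusters can be reclaimed with
"qemu-img check -r leaks" on that image path, but only while nothing
(VM or storage operation) is using the image.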
Would it help in any way to shut down the VM so that the disk gets unlocked?
Thanks,
Gianluca