On Fri, Nov 17, 2017 at 9:50 AM Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
Hi,
https://pastebin.com/raw/cFrphHHh
The problematic disk is
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none'
error_policy='stop' io='threads'/>
<source
file='/rhev/data-center/bcc17ec5-3ba3-4664-ba47-24fde9d92a2a/d2f65815-5f25-4740-ba56-76749a0f4571/images/d1db6e0e-87d0-4124-8326-b18bff3fdf90/5974fd33-af4c-4e3b-aadb-bece6054eb6b'>
<seclabel model='selinux' relabel='no'/>
</source>
<target dev='vdc' bus='virtio'/>
<serial>d1db6e0e-87d0-4124-8326-b18bff3fdf90</serial>
<alias name='virtio-disk2'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x08' function='0x0'/>
</disk>
Why do you think it is not correct?
To check that the drive is indeed raw, please share the output of:
qemu-img info
/rhev/data-center/bcc17ec5-3ba3-4664-ba47-24fde9d92a2a/d2f65815-5f25-4740-ba56-76749a0f4571/images/d1db6e0e-87d0-4124-8326-b18bff3fdf90/5974fd33-af4c-4e3b-aadb-bece6054eb6b
for n in
/rhev/data-center/bcc17ec5-3ba3-4664-ba47-24fde9d92a2a/d2f65815-5f25-4740-ba56-76749a0f4571/images/d1db6e0e-87d0-4124-8326-b18bff3fdf90/*.meta;
do echo "# $n"; cat "$n"; echo; done
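For spotting this kind of problem in bulk, a small helper can compare the FORMAT recorded in a volume's .meta file against what qemu-img actually detects. This is only a sketch: it assumes the usual oVirt NFS volume layout, where each .meta file carries a FORMAT=RAW or FORMAT=COW line, so verify it against your storage domain before relying on it.

```shell
#!/bin/sh
# Sketch: flag volumes whose oVirt metadata format disagrees with the real
# on-disk format. Assumes .meta files contain a FORMAT=RAW or FORMAT=COW line
# (the usual oVirt NFS layout); verify on your setup first.

compare_formats() {
    # $1 = FORMAT value from the .meta file, $2 = "file format" from qemu-img
    case "$1:$2" in
        RAW:raw|COW:qcow2) echo "OK" ;;
        *)                 echo "MISMATCH" ;;
    esac
}

# Live usage: pass a volume path (placeholder shown), e.g.
#   check.sh /rhev/data-center/<sd-path>/images/<img-id>/<vol-id>
if [ -n "$1" ]; then
    meta_fmt=$(sed -n 's/^FORMAT=//p' "$1.meta")
    real_fmt=$(qemu-img info "$1" | sed -n 's/^file format: //p')
    echo "$1: meta=$meta_fmt real=$real_fmt -> $(compare_formats "$meta_fmt" "$real_fmt")"
fi
```

A MISMATCH for this 13TB volume (metadata RAW, image qcow2) would be consistent with the symptoms described below.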
What can I do?
Thank you for your time!
Tibor
----- On Nov 16, 2017, at 22:40, Nir Soffer <nsoffer(a)redhat.com> wrote:
On Thu, Nov 16, 2017 at 7:11 PM Demeter Tibor <tdemeter(a)itsmart.hu> wrote:
> Hi,
> Sorry for my bad English:(
>
> oVirt could not import my VM and disks from the original storage domain. It
> is an NFS share on a server.
> Then I created another NFS share on the same storage and attached it to
> oVirt as a new storage domain.
> At this step I created a new VM on the new storage domain with the same
> disks (three thin-provisioned disks: 100GB, 60GB and 13TB). It was
> necessary, because oVirt can't import my original disks and VM from the
> re-attached storage domain.
> Finally I renamed my old disks from the old storage domain to the new ones
> on the new storage domain (it was possible because they are on the same
> file system on the NFS server).
> At this moment only two of the virtual disks work fine. The 100GB and 60GB
> disks are working fine, but the 13TB one is not. It has a snapshot. Or
> not? Is it a snapshot or not?
>
> My problem is that inside the VM I see the content as a raw disk. I think
> oVirt doesn't want to use it as a snapshotted image. Or what?
>
Demeter, can you share the output of this command on a host running your
VM?
virsh -r dumpxml vm-name
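When the domain XML is long, filtering out just the <disk> elements makes the driver type and source path easy to check. A minimal awk-based sketch (the VM name below is a placeholder):

```shell
#!/bin/sh
# Sketch: print only the <disk> ... </disk> elements from a libvirt domain
# XML dump, so the driver type= and <source file=...> lines stand out.
extract_disks() {
    awk '/<disk /,/<\/disk>/'
}

# Typical use (vm-name is a placeholder):
#   virsh -r dumpxml vm-name | extract_disks
```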
Nir
>
> Thank you.
>
> Tibor
>
> ----- On Nov 16, 2017, at 15:57, Benny Zlotnik <bzlotnik(a)redhat.com> wrote:
>
> Hi Tibor,
> Can you please explain this part: "After this I just wondered, I will
> make a new VM with same disk and I will copy the images (really just
> rename) from original to recreated."
> What were the exact steps you took?
>
> Thanks
>
> On Thu, Nov 16, 2017 at 4:19 PM, Demeter Tibor <tdemeter(a)itsmart.hu>
> wrote:
>
>> Hi,
>>
>> Thank you for your reply.
>>
>> So: I have a disk with a snapshot. Or, really, I just think it is a
>> snapshot. It was originally attached to a VM (with two other disks that
>> have no snapshots). I did a detach-attach-storage procedure, but after
>> the attach - I don't know why - oVirt could not import the VM and disks
>> from it (oVirt said it is not possible). After this I figured I would
>> make a new VM with the same disks and copy the images (really just
>> rename them) from the original to the recreated one.
>>
>> It was a partial success because the VM can boot, but on the disk that
>> has the snapshot I can't read the LVM table; it just seems to be
>> corrupt.
>> Now this disk is in a very interesting state: I can see the snapshot
>> data from the VM as a raw disk.
>> I think oVirt doesn't know it is a snapshotted image and attaches it to
>> the VM as a raw disk.
>>
>> So my real question is: how can I add this disk image to oVirt properly?
>>
>> Please help me, it is very important to me.:(
>>
>> Thanks in advance,
>>
>> Have a nice day,
>>
>> Tibor
>>
>>
>> ----- On Nov 16, 2017, at 11:55, Ala Hino <ahino(a)redhat.com> wrote:
>>
>> Hi Tibor,
>> I am not sure I completely understand the scenario.
>>
>> You have a VM with two disks, and then you created a snapshot including
>> the two disks?
>> Before creating the snapshot, did the VM recognize the two disks?
>>
>> On Mon, Nov 13, 2017 at 10:36 PM, Demeter Tibor <tdemeter(a)itsmart.hu>
>> wrote:
>>
>>> Dear Users,
>>>
>>> I have a disk of a VM that has a snapshot. It is very interesting,
>>> because that VM has two other disks, but there are no snapshots of
>>> them.
>>> I found this while trying to migrate a storage domain between two
>>> datacenters.
>>> Because I couldn't import that VM from the storage domain, I made
>>> another, similar VM with exactly the same sized thin-provisioned disks,
>>> then renamed and copied my originals over them.
>>>
>>> The VM started successfully, but the disk that contains a snapshot is
>>> not recognized by the OS. I can see the whole disk as raw (disk ID,
>>> format in oVirt, filenames of images, etc.). I think oVirt doesn't know
>>> it is a snapshotted image and uses it as raw. Is that possible?
>>> I don't see any snapshot in Snapshots. I have also tried to list
>>> snapshots with qemu-img info and qemu-img snapshot -l, but they don't
>>> show any snapshots in the image.
>>>
>>> Really, I don't know how this is possible.
>>>
>>> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# qemu-img info
>>> 5974fd33-af4c-4e3b-aadb-bece6054eb6b
>>> image: 5974fd33-af4c-4e3b-aadb-bece6054eb6b
>>> file format: qcow2
>>> virtual size: 13T (13958643712000 bytes)
>>> disk size: 12T
>>> cluster_size: 65536
>>> backing file:
>>> ../8d815282-6957-41c0-bb3e-6c8f4a23a64b/723ad5aa-02f6-4067-ac75-0ce0a761627f
>>> backing file format: raw
>>> Format specific information:
>>> compat: 0.10
>>>
>>> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# qemu-img info
>>> 723ad5aa-02f6-4067-ac75-0ce0a761627f
>>> image: 723ad5aa-02f6-4067-ac75-0ce0a761627f
>>> file format: raw
>>> virtual size: 2.0T (2147483648000 bytes)
>>> disk size: 244G
>>>
>>> [root@storage1 8d815282-6957-41c0-bb3e-6c8f4a23a64b]# ll
>>> total 13096987560
>>> -rw-rw----. 1 36 36 13149448896512 Nov 13 13:42
>>> 5974fd33-af4c-4e3b-aadb-bece6054eb6b
>>> -rw-rw----. 1 36 36 1048576 Nov 13 19:34
>>> 5974fd33-af4c-4e3b-aadb-bece6054eb6b.lease
>>> -rw-r--r--. 1 36 36 262 Nov 13 19:54
>>> 5974fd33-af4c-4e3b-aadb-bece6054eb6b.meta
>>> -rw-rw----. 1 36 36 2147483648000 Jul 8 2016
>>> 723ad5aa-02f6-4067-ac75-0ce0a761627f
>>> -rw-rw----. 1 36 36 1048576 Jul 7 2016
>>> 723ad5aa-02f6-4067-ac75-0ce0a761627f.lease
>>> -rw-r--r--. 1 36 36 335 Nov 13 19:52
>>> 723ad5aa-02f6-4067-ac75-0ce0a761627f.meta
>>>
>>> qemu-img snapshot -l 5974fd33-af4c-4e3b-aadb-bece6054eb6b
>>>
>>> (nothing)
>>>
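The empty listing above is consistent with how oVirt snapshots work: qemu-img snapshot -l shows only internal qcow2 snapshots, while oVirt creates external snapshots as backing-file chains, which show up in the "backing file:" line of qemu-img info instead. A sketch for following such a chain by parsing that output (assumes the plain "backing file: <path>" format shown earlier in the thread):

```shell
#!/bin/sh
# Sketch: follow a qcow2 backing chain. oVirt snapshots are external, i.e.
# each layer's "backing file:" in qemu-img info points at its parent, while
# "qemu-img snapshot -l" lists only internal snapshots (hence the empty
# output above).

backing_of() {
    # Extract the backing-file path from "qemu-img info" output on stdin;
    # the awk strips any trailing "(actual path: ...)" annotation.
    sed -n 's/^backing file: //p' | awk '{print $1}'
}

# Live usage (run in the image directory, as in the listing above):
#   img=5974fd33-af4c-4e3b-aadb-bece6054eb6b
#   while [ -n "$img" ]; do
#       echo "$img"
#       img=$(qemu-img info "$img" | backing_of)
#   done
```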
>>> Because it is a very big (13 TB) disk, I can't migrate it to another
>>> image; I don't have enough free space. So I would just like to use it
>>> in oVirt like in the past.
>>>
>>> I have a very old ovirt (3.5)
>>>
>>> How can I use this disk?
>>>
>>> Thanks in advance,
>>>
>>> Regards,
>>>
>>> Tibor
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>>
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>