[ovirt-users] moving storage and importing vms issue

Jiří Sléžka jiri.slezka at slu.cz
Tue Oct 6 21:29:02 UTC 2015


Hello,

I did not have much time to deal with this issue until today. Happily, I 
have recovered the lost disk image.

As I mentioned before, I found the lost disk image (volume), but it 
wasn't accessible because its logical volume (I'm using FC storage) 
wasn't active.

I double-checked that this image is not used anywhere and activated it:

lvchange -a y 
/dev/088e7ed9-84c7-4fbd-a570-f37fa986a772/0681822f-3ac8-473b-95ce-380f8ab4de06

Then I was able to back up my data (in fact, I created a new VM disk and 
dd'd this volume onto it).
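For the archive, the recovery boiled down to roughly this (a sketch of 
what I ran as root on the SPM host; NEW_DISK_LV is a placeholder for the 
device path of the freshly created disk, which can be looked up with 
getVolumePath):

```shell
# Sketch of the recovery. The SD and volume UUIDs are the ones from this
# thread; on block storage the LV lives at /dev/<sd_uuid>/<vol_uuid>.
SD_UUID=088e7ed9-84c7-4fbd-a570-f37fa986a772
VOL_UUID=0681822f-3ac8-473b-95ce-380f8ab4de06
LV_PATH=/dev/$SD_UUID/$VOL_UUID

# The guard keeps this a no-op on hosts that don't see the storage
# domain's VG (the VG is named after the SD UUID).
if vgs "$SD_UUID" >/dev/null 2>&1; then
    lvchange -a y "$LV_PATH"                  # activate: device node appears
    dd if="$LV_PATH" of="$NEW_DISK_LV" bs=1M  # raw copy onto the new disk's LV
    lvchange -a n "$LV_PATH"                  # deactivate the orphan again
fi
```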

By the way, after the successful recovery I still have this orphaned 
image on my storage, and it is not visible from the manager. How can I 
correctly remove it (from both the storage and the engine)?
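My naive guess for the storage side would be plain LVM removal once I am 
certain nothing references the image; just a sketch, untested, and it 
would still leave any engine-side metadata untouched:

```shell
# Untested guess: drop the orphaned volume's LV from the storage
# domain's VG (on block storage the VG is named after the SD UUID).
# Destructive -- only once it is certain the engine has no record of
# the image. UUIDs are the ones from this thread.
SD_UUID=088e7ed9-84c7-4fbd-a570-f37fa986a772
VOL_UUID=0681822f-3ac8-473b-95ce-380f8ab4de06

# The guard keeps this a no-op on hosts that don't see the storage domain.
if vgs "$SD_UUID" >/dev/null 2>&1; then
    lvchange -a n "/dev/$SD_UUID/$VOL_UUID"   # make sure it is not active
    lvremove -f "/dev/$SD_UUID/$VOL_UUID"
fi
```

But I would rather hear the supported way, since the engine may still 
hold metadata about the image.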

It looks like the Orphaned Images utility 
(http://www.ovirt.org/Features/Orphaned_Images) would have helped a lot ;-)
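Until such a utility exists, the comparison I did by hand can be 
approximated with comm. A sketch with the two lists inlined as sample 
data; in reality storage_images.txt would come from "vdsClient -s 0 
getImagesList <sd_uuid>" and engine_images.txt from the engine's Disks 
tab or database:

```shell
# Sketch: images that the storage domain reports but the engine does not
# know about are orphan candidates. Sample data for illustration only.
printf '%s\n' \
  346ad5af-9db8-46eb-9a45-172ce3213496 \
  be5c56de-6a22-4d1a-8579-f0f5d501d90c \
  e15288bc-30ec-4a77-837b-bdc7de37a08b | sort > storage_images.txt
printf '%s\n' \
  e15288bc-30ec-4a77-837b-bdc7de37a08b | sort > engine_images.txt

# comm -23 prints lines unique to the first (storage-side) list
comm -23 storage_images.txt engine_images.txt
```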

Cheers, Jiri


>
> Any hope and/or hint for me?
>
> Before I moved the storage I (partly live) migrated disks to this
> storage (we have about 5 LUNs). Probably there could be some issues.
> Just a guess: could it mean that some disks stayed on the original
> storage as orphaned images?
>
> It would be useful to have some low-level utility to display all images
> (including orphaned ones), their properties, and their correlation with VMs.
>
> Kind regards,
>
> Jiri
>
>
>
> On 10.9.2015 at 16:31, Eli Mesika wrote:
>> Adding Allon M
>>
>> ----- Original Message -----
>>> From: "Jiří Sléžka" <jiri.slezka at slu.cz>
>>> To: "Eli Mesika" <emesika at redhat.com>
>>> Cc: users at ovirt.org, "Omer Frenkel" <ofrenkel at redhat.com>
>>> Sent: Thursday, September 10, 2015 4:07:48 PM
>>> Subject: Re: [ovirt-users] moving storage and importing vms issue
>>>
>>> Hello,
>>>
>>>> ----- Original Message -----
>>>>> From: "Jiří Sléžka" <jiri.slezka at slu.cz>
>>>>> To: emesika at redhat.com
>>>>> Cc: users at ovirt.org
>>>>> Sent: Thursday, September 10, 2015 1:50:14 PM
>>>>> Subject: Re: [ovirt-users] moving storage and importing vms issue
>>>>>
>>>>> Hello,
>>>>>
>>>>>> ----- Original Message -----
>>>>>>> From: "Jiří Sléžka" <jiri.slezka at slu.cz>
>>>>>>> To: users at ovirt.org
>>>>>>> Sent: Thursday, September 10, 2015 1:30:29 AM
>>>>>>> Subject: [ovirt-users] moving storage and importing vms issue
>>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I am working on consolidating our RHEV/oVirt servers, and I moved
>>>>>>> one storage domain to a new oVirt datacenter (put it into
>>>>>>> maintenance, detached it from the old datacenter, and imported it
>>>>>>> into the new one), which worked pretty well.
>>>>>>>
>>>>>>> Then I tried to import all the VMs, which also worked great,
>>>>>>> except for three of them.
>>>>>>>
>>>>>>> These VMs are stuck in the VM Import sub-tab and their import
>>>>>>> attempts fail quietly (I can only see the failed task "Importing
>>>>>>> VM clavius-winxp from configuration to Cluster CIT-oVirt", but no
>>>>>>> related event or explanation).
>>>>>>>
>>>>>>> There is only one host in this datacenter/cluster, and it is the
>>>>>>> SPM. I can't find anything interesting in vdsm.log (a short span
>>>>>>> around the import time is attached).
>>>>>>
>>>>>> Can you please also attach engine.log?
>>>>>
>>>>> sure
>>>>>
>>>>> Well, here I can see an error... it looks like some DB and/or
>>>>> snapshot issue.
>>>>
>>>> Yes, it seems ImportVmFromConfigurationCommand tries to add
>>>> snapshots with the empty GUID (000......0).
>>>> This causes a violation of the primary key of the snapshots table.
>>>> CCing Omer F on that.
>>>>
>>>>>
>>>>> Well, and it looks like I also lost one secondary disk from a
>>>>> correctly imported VM.
>>>>>
>>>>> Is there a way to show all images on a given storage domain?
>>>>>
>>>>> I found that my storage domain is this:
>>>>>
>>>>> [root at ovirt04 ~]# vdsClient -s 0 getStorageDomainInfo
>>>>> 088e7ed9-84c7-4fbd-a570-f37fa986a772
>>>>>     uuid = 088e7ed9-84c7-4fbd-a570-f37fa986a772
>>>>>     vguuid = MkMpr6-o9c1-LBUq-rZ0E-ZRSg-X31T-2aU1PV
>>>>>     state = OK
>>>>>     version = 3
>>>>>     role = Master
>>>>>     type = FCP
>>>>>     class = Data
>>>>>     pool = ['00000002-0002-0002-0002-0000000002b9']
>>>>>     name = oVirt-SlowStorage
>>>>>
>>>>> but I have had no luck finding how to display all the images on it.
>>>>
>>>> try
>>>>
>>>> # vdsClient -s 0 getImagesList "088e7ed9-84c7-4fbd-a570-f37fa986a772"
>>>
>>> Yes, it works :-)
>>>
>>> Now I have a list of imgUUIDs on this storage. When I compare it
>>> against the Disks tab in the oVirt manager, I see 5 images that are
>>> not visible in the manager:
>>>
>>> 346ad5af-9db8-46eb-9a45-172ce3213496
>>> 45493042-67f5-4dcd-8dae-5b2c213aa95a
>>> fb8f3165-5976-4094-9d37-ea0b09124547
>>> e15288bc-30ec-4a77-837b-bdc7de37a08b
>>> be5c56de-6a22-4d1a-8579-f0f5d501d90c
>>>
>>> Then I tried to find out anything about these images:
>>>
>>> [root at ovirt04 ~]# vdsClient -s 0 getVolumesList
>>> "088e7ed9-84c7-4fbd-a570-f37fa986a772"
>>> "00000002-0002-0002-0002-0000000002b9"
>>> "346ad5af-9db8-46eb-9a45-172ce3213496"
>>> eeca0e49-ba6d-4b4b-9eb4-731b90b48091 : Exported by virt-v2v.
>>> da00feb8-991d-4b91-b424-6931daf00c83 : Parent is
>>> eeca0e49-ba6d-4b4b-9eb4-731b90b48091
>>>
>>> ----
>>>
>>> [root at ovirt04 ~]# vdsClient -s 0 getVolumesList
>>> "088e7ed9-84c7-4fbd-a570-f37fa986a772"
>>> "00000002-0002-0002-0002-0000000002b9"
>>> "45493042-67f5-4dcd-8dae-5b2c213aa95a"
>>>
>>> d2916b5d-50e4-482c-aa6b-e26d2c78ef46 : Exported by virt-v2v.
>>>
>>> ----
>>>
>>> [root at ovirt04 ~]# vdsClient -s 0 getVolumesList
>>> "088e7ed9-84c7-4fbd-a570-f37fa986a772"
>>> "00000002-0002-0002-0002-0000000002b9"
>>> "fb8f3165-5976-4094-9d37-ea0b09124547"
>>> cc83caa4-e366-4fd6-94b7-d16089aa29d6 : Parent is
>>> 53c5003d-80de-4dfd-b5d8-50537a3a54d6
>>>
>>> 53c5003d-80de-4dfd-b5d8-50537a3a54d6 : imported by virt-v2v.
>>>
>>> ----
>>>
>>> [root at ovirt04 ~]# vdsClient -s 0 getVolumesList
>>> "088e7ed9-84c7-4fbd-a570-f37fa986a772"
>>> "00000002-0002-0002-0002-0000000002b9"
>>> "e15288bc-30ec-4a77-837b-bdc7de37a08b"
>>>
>>> 2f2c2a1c-6dcc-436c-962c-00e4e074a39a :
>>> {"DiskAlias":"polymatheia1.slu.cz_Disk1","DiskDescription":""}.
>>>
>>> ----
>>>
>>> [root at ovirt04 ~]# vdsClient -s 0 getVolumesList
>>> "088e7ed9-84c7-4fbd-a570-f37fa986a772"
>>> "00000002-0002-0002-0002-0000000002b9"
>>> "be5c56de-6a22-4d1a-8579-f0f5d501d90c"
>>>
>>> 0681822f-3ac8-473b-95ce-380f8ab4de06 :
>>>
>>> ----
>>>
>>> When I look at the last case:
>>>
>>> [root at ovirt04 ~]# vdsClient -s 0 getVolumeInfo
>>> "088e7ed9-84c7-4fbd-a570-f37fa986a772"
>>> "00000002-0002-0002-0002-0000000002b9"
>>> "be5c56de-6a22-4d1a-8579-f0f5d501d90c"
>>> "0681822f-3ac8-473b-95ce-380f8ab4de06"
>>>     status = OK
>>>     domain = 088e7ed9-84c7-4fbd-a570-f37fa986a772
>>>     capacity = 322122547200
>>>     voltype = LEAF
>>>     description =
>>>     parent = 00000000-0000-0000-0000-000000000000
>>>     format = RAW
>>>     image = be5c56de-6a22-4d1a-8579-f0f5d501d90c
>>>     uuid = 0681822f-3ac8-473b-95ce-380f8ab4de06
>>>     disktype = 2
>>>     legality = LEGAL
>>>     mtime = 0
>>>     apparentsize = 322122547200
>>>     truesize = 322122547200
>>>     type = PREALLOCATED
>>>     children = []
>>>     pool =
>>>     ctime = 1440611370
>>>
>>> [root at ovirt04 ~]# vdsClient -s 0 getVolumeSize
>>> "088e7ed9-84c7-4fbd-a570-f37fa986a772"
>>> "00000002-0002-0002-0002-0000000002b9"
>>> "be5c56de-6a22-4d1a-8579-f0f5d501d90c"
>>> "0681822f-3ac8-473b-95ce-380f8ab4de06"
>>>     apparentsize = '322122547200'
>>>     truesize = '322122547200'
>>>
>>> [root at ovirt04 ~]# vdsClient -s 0 getVolumePath
>>> "088e7ed9-84c7-4fbd-a570-f37fa986a772"
>>> "00000002-0002-0002-0002-0000000002b9"
>>> "be5c56de-6a22-4d1a-8579-f0f5d501d90c"
>>> "0681822f-3ac8-473b-95ce-380f8ab4de06"
>>> /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/be5c56de-6a22-4d1a-8579-f0f5d501d90c/0681822f-3ac8-473b-95ce-380f8ab4de06
>>>
>>>
>>> [root at ovirt04 ~]# ll
>>> /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/be5c56de-6a22-4d1a-8579-f0f5d501d90c/0681822f-3ac8-473b-95ce-380f8ab4de06
>>>
>>>
>>>
>>> lrwxrwxrwx. 1 vdsm kvm 78 Sep  9 22:36
>>> /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/be5c56de-6a22-4d1a-8579-f0f5d501d90c/0681822f-3ac8-473b-95ce-380f8ab4de06
>>>
>>> ->
>>> /dev/088e7ed9-84c7-4fbd-a570-f37fa986a772/0681822f-3ac8-473b-95ce-380f8ab4de06
>>>
>>>
>>> but
>>> /dev/088e7ed9-84c7-4fbd-a570-f37fa986a772/0681822f-3ac8-473b-95ce-380f8ab4de06
>>>
>>> does not seem to exist.
>>>
>>>
>>> Is there any chance to recover these disks?
>>>
>>> Thanks,
>>>
>>> Jiri
>>>
>>>
>>>
>>>
>>>>
>>>>
>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Jiri
>>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>> Could you point me to where I should look, please?
>>>>>>>
>>>>>>> The storage (FC) was formerly attached to RHEV 3.5.3 on RHEL 6.7
>>>>>>> and was imported into oVirt 3.5.4 on CentOS 7.1.
>>>>>>>
>>>>>>> Thanks in advance,
>>>>>>>
>>>>>>> Jiri Slezka
>>>>>>>
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list
>>>>>>> Users at ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>
>>>>>
>>>>>
>>>
>>>
>
>
>
>

