On 05/02/2016 05:57 PM, Maor Lipchuk wrote:
On Mon, May 2, 2016 at 1:08 PM, Sahina Bose <sabose@redhat.com> wrote:
On 05/02/2016 03:15 PM, Maor Lipchuk wrote:
>
>
> On Mon, May 2, 2016 at 12:29 PM, Sahina Bose <sabose@redhat.com> wrote:
>
>
>
> On 05/01/2016 05:33 AM, Maor Lipchuk wrote:
>> Hi Sahina,
>>
>> The disks with snapshots should be part of the VMs; once you
>> register those VMs, you should see those disks in the Disks
>> sub-tab.
>
> Maor,
>
> I was unable to import the VM, which prompted the question - I
> assumed we had to register the disks first. So maybe I need to
> troubleshoot why I could not import VMs from the domain first.
> It fails with the error "Image does not exist". Where does it
> look for the volume IDs that are passed to
> GetImageInfoVDSCommand - the OVF disk?
>
>
> In engine.log
>
> 2016-05-02 04:15:14,812 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c]
> IrsBroker::getImageInfo::Failed getting image info
> imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist
> on domainName='sahinaslave',
> domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7', error
> code: 'VolumeDoesNotExist', message: Volume does not exist:
> (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
> 2016-05-02 04:15:14,814 WARN
> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c]
> executeIrsBrokerCommand: getImageInfo on
> '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception -
> assuming image doesn't exist:
> IRSGenericException: IRSErrorException: VolumeDoesNotExist
> 2016-05-02 04:15:14,814 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c]
> FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
> 2016-05-02 04:15:14,814 WARN
> [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand]
> (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action
> 'ImportVmFromConfiguration' failed for user admin@internal.
> Reasons:
> VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
>
>
>
> jsonrpc.Executor/2::DEBUG::2016-05-02
> 13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'Volume.getInfo' in bridge with
> {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2',
> u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b',
> u'volumeID': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',
> u'storagedomainID': u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}
>
> jsonrpc.Executor/2::DEBUG::2016-05-02
> 13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath)
> validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
> jsonrpc.Executor/2::ERROR::2016-05-02
> 13:45:13,914::task::866::Storage.TaskManager.Task::(_setError)
> Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/task.py", line 873, in _run
> return fn(*args, **kargs)
> File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
> res = f(*args, **kwargs)
> File "/usr/share/vdsm/storage/hsm.py", line 3162, in
> getVolumeInfo
> volUUID=volUUID).getInfo()
> File "/usr/share/vdsm/storage/sd.py", line 457, in
> produceVolume
> volUUID)
> File "/usr/share/vdsm/storage/glusterVolume.py", line 16,
> in __init__
> volUUID)
> File "/usr/share/vdsm/storage/fileVolume.py", line 58, in
> __init__
> volume.Volume.__init__(self, repoPath, sdUUID, imgUUID,
> volUUID)
> File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
> self.validate()
> File "/usr/share/vdsm/storage/volume.py", line 194, in validate
> self.validateVolumePath()
> File "/usr/share/vdsm/storage/fileVolume.py", line 540, in
> validateVolumePath
> raise se.VolumeDoesNotExist(self.volUUID)
> VolumeDoesNotExist: Volume does not exist:
> (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
>
> When I look at the tree output - there's no
> 6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.
>
>
> ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
> │ │ │ ├── 34e46104-8fad-4510-a5bf-0730b97a6659
> │ │ │ ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
> │ │ │ ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
> │ │ │ ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
> │ │ │ ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
> │ │ │ ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
> │ │ │ ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
> │ │ │ ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
> │ │ │ └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
>
>
>
> Usually the "image does not exists" message is prompted once the
> VM's disk is managed in a different storage domain which were not
> imported yet.
>
> A few questions:
> 1. Were there any other storage domains which are not present
> in the setup?
In the original RHEV instance there were 3 storage domains:
i) Hosted engine storage domain: engine
ii) Master data domain: vmstore
iii) An export domain: expVol (no data here)
To my backup RHEV server, I only imported vmstore.
Just to be sure, can you look at the server on which the engine storage
domain resides?
Is there a chance that 6f4da17a-05a2-4d77-8091-d2fca3bbea1c could be
there?
Also, do you have an old engine log? Is there a chance to look for
image 6f4da17a-05a2-4d77-8091-d2fca3bbea1c in there, to see when it
was created and whether there were any problems in the process?
I checked the engine logs and it seems that image
6f4da17a-05a2-4d77-8091-d2fca3bbea1c was removed as part of a snapshot
merge. Could it be that the OVF was not updated?
engine.log-20160426.gz:2016-04-26 01:54:28,576 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
(pool-5-thread-1) [6f94943f] START, MergeVDSCommand(HostName = rhsdev9,
MergeVDSCommandParameters:{runAsync='true',
hostId='b9a662bf-e05e-4e6a-9dfe-ec1be76d48e7',
vmId='e2b89e45-6f99-465d-aa08-fc4f746f0dd0',
storagePoolId='00000001-0001-0001-0001-000000000305',
storageDomainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7',
imageGroupId='c52e4e02-dc6c-4a77-a184-9fcab88106c2',
imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c',
baseImageId='90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa',
topImageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c', bandwidth='0'}), log
id: 5a65a864
engine.log-20160426.gz:2016-04-26 01:55:03,629 INFO
[org.ovirt.engine.core.bll.MergeCommandCallback]
(DefaultQuartzScheduler_Worker-57) [6f94943f] Merge command has
completed for images
'90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa'..'6f4da17a-05a2-4d77-8091-d2fca3bbea1c'
engine.log-20160426.gz:2016-04-26 01:55:06,707 INFO
[org.ovirt.engine.core.bll.MergeStatusCommand] (pool-5-thread-2)
[3c72e9f8] Successfully removed volume(s):
[6f4da17a-05a2-4d77-8091-d2fca3bbea1c]
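
(For reference, a minimal sketch of how such entries can be located - it
assumes the current and rotated engine logs live in the default
/var/log/ovirt-engine directory on the engine host:

  # search the live log and the rotated, compressed ones for the volume ID
  grep  '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' /var/log/ovirt-engine/engine.log
  zgrep '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' /var/log/ovirt-engine/engine.log-*.gz
)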
> 2. Can you look for the image ID
> 6f4da17a-05a2-4d77-8091-d2fca3bbea1c on your storage server
> (search all the rest of the storage domains as well)?
No - there was no file with this ID (checked in the original
RHEV instance as well)
[root@dhcp42-105 mnt]# pwd
/rhev/data-center/mnt
[root@dhcp42-105 mnt]# find . -name
6f4da17a-05a2-4d77-8091-d2fca3bbea1c
[root@dhcp42-105 mnt]#
> Were there any operations done on the VM before the recovery,
> such as removing a disk, moving a disk, or creating a new disk?
No. The only operation done was the creation of a snapshot - and it
completed before the recovery was attempted.
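
(To check whether a stale OVF is the cause, one possible step - a sketch
only: the OVF_STORE image/volume IDs below are placeholders to fill in
from the imported domain, and it assumes the OVF_STORE volume is a tar
archive containing one <vm_id>.ovf entry per VM, which is how the engine
normally persists the OVF data:

  # path of an OVF_STORE volume on the imported (slave) domain - placeholders
  OVF=/rhev/data-center/mnt/glusterSD/10.70.40.112:_slavevol/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/images/OVF_STORE_IMAGE_ID/OVF_STORE_VOLUME_ID
  tar -tvf "$OVF"        # list the per-VM OVF entries held in the store
  tar -xOf "$OVF" e2b89e45-6f99-465d-aa08-fc4f746f0dd0.ovf \
    | grep -c '6f4da17a-05a2-4d77-8091-d2fca3bbea1c'
  # a non-zero count would mean the OVF still references the removed volume
)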
>
> Regards,
> Maor
>
>>
>> Regarding floating disks (without snapshots), you can
>> register them through REST.
>> If you are working on the master branch, there should also be
>> a sub-tab dedicated to those.
>>
>> Regards,
>> Maor
>>
>> On Tue, Apr 26, 2016 at 1:44 PM, Sahina Bose
>> <sabose@redhat.com> wrote:
>>
>> Hi all,
>>
>> I have a gluster volume used as a data storage domain
>> which is replicated to a slave gluster volume (say,
>> slavevol) using gluster's geo-replication feature.
>>
>> Now, in a new oVirt instance, I use the import storage
>> domain feature to import the slave gluster volume. The "VM
>> Import" tab correctly lists the VMs that were present in
>> my original gluster volume. However, the "Disks" tab is
>> empty.
>>
>> GET
>> https://new-ovitt/api/storagedomains/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/disks;unregistered
>> -->
>> <disks/>
>>
>>
>> In the code, GetUnregisteredDiskQuery skips the image if
>> volumesList.size() != 1, with a comment that we can't deal
>> with snapshots.
>>
>> How do I recover the disks/images in this case?
>>
>>
>> Further info:
>>
>> /rhev/data-center/mnt/glusterSD/10.70.40.112:_slavevol
>> ├── 5e1a37cf-933d-424c-8e3d-eb9e40b690a7
>> │ ├── dom_md
>> │ │ ├── ids
>> │ │ ├── inbox
>> │ │ ├── leases
>> │ │ ├── metadata
>> │ │ └── outbox
>> │ ├── images
>> │ │ ├── 202efaa6-0d01-40f3-a541-10eee920d221
>> │ │ │ ├── eb701046-6ee1-4c9d-b097-e51a8fd283e1
>> │ │ │ ├── eb701046-6ee1-4c9d-b097-e51a8fd283e1.lease
>> │ │ │ └── eb701046-6ee1-4c9d-b097-e51a8fd283e1.meta
>> │ │ ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
>> │ │ │ ├── 34e46104-8fad-4510-a5bf-0730b97a6659
>> │ │ │ ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
>> │ │ │ ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
>> │ │ │ ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
>> │ │ │ ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
>> │ │ │ ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
>> │ │ │ ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
>> │ │ │ ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
>> │ │ │ └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
>> │ │ ├── c75de5b7-aa88-48d7-ba1b-067181eac6ae
>> │ │ │ ├── ff09e16a-e8a0-452b-b95c-e160e68d09a9
>> │ │ │ ├── ff09e16a-e8a0-452b-b95c-e160e68d09a9.lease
>> │ │ │ └── ff09e16a-e8a0-452b-b95c-e160e68d09a9.meta
>> │ │ ├── efa94a0d-c08e-4ad9-983b-4d1d76bca865
>> │ │ │ ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7
>> │ │ │ ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7.lease
>> │ │ │ ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7.meta
>> │ │ │ ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4
>> │ │ │ ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4.lease
>> │ │ │ ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4.meta
>> │ │ │ ├── e79a8821-bb4a-436a-902d-3876f107dd99
>> │ │ │ ├── e79a8821-bb4a-436a-902d-3876f107dd99.lease
>> │ │ │ └── e79a8821-bb4a-436a-902d-3876f107dd99.meta
>> │ │ └── f5eacc6e-4f16-4aa5-99ad-53ac1cda75b7
>> │ │ ├── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d
>> │ │ ├── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d.lease
>> │ │ └── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d.meta
>> │ └── master
>> │ ├── tasks
>> │ └── vms
>> └── __DIRECT_IO_TEST__
>>
>> engine.log:
>> 2016-04-26 06:37:57,715 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
>> (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] FINISH,
>> GetImageInfoVDSCommand, return:
>> org.ovirt.engine.core.common.businessentities.storage.DiskImage@d4b3ac2f,
>> log id: 7b693bad
>> 2016-04-26 06:37:57,724 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVolumesListVDSCommand]
>> (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] START,
>> GetVolumesListVDSCommand(
>> StoragePoolDomainAndGroupIdBaseVDSCommandParameters:{runAsync='true',
>> storagePoolId='ed338557-5995-4634-97e2-15454a9d8800',
>> ignoreFailoverLimit='false',
>> storageDomainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7',
>> imageGroupId='c52e4e02-dc6c-4a77-a184-9fcab88106c2'}),
>> log id: 741b9214
>> 2016-04-26 06:37:58,748 INFO
>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVolumesListVDSCommand]
>> (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] FINISH,
>> GetVolumesListVDSCommand, return:
>> [90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa,
>> 766a15b9-57db-417d-bfa0-beadbbb84ad2,
>> 34e46104-8fad-4510-a5bf-0730b97a6659], log id: 741b9214
>>
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
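
(For completeness, regarding the floating-disk registration through REST
that Maor mentions above - a minimal sketch only, assuming the 3.6-era
collection used for the GET above also accepts a POST with the disk ID
to register it; the credentials and the disk ID are placeholders:

  # DISK_GROUP_ID is the image-group directory name under images/ on the domain
  curl -k -u admin@internal:PASSWORD \
       -H 'Content-Type: application/xml' \
       -d '<disk id="DISK_GROUP_ID"/>' \
       'https://new-ovitt/api/storagedomains/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/disks;unregistered'
)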