<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, May 2, 2016 at 12:29 PM, Sahina Bose <span dir="ltr">&lt;<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
  
    
  
  <div text="#000000" bgcolor="#FFFFFF"><span class="">
    <br>
    <br>
    <div>On 05/01/2016 05:33 AM, Maor Lipchuk
      wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">Hi Sahina,
        <div><br>
        </div>
        <div>The disks with snapshots should be part of the VMs; once you register those VMs, you should see those disks in the Disks sub-tab.<br>
        </div>
      </div>
    </blockquote>
    <br></span>
    Maor,<br>
    <br>
    I was unable to import the VM, which prompted my question - I assumed we had
    to register the disks first. So maybe I first need to troubleshoot why I
    could not import VMs from the domain.<br>
    It fails with an error &quot;Image does not exist&quot;. Where does it look
    for volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?<br></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div text="#000000" bgcolor="#FFFFFF"><br>
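    Yes - when importing a VM from configuration, the engine reads the VM's OVF (kept on the domain's OVF_STORE disk) and passes the image/volume IDs it finds there to GetImageInfoVDSCommand. A minimal sketch of pulling those ID pairs out of an OVF fragment; the sample XML and the <tt>disk_volumes</tt> helper are illustrative assumptions, not the engine's actual code:<br>

```python
# Sketch: extract (image group id, volume id) pairs from a VM OVF.
# The OVF fragment below is a trimmed, hypothetical sample; real OVF_STORE
# contents are a tar archive with one full .ovf per VM.
import xml.etree.ElementTree as ET

OVF_URI = "http://schemas.dmtf.org/ovf/envelope/1/"

SAMPLE_OVF = """<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/">
  <References>
    <File ovf:href="c52e4e02-dc6c-4a77-a184-9fcab88106c2/6f4da17a-05a2-4d77-8091-d2fca3bbea1c"/>
  </References>
</ovf:Envelope>"""

def disk_volumes(ovf_xml):
    """Return (image_group_id, volume_id) pairs referenced by the OVF."""
    root = ET.fromstring(ovf_xml)
    pairs = []
    for f in root.iter("File"):
        # ovf:href is conventionally "<imageGroupId>/<volumeId>"
        href = f.get("{%s}href" % OVF_URI)
        if href and "/" in href:
            img, vol = href.split("/", 1)
            pairs.append((img, vol))
    return pairs

print(disk_volumes(SAMPLE_OVF))
```

If a volume ID listed in the OVF is absent from the filesystem, you get exactly the VolumeDoesNotExist failure shown below.<br>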
    In engine.log<br>
    <br>
    2016-05-02 04:15:14,812 ERROR
    [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
    (ajp-/127.0.0.1:8702-1) [32f0b27c]
    IrsBroker::getImageInfo::Failed getting image info
    imageId=&#39;6f4da17a-05a2-4d77-8091-d2fca3bbea1c&#39; does not exist on
    domainName=&#39;sahinaslave&#39;, domainId=&#39;5e1a37cf-933d-424c-8e3d-eb9e40b690a7&#39;,
    error code: &#39;VolumeDoesNotExist&#39;, message: Volume does not exist:
    (u&#39;6f4da17a-05a2-4d77-8091-d2fca3bbea1c&#39;,)<br>
    2016-05-02 04:15:14,814 WARN
    [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
    (ajp-/127.0.0.1:8702-1) [32f0b27c] executeIrsBrokerCommand: getImageInfo on
    &#39;6f4da17a-05a2-4d77-8091-d2fca3bbea1c&#39; threw an exception - assuming
    image doesn&#39;t exist:
    IRSGenericException: IRSErrorException: VolumeDoesNotExist<br>
    2016-05-02 04:15:14,814 INFO
    [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
    (ajp-/127.0.0.1:8702-1) [32f0b27c] FINISH, DoesImageExistVDSCommand,
    return: false, log id: 3366f39b<br>
    2016-05-02 04:15:14,814 WARN
    [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand]
    (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action
    &#39;ImportVmFromConfiguration&#39; failed for user admin@internal. Reasons:
    VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST<br>
    <br>
    <br>
    <br>
    jsonrpc.Executor/2::DEBUG::2016-05-02
    13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
    Calling &#39;Volume.getInfo&#39; in bridge with
    {u&#39;imageID&#39;: u&#39;c52e4e02-dc6c-4a77-a184-9fcab88106c2&#39;,
    u&#39;storagepoolID&#39;: u&#39;46ac4975-a84e-4e76-8e73-7971d0dadf0b&#39;,
    u&#39;volumeID&#39;: u&#39;6f4da17a-05a2-4d77-8091-d2fca3bbea1c&#39;,
    u&#39;storagedomainID&#39;: u&#39;5e1a37cf-933d-424c-8e3d-eb9e40b690a7&#39;}<br>
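    The same Volume.getInfo query can be reproduced directly against VDSM on the host, to confirm the storage side (not the engine) is reporting the volume missing. A hedged sketch; the vdsClient argument order is from memory, so verify it with <tt>vdsClient -h</tt> first:<br>

```shell
# Hedged sketch: ask VDSM for the volume the import fails on, using the
# sdUUID / spUUID / imgUUID / volUUID from the jsonrpc call above.
# Run on the host that mounts the imported storage domain.
vdsClient -s 0 getVolumeInfo \
    5e1a37cf-933d-424c-8e3d-eb9e40b690a7 \
    46ac4975-a84e-4e76-8e73-7971d0dadf0b \
    c52e4e02-dc6c-4a77-a184-9fcab88106c2 \
    6f4da17a-05a2-4d77-8091-d2fca3bbea1c
```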
    <br>
    jsonrpc.Executor/2::DEBUG::2016-05-02
    13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath)
    validate path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c<br>
    jsonrpc.Executor/2::ERROR::2016-05-02
    13:45:13,914::task::866::Storage.TaskManager.Task::(_setError)
    Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error<br>
    Traceback (most recent call last):<br>
      File &quot;/usr/share/vdsm/storage/task.py&quot;, line 873, in _run<br>
        return fn(*args, **kargs)<br>
      File &quot;/usr/share/vdsm/logUtils.py&quot;, line 49, in wrapper<br>
        res = f(*args, **kwargs)<br>
      File &quot;/usr/share/vdsm/storage/hsm.py&quot;, line 3162, in getVolumeInfo<br>
        volUUID=volUUID).getInfo()<br>
      File &quot;/usr/share/vdsm/storage/sd.py&quot;, line 457, in produceVolume<br>
        volUUID)<br>
      File &quot;/usr/share/vdsm/storage/glusterVolume.py&quot;, line 16, in __init__<br>
        volUUID)<br>
      File &quot;/usr/share/vdsm/storage/fileVolume.py&quot;, line 58, in __init__<br>
        volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)<br>
      File &quot;/usr/share/vdsm/storage/volume.py&quot;, line 181, in __init__<br>
        self.validate()<br>
      File &quot;/usr/share/vdsm/storage/volume.py&quot;, line 194, in validate<br>
        self.validateVolumePath()<br>
      File &quot;/usr/share/vdsm/storage/fileVolume.py&quot;, line 540, in validateVolumePath<br>
        raise se.VolumeDoesNotExist(self.volUUID)<br>
    VolumeDoesNotExist: Volume does not exist:
    (u&#39;6f4da17a-05a2-4d77-8091-d2fca3bbea1c&#39;,)<br>
    <br>
    When I look at the tree output, there&#39;s no
    6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.<div><div class="h5"><br>
    <br>
    ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2<br>
    │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659<br>
    │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease<br>
    │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta<br>
    │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2<br>
    │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease<br>
    │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta<br>
    │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa<br>
    │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease<br>
    │   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta<br></div></div></div></blockquote><div><br></div><div><br></div><div>Usually the &quot;image does not exist&quot; message appears when the VM&#39;s disk is managed in a different storage domain that has not been imported yet.</div><div><br></div><div>A few questions:</div><div>1. Were there any other storage domains which are not present in the setup?</div><div>2. Can you look for the image id 6f4da17a-05a2-4d77-8091-d2fca3bbea1c on your storage server (search all the rest of the storage domains)?</div><div>3. Were there any operations performed on the VM before the recovery, such as removing a disk, moving a disk, or creating a new disk?<br></div><div><br></div><div>Regards,</div><div>Maor</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div text="#000000" bgcolor="#FFFFFF"><div><div class="h5">
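For question 2, a quick way to sweep every storage domain mounted on a host is a filesystem search; the mount-point layout below is assumed from the paths shown earlier in this thread and may differ in your deployment:<br>

```shell
# Hedged sketch: look for the missing volume across all mounted storage
# domains (adjust the mount root if your setup differs).
find /rhev/data-center/mnt -name '6f4da17a-05a2-4d77-8091-d2fca3bbea1c*' 2>/dev/null
# For a gluster-backed domain, also search the brick directories on the
# storage servers themselves, in case geo-replication placed the file
# somewhere other than the expected image directory.
```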
    <blockquote type="cite">
      <div dir="ltr">
        <div><br>
        </div>
        <div>Regarding floating disks (without snapshots), you can
          register them through REST.</div>
        <div>If you are working on the master branch, there should also be a dedicated sub-tab for those.</div>
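        <div>As a sketch of that REST call (endpoint shape as I recall it from the import-storage-domain feature; the engine host and password are placeholders, and DISK_ID would be an unregistered image id from the domain):</div>

```shell
# Hedged sketch: register a floating (snapshot-less) disk via the REST API.
# "engine-host" and PASSWORD are placeholders; the storage domain id is the
# one from this thread.
curl -k -u admin@internal:PASSWORD \
     -H "Content-Type: application/xml" \
     -X POST \
     -d '<disk id="DISK_ID"/>' \
     'https://engine-host/api/storagedomains/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/disks;unregistered'
```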
        <div><br>
        </div>
        <div>Regards,</div>
        <div>Maor</div>
        <div>
          <div>
            <div class="gmail_extra"><br>
              <div class="gmail_quote">On Tue, Apr 26, 2016 at 1:44 PM,
                Sahina Bose <span dir="ltr">&lt;<a href="mailto:sabose@redhat.com" target="_blank"></a><a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>&gt;</span>
                wrote:<br>
                <blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi
                  all,<br>
                  <br>
                  I have a gluster volume used as data storage domain
                  which is replicated to a slave gluster volume (say,
                  slavevol) using gluster&#39;s geo-replication feature.<br>
                  <br>
                  Now, in a new oVirt instance, I use the import storage
                  domain to import the slave gluster volume. The &quot;VM
                  Import&quot; tab correctly lists the VMs that were present
                  in my original gluster volume. However, the &quot;Disks&quot; tab
                  is empty.<br>
                  <br>
                  GET <a href="https://new-ovitt/api/storagedomains/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/disks;unregistered" rel="noreferrer" target="_blank">https://new-ovitt/api/storagedomains/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/disks;unregistered</a>
                  --&gt;<br>
                  &lt;disks/&gt;<br>
                  <br>
                  <br>
                  In the code (GetUnregisteredDiskQuery), if
                  volumesList.size() != 1, the image is skipped with a
                  comment that we can&#39;t deal with snapshots.<br>
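                  That skip can be sketched like this (illustrative Python, not the actual engine code): an image directory holding more than one volume implies a snapshot chain, and the whole image is passed over.<br>

```python
# Minimal sketch of the filtering described above: keep only images whose
# directory contains exactly one volume (no snapshot chain).
# Function and variable names here are illustrative, not engine identifiers.
def unregistered_disks(image_volumes):
    """Map image group id -> its single volume id, skipping snapshotted images."""
    return {img: vols[0] for img, vols in image_volumes.items() if len(vols) == 1}

# Image/volume ids taken from the tree listing below: the first image has one
# volume, the second has three (a snapshot chain) and is therefore skipped.
tree = {
    "202efaa6-0d01-40f3-a541-10eee920d221": ["eb701046-6ee1-4c9d-b097-e51a8fd283e1"],
    "c52e4e02-dc6c-4a77-a184-9fcab88106c2": [
        "34e46104-8fad-4510-a5bf-0730b97a6659",
        "766a15b9-57db-417d-bfa0-beadbbb84ad2",
        "90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa",
    ],
}
print(unregistered_disks(tree))  # only the single-volume image survives
```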
                  <br>
                  How do I recover the disks/images in this case?<br>
                  <br>
                  <br>
                  Further info:<br>
                  <br>
                  /rhev/data-center/mnt/glusterSD/10.70.40.112:_slavevol<br>
                  ├── 5e1a37cf-933d-424c-8e3d-eb9e40b690a7<br>
                  │   ├── dom_md<br>
                  │   │   ├── ids<br>
                  │   │   ├── inbox<br>
                  │   │   ├── leases<br>
                  │   │   ├── metadata<br>
                  │   │   └── outbox<br>
                  │   ├── images<br>
                  │   │   ├── 202efaa6-0d01-40f3-a541-10eee920d221<br>
                  │   │   │   ├── eb701046-6ee1-4c9d-b097-e51a8fd283e1<br>
                  │   │   │   ├── eb701046-6ee1-4c9d-b097-e51a8fd283e1.lease<br>
                  │   │   │   └── eb701046-6ee1-4c9d-b097-e51a8fd283e1.meta<br>
                  │   │   ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2<br>
                  │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659<br>
                  │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease<br>
                  │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta<br>
                  │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2<br>
                  │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease<br>
                  │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta<br>
                  │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa<br>
                  │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease<br>
                  │   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta<br>
                  │   │   ├── c75de5b7-aa88-48d7-ba1b-067181eac6ae<br>
                  │   │   │   ├── ff09e16a-e8a0-452b-b95c-e160e68d09a9<br>
                  │   │   │   ├── ff09e16a-e8a0-452b-b95c-e160e68d09a9.lease<br>
                  │   │   │   └── ff09e16a-e8a0-452b-b95c-e160e68d09a9.meta<br>
                  │   │   ├── efa94a0d-c08e-4ad9-983b-4d1d76bca865<br>
                  │   │   │   ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7<br>
                  │   │   │   ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7.lease<br>
                  │   │   │   ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7.meta<br>
                  │   │   │   ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4<br>
                  │   │   │   ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4.lease<br>
                  │   │   │   ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4.meta<br>
                  │   │   │   ├── e79a8821-bb4a-436a-902d-3876f107dd99<br>
                  │   │   │   ├── e79a8821-bb4a-436a-902d-3876f107dd99.lease<br>
                  │   │   │   └── e79a8821-bb4a-436a-902d-3876f107dd99.meta<br>
                  │   │   └── f5eacc6e-4f16-4aa5-99ad-53ac1cda75b7<br>
                  │   │       ├── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d<br>
                  │   │       ├── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d.lease<br>
                  │   │       └── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d.meta<br>
                  │   └── master<br>
                  │       ├── tasks<br>
                  │       └── vms<br>
                  └── __DIRECT_IO_TEST__<br>
                  <br>
                  engine.log:<br>
                  2016-04-26 06:37:57,715 INFO
                  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
                  (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] FINISH,
                  GetImageInfoVDSCommand, return:
                  org.ovirt.engine.core.common.businessentities.storage.DiskImage@d4b3ac2f,
                  log id: 7b693bad<br>
                  2016-04-26 06:37:57,724 INFO
                  [org.ovirt.engine.core.vdsbroker.irsbroker.GetVolumesListVDSCommand]
                  (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] START,
                  GetVolumesListVDSCommand(
                  StoragePoolDomainAndGroupIdBaseVDSCommandParameters:{runAsync=&#39;true&#39;,
                  storagePoolId=&#39;ed338557-5995-4634-97e2-15454a9d8800&#39;,
                  ignoreFailoverLimit=&#39;false&#39;,
                  storageDomainId=&#39;5e1a37cf-933d-424c-8e3d-eb9e40b690a7&#39;,
                  imageGroupId=&#39;c52e4e02-dc6c-4a77-a184-9fcab88106c2&#39;}),
                  log id: 741b9214<br>
                  2016-04-26 06:37:58,748 INFO
                  [org.ovirt.engine.core.vdsbroker.irsbroker.GetVolumesListVDSCommand]
                  (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] FINISH,
                  GetVolumesListVDSCommand, return:
                  [90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa,
                  766a15b9-57db-417d-bfa0-beadbbb84ad2,
                  34e46104-8fad-4510-a5bf-0730b97a6659], log id:
                  741b9214<br>
                  <br>
                  _______________________________________________<br>
                  Users mailing list<br>
                  <a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
                  <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
                </blockquote>
              </div>
              <br>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br></div></div>