[ovirt-users] Unregistered disks - snapshot images present (Was Re: Import storage domain - disks not listed)

Sahina Bose sabose at redhat.com
Fri May 6 12:08:05 UTC 2016


Hi,

Back to Import Storage domain questions -

To ensure consistency, we snapshot the running VMs prior to replicating 
the gluster volume to a central site. The VMs have been set up such that 
OS disks are on a gluster volume called "vmstore" and non-OS disks are 
on a gluster volume called "data".
For backup, we are only interested in "data" - so only this storage 
domain is imported at the backup recovery site.
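
(For context, the replication itself is a plain gluster geo-replication 
session from the primary site - roughly the following, where the slave 
host and slave volume names are placeholders for our actual setup:

    # one-time creation and start of the session for the "data" volume
    gluster volume geo-replication data backup-host::data-slave create push-pem
    gluster volume geo-replication data backup-host::data-slave start

    # checked around the VM snapshots to get a consistent sync point
    gluster volume geo-replication data backup-host::data-slave status detail
)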

Since the "data" storage domain does not contain all the disks, the VMs 
cannot be imported - that's expected.

Retrieving the backed-up disks does not seem possible either, since the 
disks have snapshots. How do we work around this?

thanks!



On 05/03/2016 03:21 PM, Maor Lipchuk wrote:
> There is bug https://bugzilla.redhat.com/1270562, which asks for a way to
> force an OVF_STORE update, but it is not yet implemented.
>
> On Tue, May 3, 2016 at 8:52 AM, Sahina Bose <sabose at redhat.com> wrote:
>>
>> On 05/02/2016 09:36 PM, Maor Lipchuk wrote:
>>> On Mon, May 2, 2016 at 5:45 PM, Sahina Bose <sabose at redhat.com> wrote:
>>>>
>>>>
>>>> On 05/02/2016 05:57 PM, Maor Lipchuk wrote:
>>>>
>>>>
>>>>
>>>> On Mon, May 2, 2016 at 1:08 PM, Sahina Bose <sabose at redhat.com> wrote:
>>>>>
>>>>>
>>>>> On 05/02/2016 03:15 PM, Maor Lipchuk wrote:
>>>>>
>>>>>
>>>>>
>>>>> On Mon, May 2, 2016 at 12:29 PM, Sahina Bose <sabose at redhat.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 05/01/2016 05:33 AM, Maor Lipchuk wrote:
>>>>>>
>>>>>> Hi Sahina,
>>>>>>
>>>>>> The disks with snapshots should be part of the VMs; once you
>>>>>> register those VMs, you should see those disks in the Disks sub-tab.
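>>>>>> For example, something along these lines (a rough sketch - the engine
>>>>>> URL, credentials and cluster id below are placeholders):
>>>>>>
>>>>>>   # list the VMs the imported domain knows about but the engine does not
>>>>>>   curl -k -u 'admin@internal:password' \
>>>>>>        'https://engine/api/storagedomains/<sd_id>/vms;unregistered'
>>>>>>
>>>>>>   # register one of them into a cluster
>>>>>>   curl -k -u 'admin@internal:password' -X POST \
>>>>>>        -H 'Content-Type: application/xml' \
>>>>>>        -d '<action><cluster id="<cluster_id>"/></action>' \
>>>>>>        'https://engine/api/storagedomains/<sd_id>/vms/<vm_id>/register'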
>>>>>>
>>>>>>
>>>>>> Maor,
>>>>>>
>>>>>> I was unable to import the VM, which prompted the question - I assumed we
>>>>>> had to register the disks first. So maybe I need to troubleshoot why I
>>>>>> could not import the VMs from the domain first.
>>>>>> It fails with the error "Image does not exist". Where does it look for the
>>>>>> volume IDs to pass to GetImageInfoVDSCommand - the OVF disk?
>>>>>>
>>>>>>
>>>>>> In engine.log
>>>>>>
>>>>>> 2016-05-02 04:15:14,812 ERROR
>>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
>>>>>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>>>>>> IrsBroker::getImageInfo::Failed getting image info
>>>>>> imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c' does not exist on
>>>>>> domainName='sahinaslave', domainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7',
>>>>>> error code: 'VolumeDoesNotExist', message: Volume does not exist:
>>>>>> (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
>>>>>> 2016-05-02 04:15:14,814 WARN
>>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
>>>>>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>>>>>> executeIrsBrokerCommand: getImageInfo on
>>>>>> '6f4da17a-05a2-4d77-8091-d2fca3bbea1c' threw an exception - assuming image
>>>>>> doesn't exist: IRSGenericException: IRSErrorException: VolumeDoesNotExist
>>>>>> 2016-05-02 04:15:14,814 INFO
>>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.DoesImageExistVDSCommand]
>>>>>> (ajp-/127.0.0.1:8702-1) [32f0b27c]
>>>>>> FINISH, DoesImageExistVDSCommand, return: false, log id: 3366f39b
>>>>>> 2016-05-02 04:15:14,814 WARN
>>>>>> [org.ovirt.engine.core.bll.ImportVmFromConfigurationCommand]
>>>>>> (ajp-/127.0.0.1:8702-1) [32f0b27c] CanDoAction of action
>>>>>> 'ImportVmFromConfiguration' failed for user admin@internal. Reasons:
>>>>>> VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST
>>>>>>
>>>>>>
>>>>>>
>>>>>> jsonrpc.Executor/2::DEBUG::2016-05-02
>>>>>> 13:45:13,903::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
>>>>>> 'Volume.getInfo' in bridge with
>>>>>> {u'imageID': u'c52e4e02-dc6c-4a77-a184-9fcab88106c2',
>>>>>> u'storagepoolID': u'46ac4975-a84e-4e76-8e73-7971d0dadf0b',
>>>>>> u'volumeID': u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',
>>>>>> u'storagedomainID': u'5e1a37cf-933d-424c-8e3d-eb9e40b690a7'}
>>>>>>
>>>>>> jsonrpc.Executor/2::DEBUG::2016-05-02
>>>>>> 13:45:13,910::fileVolume::535::Storage.Volume::(validateVolumePath) validate
>>>>>> path for 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
>>>>>> jsonrpc.Executor/2::ERROR::2016-05-02
>>>>>> 13:45:13,914::task::866::Storage.TaskManager.Task::(_setError)
>>>>>> Task=`94dba16f-f7eb-439e-95e2-a04b34b92f84`::Unexpected error
>>>>>> Traceback (most recent call last):
>>>>>>     File "/usr/share/vdsm/storage/task.py", line 873, in _run
>>>>>>       return fn(*args, **kargs)
>>>>>>     File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>>>>>>       res = f(*args, **kwargs)
>>>>>>     File "/usr/share/vdsm/storage/hsm.py", line 3162, in getVolumeInfo
>>>>>>       volUUID=volUUID).getInfo()
>>>>>>     File "/usr/share/vdsm/storage/sd.py", line 457, in produceVolume
>>>>>>       volUUID)
>>>>>>     File "/usr/share/vdsm/storage/glusterVolume.py", line 16, in __init__
>>>>>>       volUUID)
>>>>>>     File "/usr/share/vdsm/storage/fileVolume.py", line 58, in __init__
>>>>>>       volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
>>>>>>     File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
>>>>>>       self.validate()
>>>>>>     File "/usr/share/vdsm/storage/volume.py", line 194, in validate
>>>>>>       self.validateVolumePath()
>>>>>>     File "/usr/share/vdsm/storage/fileVolume.py", line 540, in validateVolumePath
>>>>>>       raise se.VolumeDoesNotExist(self.volUUID)
>>>>>> VolumeDoesNotExist: Volume does not exist:
>>>>>> (u'6f4da17a-05a2-4d77-8091-d2fca3bbea1c',)
>>>>>>
>>>>>> When I look at the tree output - there's no
>>>>>> 6f4da17a-05a2-4d77-8091-d2fca3bbea1c file.
>>>>>>
>>>>>>
>>>>>> │   │   ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
>>>>>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
>>>>>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
>>>>>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
>>>>>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
>>>>>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
>>>>>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
>>>>>> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
>>>>>> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
>>>>>> │   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
>>>>>
>>>>>
>>>>> Usually the "image does not exist" message appears when the VM's
>>>>> disk is managed in a different storage domain which was not imported yet.
>>>>>
>>>>> A few questions:
>>>>> 1. Were there any other storage domains which are not present in the
>>>>> setup?
>>>>>
>>>>>
>>>>> In the original RHEV instance, there were 3 storage domains:
>>>>> i) Hosted engine storage domain: engine
>>>>> ii) Master data domain: vmstore
>>>>> iii) An export domain: expVol (no data here)
>>>>>
>>>>> To my backup RHEV server, I only imported vmstore.
>>>>
>>>> Just to be sure, can you look at the server on which the engine storage
>>>> domain resides?
>>>> Is there a chance that 6f4da17a-05a2-4d77-8091-d2fca3bbea1c could be
>>>> there?
>>>>
>>>> Also, do you have an old engine log? Is there a chance to look for image
>>>> 6f4da17a-05a2-4d77-8091-d2fca3bbea1c in it, to see when it was created
>>>> and whether there were any problems in the process.
>>>>
>>>>
>>>>
>>>> I checked the engine logs, and it seems that image
>>>> 6f4da17a-05a2-4d77-8091-d2fca3bbea1c was removed as part of a snapshot merge.
>>>> Could it be that the OVF was not updated?
>>>
>>>
>>> The OVF_STORE is updated every 60 minutes, or when a Storage
>>> Domain is moved to maintenance.
>>> If the DR process was done before the OVF_STORE got updated, and the
>>> setup was then destroyed, that might be the reason for the missing
>>> image in the OVF.
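>>>
>>> (If it helps, that 60-minute interval is, as far as I remember, an
>>> engine-config default, so before a planned sync you could check or shorten
>>> it - key name from memory, please verify on your setup:
>>>
>>>   engine-config -g OvfUpdateIntervalInMinutes
>>>   engine-config -s OvfUpdateIntervalInMinutes=15   # needs an ovirt-engine restart
>>> )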
>>>
>>> The problem is that this image id is already part of the image chain
>>> configured in the VM's OVF.
>>> As a workaround, maybe you can create a new volume with the same UUID
>>> using vdsClient createVolume on the host,
>>> or merge all of the disk's snapshots through vdsClient; once there is
>>> only one volume, you can register the disk as a
>>> floating disk.
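>>>
>>> (Before trying either workaround, it is worth dumping the chain VDSM
>>> actually sees on the host - roughly the following, where the UUIDs are
>>> placeholders to be filled in from your own logs:
>>>
>>>   vdsClient -s 0 getVolumesList <sdUUID> <spUUID> <imgGroupUUID>
>>>   vdsClient -s 0 getVolumeInfo  <sdUUID> <spUUID> <imgGroupUUID> <volUUID>
>>> )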
>>
>> Is there a way to update the OVF_STORE via the API?
>> That way, before the DR process, we can ensure that the OVF_STORE refers
>> to the correct images.
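>>
>> (Or, until that bug is implemented, would deactivating and re-activating
>> the domain through REST - which moves it to maintenance and so should
>> trigger the update you describe - be a reasonable thing to script before
>> each sync? Roughly, with placeholders:
>>
>>   curl -k -u 'admin@internal:password' -X POST -H 'Content-Type: application/xml' \
>>        -d '<action/>' 'https://engine/api/datacenters/<dc_id>/storagedomains/<sd_id>/deactivate'
>>   curl -k -u 'admin@internal:password' -X POST -H 'Content-Type: application/xml' \
>>        -d '<action/>' 'https://engine/api/datacenters/<dc_id>/storagedomains/<sd_id>/activate'
>> )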
>>
>>
>>>
>>>> engine.log-20160426.gz:2016-04-26 01:54:28,576 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
>>>> (pool-5-thread-1) [6f94943f] START, MergeVDSCommand(HostName = rhsdev9,
>>>> MergeVDSCommandParameters:{runAsync='true',
>>>> hostId='b9a662bf-e05e-4e6a-9dfe-ec1be76d48e7',
>>>> vmId='e2b89e45-6f99-465d-aa08-fc4f746f0dd0',
>>>> storagePoolId='00000001-0001-0001-0001-000000000305',
>>>> storageDomainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7',
>>>> imageGroupId='c52e4e02-dc6c-4a77-a184-9fcab88106c2',
>>>> imageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c',
>>>> baseImageId='90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa',
>>>> topImageId='6f4da17a-05a2-4d77-8091-d2fca3bbea1c', bandwidth='0'}), log id:
>>>> 5a65a864
>>>> engine.log-20160426.gz:2016-04-26 01:55:03,629 INFO
>>>> [org.ovirt.engine.core.bll.MergeCommandCallback]
>>>> (DefaultQuartzScheduler_Worker-57) [6f94943f] Merge command has completed
>>>> for images
>>>> '90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa'..'6f4da17a-05a2-4d77-8091-d2fca3bbea1c'
>>>> engine.log-20160426.gz:2016-04-26 01:55:06,707 INFO
>>>> [org.ovirt.engine.core.bll.MergeStatusCommand] (pool-5-thread-2) [3c72e9f8]
>>>> Successfully removed volume(s): [6f4da17a-05a2-4d77-8091-d2fca3bbea1c]
>>>>
>>>>
>>>>
>>>>> 2. Can you look for the image id 6f4da17a-05a2-4d77-8091-d2fca3bbea1c on
>>>>> your storage server (search across all the rest of the storage domains)?
>>>>>
>>>>>
>>>>> No - there was no file with this id (checked in the original RHEV
>>>>> instance as well)
>>>>> [root@dhcp42-105 mnt]# pwd
>>>>> /rhev/data-center/mnt
>>>>> [root@dhcp42-105 mnt]# find . -name 6f4da17a-05a2-4d77-8091-d2fca3bbea1c
>>>>> [root@dhcp42-105 mnt]#
>>>>>
>>>>>
>>>>> Were any operations performed on the VM before the recovery, such
>>>>> as removing a disk, moving a disk, or creating a new disk?
>>>>>
>>>>>
>>>>> No. The only operation performed was the creation of a snapshot, and it
>>>>> completed before the recovery was attempted.
>>>>>
>>>>>
>>>>> Regards,
>>>>> Maor
>>>>>
>>>>>> Regarding floating disks (without snapshots), you can register them
>>>>>> through the REST API.
>>>>>> If you are working on the master branch, there should also be a sub-tab
>>>>>> dedicated to those.
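>>>>>> For example, roughly (engine URL and credentials are placeholders):
>>>>>>
>>>>>>   # list the unregistered disks on the imported domain
>>>>>>   curl -k -u 'admin@internal:password' \
>>>>>>        'https://engine/api/storagedomains/<sd_id>/disks;unregistered'
>>>>>>
>>>>>>   # register one of them by POSTing its id back to the same collection
>>>>>>   curl -k -u 'admin@internal:password' -X POST \
>>>>>>        -H 'Content-Type: application/xml' \
>>>>>>        -d '<disk id="<disk_id>"/>' \
>>>>>>        'https://engine/api/storagedomains/<sd_id>/disks;unregistered'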
>>>>>>
>>>>>> Regards,
>>>>>> Maor
>>>>>>
>>>>>> On Tue, Apr 26, 2016 at 1:44 PM, Sahina Bose <sabose at redhat.com> wrote:
>>>>>>> Hi all,
>>>>>>>
>>>>>>> I have a gluster volume used as a data storage domain, which is
>>>>>>> replicated to a slave gluster volume (say, slavevol) using gluster's
>>>>>>> geo-replication feature.
>>>>>>>
>>>>>>> Now, in a new oVirt instance, I use the import storage domain feature to
>>>>>>> import the slave gluster volume. The "VM Import" tab correctly lists the VMs
>>>>>>> that were present in my original gluster volume. However, the "Disks" tab is
>>>>>>> empty.
>>>>>>>
>>>>>>> GET
>>>>>>> https://new-ovitt/api/storagedomains/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/disks;unregistered
>>>>>>> -->
>>>>>>> <disks/>
>>>>>>>
>>>>>>>
>>>>>>> In the code, GetUnregisteredDiskQuery skips the image if
>>>>>>> volumesList.size() != 1, with a comment that it can't deal with snapshots.
>>>>>>>
>>>>>>> How do I recover the disks/images in this case?
>>>>>>>
>>>>>>>
>>>>>>> Further info:
>>>>>>>
>>>>>>> /rhev/data-center/mnt/glusterSD/10.70.40.112:_slavevol
>>>>>>> ├── 5e1a37cf-933d-424c-8e3d-eb9e40b690a7
>>>>>>> │   ├── dom_md
>>>>>>> │   │   ├── ids
>>>>>>> │   │   ├── inbox
>>>>>>> │   │   ├── leases
>>>>>>> │   │   ├── metadata
>>>>>>> │   │   └── outbox
>>>>>>> │   ├── images
>>>>>>> │   │   ├── 202efaa6-0d01-40f3-a541-10eee920d221
>>>>>>> │   │   │   ├── eb701046-6ee1-4c9d-b097-e51a8fd283e1
>>>>>>> │   │   │   ├── eb701046-6ee1-4c9d-b097-e51a8fd283e1.lease
>>>>>>> │   │   │   └── eb701046-6ee1-4c9d-b097-e51a8fd283e1.meta
>>>>>>> │   │   ├── c52e4e02-dc6c-4a77-a184-9fcab88106c2
>>>>>>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659
>>>>>>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.lease
>>>>>>> │   │   │   ├── 34e46104-8fad-4510-a5bf-0730b97a6659.meta
>>>>>>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2
>>>>>>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.lease
>>>>>>> │   │   │   ├── 766a15b9-57db-417d-bfa0-beadbbb84ad2.meta
>>>>>>> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa
>>>>>>> │   │   │   ├── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.lease
>>>>>>> │   │   │   └── 90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa.meta
>>>>>>> │   │   ├── c75de5b7-aa88-48d7-ba1b-067181eac6ae
>>>>>>> │   │   │   ├── ff09e16a-e8a0-452b-b95c-e160e68d09a9
>>>>>>> │   │   │   ├── ff09e16a-e8a0-452b-b95c-e160e68d09a9.lease
>>>>>>> │   │   │   └── ff09e16a-e8a0-452b-b95c-e160e68d09a9.meta
>>>>>>> │   │   ├── efa94a0d-c08e-4ad9-983b-4d1d76bca865
>>>>>>> │   │   │   ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7
>>>>>>> │   │   │   ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7.lease
>>>>>>> │   │   │   ├── 64e3913c-da91-447c-8b69-1cff1f34e4b7.meta
>>>>>>> │   │   │   ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4
>>>>>>> │   │   │   ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4.lease
>>>>>>> │   │   │   ├── 8174e8b4-3605-4db3-86a1-cb62c3a079f4.meta
>>>>>>> │   │   │   ├── e79a8821-bb4a-436a-902d-3876f107dd99
>>>>>>> │   │   │   ├── e79a8821-bb4a-436a-902d-3876f107dd99.lease
>>>>>>> │   │   │   └── e79a8821-bb4a-436a-902d-3876f107dd99.meta
>>>>>>> │   │   └── f5eacc6e-4f16-4aa5-99ad-53ac1cda75b7
>>>>>>> │   │       ├── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d
>>>>>>> │   │       ├── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d.lease
>>>>>>> │   │       └── 476bbfe9-1805-4c43-bde6-e7de5f7bd75d.meta
>>>>>>> │   └── master
>>>>>>> │       ├── tasks
>>>>>>> │       └── vms
>>>>>>> └── __DIRECT_IO_TEST__
>>>>>>>
>>>>>>> engine.log:
>>>>>>> 2016-04-26 06:37:57,715 INFO
>>>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand]
>>>>>>> (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] FINISH,
>>>>>>> GetImageInfoVDSCommand, return:
>>>>>>> org.ovirt.engine.core.common.businessentities.storage.DiskImage@d4b3ac2f,
>>>>>>> log id: 7b693bad
>>>>>>> 2016-04-26 06:37:57,724 INFO
>>>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVolumesListVDSCommand]
>>>>>>> (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] START,
>>>>>>> GetVolumesListVDSCommand(
>>>>>>> StoragePoolDomainAndGroupIdBaseVDSCommandParameters:{runAsync='true',
>>>>>>> storagePoolId='ed338557-5995-4634-97e2-15454a9d8800',
>>>>>>> ignoreFailoverLimit='false',
>>>>>>> storageDomainId='5e1a37cf-933d-424c-8e3d-eb9e40b690a7',
>>>>>>> imageGroupId='c52e4e02-dc6c-4a77-a184-9fcab88106c2'}), log id: 741b9214
>>>>>>> 2016-04-26 06:37:58,748 INFO
>>>>>>> [org.ovirt.engine.core.vdsbroker.irsbroker.GetVolumesListVDSCommand]
>>>>>>> (org.ovirt.thread.pool-6-thread-25) [5e6b7a53] FINISH,
>>>>>>> GetVolumesListVDSCommand, return:
>>>>>>> [90f1e26a-00e9-4ea5-9e92-2e448b9b8bfa, 766a15b9-57db-417d-bfa0-beadbbb84ad2,
>>>>>>> 34e46104-8fad-4510-a5bf-0730b97a6659], log id: 741b9214
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list
>>>>>>> Users at ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>



