Re: Failed to update VMs/Templates OVF data, cannot change SPM
by Albl, Oliver
Hi all,
Does anybody have an idea how to address this? There is also a side effect: I cannot change the SPM ("Error while executing action: Cannot force select SPM. The Storage Pool has running tasks.").
All the best,
Oliver
From: Albl, Oliver
Sent: Wednesday, June 13, 2018 12:32
To: users(a)ovirt.org
Subject: Failed to update VMs/Templates OVF data
Hi,
I have an FC storage domain reporting the following messages every hour:
VDSM command SetVolumeDescriptionVDS failed: Could not acquire resource. Probably resource factory threw an exception.: ()
Failed to update OVF disks cb04b55c-10fb-46fe-b9de-3c133a94e6a5, OVF data isn't updated on those OVF stores (Data Center VMTEST, Storage Domain VMHOST_LUN_62).
Failed to update VMs/Templates OVF data for Storage Domain VMHOST_LUN_62 in Data Center VMTEST.
Trying to manually update the OVF results in "Error while executing action UpdateOvfStoreForStorageDomain: Internal Engine Error".
I run oVirt 4.2.3.5-1.el7.centos on CentOS 7.5 (3.10.0-862.3.2.el7.x86_64) with vdsm-4.20.27.1-1.el7.centos.x86_64.
Engine log:
2018-06-13 12:15:35,649+02 WARN [org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (default task-12) [092d8f27-c0a4-4d78-a8cb-f4738aff71e6] The message key 'UpdateOvfStoreForStorageDomain' is missing from 'bundles/ExecutionMessages'
2018-06-13 12:15:35,655+02 INFO [org.ovirt.engine.core.bll.storage.domain.UpdateOvfStoreForStorageDomainCommand] (default task-12) [092d8f27-c0a4-4d78-a8cb-f4738aff71e6] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3edb5295-3797-4cd0-9b43-f46ec1ee7b14=OVF_UPDATE, 373efd46-8aea-4d0e-96cc-1da0debf72d0=STORAGE]', sharedLocks=''}'
2018-06-13 12:15:35,660+02 INFO [org.ovirt.engine.core.bll.storage.domain.UpdateOvfStoreForStorageDomainCommand] (default task-12) [092d8f27-c0a4-4d78-a8cb-f4738aff71e6] Running command: UpdateOvfStoreForStorageDomainCommand internal: false. Entities affected : ID: 373efd46-8aea-4d0e-96cc-1da0debf72d0 Type: StorageAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2018-06-13 12:15:35,670+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (default task-12) [4fd5b59a] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected : ID: 3edb5295-3797-4cd0-9b43-f46ec1ee7b14 Type: StoragePool
2018-06-13 12:15:35,674+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (default task-12) [4fd5b59a] Attempting to update VM OVFs in Data Center 'VMTEST'
2018-06-13 12:15:35,678+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (default task-12) [4fd5b59a] Successfully updated VM OVFs in Data Center 'VMTEST'
2018-06-13 12:15:35,678+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (default task-12) [4fd5b59a] Attempting to update template OVFs in Data Center 'VMTEST'
2018-06-13 12:15:35,678+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (default task-12) [4fd5b59a] Successfully updated templates OVFs in Data Center 'VMTEST'
2018-06-13 12:15:35,678+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (default task-12) [4fd5b59a] Attempting to remove unneeded template/vm OVFs in Data Center 'VMTEST'
2018-06-13 12:15:35,680+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (default task-12) [4fd5b59a] Successfully removed unneeded template/vm OVFs in Data Center 'VMTEST'
2018-06-13 12:15:35,684+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] (default task-12) [4fd5b59a] Lock freed to object 'EngineLock:{exclusiveLocks='[3edb5295-3797-4cd0-9b43-f46ec1ee7b14=OVF_UPDATE, 373efd46-8aea-4d0e-96cc-1da0debf72d0=STORAGE]', sharedLocks=''}'
2018-06-13 12:15:35,704+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand] (default task-12) [24485c23] Lock Acquired to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[3edb5295-3797-4cd0-9b43-f46ec1ee7b14=OVF_UPDATE]'}'
2018-06-13 12:15:35,714+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand] (default task-12) [24485c23] Running command: ProcessOvfUpdateForStorageDomainCommand internal: true. Entities affected : ID: 373efd46-8aea-4d0e-96cc-1da0debf72d0 Type: StorageAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2018-06-13 12:15:35,724+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] START, SetVolumeDescriptionVDSCommand( SetVolumeDescriptionVDSCommandParameters:{storagePoolId='3edb5295-3797-4cd0-9b43-f46ec1ee7b14', ignoreFailoverLimit='false', storageDomainId='373efd46-8aea-4d0e-96cc-1da0debf72d0', imageGroupId='cb04b55c-10fb-46fe-b9de-3c133a94e6a5', imageId='a1e7554d-530c-4c07-a4b5-459a1c509e39'}), log id: 747d674f
2018-06-13 12:15:35,724+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] -- executeIrsBrokerCommand: calling 'setVolumeDescription', parameters:
2018-06-13 12:15:35,724+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] ++ spUUID=3edb5295-3797-4cd0-9b43-f46ec1ee7b14
2018-06-13 12:15:35,724+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] ++ sdUUID=373efd46-8aea-4d0e-96cc-1da0debf72d0
2018-06-13 12:15:35,724+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] ++ imageGroupGUID=cb04b55c-10fb-46fe-b9de-3c133a94e6a5
2018-06-13 12:15:35,724+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] ++ volUUID=a1e7554d-530c-4c07-a4b5-459a1c509e39
2018-06-13 12:15:35,724+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] ++ description={"Updated":false,"Last Updated":"Thu May 24 12:02:22 CEST 2018","Storage Domains":[{"uuid":"373efd46-8aea-4d0e-96cc-1da0debf72d0"}],"Disk Description":"OVF_STORE"}
2018-06-13 12:15:35,827+02 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] Failed in 'SetVolumeDescriptionVDS' method
2018-06-13 12:15:35,831+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-12) [24485c23] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command SetVolumeDescriptionVDS failed: Could not acquire resource. Probably resource factory threw an exception.: ()
2018-06-13 12:15:35,831+02 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] Command 'SetVolumeDescriptionVDSCommand( SetVolumeDescriptionVDSCommandParameters:{storagePoolId='3edb5295-3797-4cd0-9b43-f46ec1ee7b14', ignoreFailoverLimit='false', storageDomainId='373efd46-8aea-4d0e-96cc-1da0debf72d0', imageGroupId='cb04b55c-10fb-46fe-b9de-3c133a94e6a5', imageId='a1e7554d-530c-4c07-a4b5-459a1c509e39'})' execution failed: IRSGenericException: IRSErrorException: Failed to SetVolumeDescriptionVDS, error = Could not acquire resource. Probably resource factory threw an exception.: (), code = 855
2018-06-13 12:15:35,831+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand] (default task-12) [24485c23] FINISH, SetVolumeDescriptionVDSCommand, log id: 747d674f
2018-06-13 12:15:35,831+02 WARN [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand] (default task-12) [24485c23] failed to update domain '373efd46-8aea-4d0e-96cc-1da0debf72d0' ovf store disk 'cb04b55c-10fb-46fe-b9de-3c133a94e6a5'
2018-06-13 12:15:35,834+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-12) [24485c23] EVENT_ID: UPDATE_FOR_OVF_STORES_FAILED(1,016), Failed to update OVF disks cb04b55c-10fb-46fe-b9de-3c133a94e6a5, OVF data isn't updated on those OVF stores (Data Center VMTEST, Storage Domain HOST_LUN_62).
2018-06-13 12:15:35,843+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-12) [24485c23] EVENT_ID: UPDATE_OVF_FOR_STORAGE_DOMAIN_FAILED(190), Failed to update VMs/Templates OVF data for Storage Domain VMHOST_LUN_62 in Data Center VMTEST.
2018-06-13 12:15:35,846+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand] (default task-12) [24485c23] Lock freed to object 'EngineLock:{exclusiveLocks='[]', sharedLocks='[3edb5295-3797-4cd0-9b43-f46ec1ee7b14=OVF_UPDATE]'}'
2018-06-13 12:15:36,031+02 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-60) [24485c23] Command 'ProcessOvfUpdateForStorageDomain' id: 'a887910e-39a1-4120-a29b-76741ade8bf6' child commands '[]' executions were completed, status 'SUCCEEDED'
2018-06-13 12:15:37,052+02 INFO [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [24485c23] Ending command 'org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand' successfully.
2018-06-13 12:15:37,059+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-31) [24485c23] EVENT_ID: USER_UPDATE_OVF_STORE(199), OVF_STORE for domain VMHOST_LUN_62 was updated by <user>@>domain>@<DOMAIN>-authz.
vdsm.log
2018-06-13 12:15:35,727+0200 INFO (jsonrpc/7) [vdsm.api] START setVolumeDescription(sdUUID=u'373efd46-8aea-4d0e-96cc-1da0debf72d0', spUUID=u'3edb5295-3797-4cd0-9b43-f46ec1ee7b14', imgUUID=u'cb04b55c-10fb-46fe-b9de-3c133a94e6a5', volUUID=u'a1e7554d-530c-4c07-a4b5-459a1c509e39', description=u'{"Updated":false,"Last Updated":"Thu May 24 12:02:22 CEST 2018","Storage Domains":[{"uuid":"373efd46-8aea-4d0e-96cc-1da0debf72d0"}],"Disk Description":"OVF_STORE"}', options=None) from=::ffff:<IP>,54686, flow_id=24485c23, task_id=70941873-0296-4ed0-94c8-b51290cd6963 (api:46)
2018-06-13 12:15:35,825+0200 WARN (jsonrpc/7) [storage.ResourceManager] Resource factory failed to create resource '01_img_373efd46-8aea-4d0e-96cc-1da0debf72d0.cb04b55c-10fb-46fe-b9de-3c133a94e6a5'. Canceling request. (resourceManager:543)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 539, in registerResource
    obj = namespaceObj.factory.createResource(name, lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line 193, in createResource
    lockType)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line 122, in __getResourceCandidatesList
    imgUUID=resourceName)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 206, in getChain
    if len(uuidlist) == 1 and srcVol.isShared():
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1434, in isShared
    return self._manifest.isShared()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 141, in isShared
    return self.getVolType() == sc.type2name(sc.SHARED_VOL)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 134, in getVolType
    self.voltype = self.getMetaParam(sc.VOLTYPE)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 118, in getMetaParam
    meta = self.getMetadata()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 112, in getMetadata
    md = VolumeMetadata.from_lines(lines)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", line 103, in from_lines
    "Missing metadata key: %s: found: %s" % (e, md))
MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing metadata key: 'DOMAIN': found: {}",)
2018-06-13 12:15:35,825+0200 WARN (jsonrpc/7) [storage.ResourceManager.Request] (ResName='01_img_373efd46-8aea-4d0e-96cc-1da0debf72d0.cb04b55c-10fb-46fe-b9de-3c133a94e6a5', ReqID='dc9ebbc2-5cfa-447d-b2be-40ed2cf81992') Tried to cancel a processed request (resourceManager:187)
2018-06-13 12:15:35,825+0200 INFO (jsonrpc/7) [vdsm.api] FINISH setVolumeDescription error=Could not acquire resource. Probably resource factory threw an exception.: () from=::ffff:<IP>,54686, flow_id=24485c23, task_id=70941873-0296-4ed0-94c8-b51290cd6963 (api:50)
2018-06-13 12:15:35,825+0200 ERROR (jsonrpc/7) [storage.TaskManager.Task] (Task='70941873-0296-4ed0-94c8-b51290cd6963') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in setVolumeDescription
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1294, in setVolumeDescription
    pool.setVolumeDescription(sdUUID, imgUUID, volUUID, description)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 2011, in setVolumeDescription
    with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 1025, in acquireResource
    return _manager.acquireResource(namespace, name, lockType, timeout=timeout)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 475, in acquireResource
    raise se.ResourceAcqusitionFailed()
ResourceAcqusitionFailed: Could not acquire resource. Probably resource factory threw an exception.: ()
2018-06-13 12:15:35,826+0200 INFO (jsonrpc/7) [storage.TaskManager.Task] (Task='70941873-0296-4ed0-94c8-b51290cd6963') aborting: Task is aborted: u'Could not acquire resource. Probably resource factory threw an exception.: ()' - code 100 (task:1181)
2018-06-13 12:15:35,826+0200 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH setVolumeDescription error=Could not acquire resource. Probably resource factory threw an exception.: () (dispatcher:82)
2018-06-13 12:15:35,826+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Volume.setDescription failed (error 855) in 0.10 seconds (__init__:573)
2018-06-13 12:15:38,953+0200 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:<IP>,54666 (api:46)
2018-06-13 12:15:38,956+0200 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': (suppressed)} from=::ffff:<IP>,54666 (api:52)
2018-06-13 12:15:38,957+0200 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:573)
2018-06-13 12:15:39,406+0200 INFO (jsonrpc/4) [vdsm.api] START getSpmStatus(spUUID=u'3edb5295-3797-4cd0-9b43-f46ec1ee7b14', options=None) from=::ffff:<IP>,54666, task_id=eabfe183-dfb0-4982-b7ea-beacca74aeef (api:46)
2018-06-13 12:15:39,410+0200 INFO (jsonrpc/4) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 2, 'spmStatus': 'SPM', 'spmLver': 20L}} from=::ffff:<IP>,54666, task_id=eabfe183-dfb0-4982-b7ea-beacca74aeef (api:52)
2018-06-13 12:15:39,410+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:573)
2018-06-13 12:15:39,416+0200 INFO (jsonrpc/1) [vdsm.api] START getStoragePoolInfo(spUUID=u'3edb5295-3797-4cd0-9b43-f46ec1ee7b14', options=None) from=::ffff:<IP>,54686, task_id=b2003a6f-dd74-47ab-b4f0-95ffb54dc51d (api:46)
2018-06-13 12:15:39,420+0200 INFO (jsonrpc/1) [vdsm.api] FINISH getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': '', 'pool_status': 'connected', 'lver': 20L, 'domains': u'373efd46-8aea-4d0e-96cc-1da0debf72d0:Active,9ed4b0d2-c555-4b27-8f88-75c47a99ad98:Attached,efd78d36-c395-4e9a-a46e-6059fa53756d:Active,3675435e-851e-4236-81da-fce1cc027238:Active', 'master_uuid': 'efd78d36-c395-4e9a-a46e-6059fa53756d', 'version': '4', 'spm_id': 2, 'type': 'FCP', 'master_ver': 12}, 'dominfo': {u'373efd46-8aea-4d0e-96cc-1da0debf72d0': {'status': u'Active', 'diskfree': '8722541707264', 'isoprefix': '', 'alerts': [], 'disktotal': '8795690369024', 'version': 4}, u'9ed4b0d2-c555-4b27-8f88-75c47a99ad98': {'status': u'Attached', 'isoprefix': '', 'alerts': []}, u'efd78d36-c395-4e9a-a46e-6059fa53756d': {'status': u'Active', 'diskfree': '8718783610880', 'isoprefix': '', 'alerts': [], 'disktotal': '8795690369024', 'version': 4}, u'3675435e-851e-4236-81da-fce1cc027238': {'status': u'Active', 'diskfree': '8713280684032', 'isoprefix': '', 'alerts': [], 'disktotal': '8795690369024', 'version': 4}}} from=::ffff:<IP>,54686, task_id=b2003a6f-dd74-47ab-b4f0-95ffb54dc51d (api:52)
2018-06-13 12:15:39,421+0200 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.getInfo succeeded in 0.01 seconds (__init__:573)
All the best,
Oliver
Unable to backend oVirt with Cinder
by Logan Kuhn
I've got Cinder configured and pointed at Ceph for its back-end storage. I can run ceph commands on the Cinder machine, and Cinder is configured for noauth (I've also tried it with Keystone for auth). I can run various cinder commands and they return as expected.
When I configure it in oVirt it adds the external provider fine, but when I go to create a disk it doesn't populate the volume type field; it's just empty. The corresponding Cinder commands (cinder type-list and cinder type-show <name>) return fine, and the type is public.
oVirt and Cinder are on the same host, so it isn't a firewall issue.
Cinder config:
[DEFAULT]
rpc_backend = rabbit
#auth_strategy = keystone
auth_strategy = noauth
enabled_backends = ceph
#glance_api_servers = http://10.128.7.252:9292
#glance_api_version = 2
#[keystone_authtoken]
#auth_uri = http://10.128.7.252:5000/v3
#auth_url = http://10.128.7.252:35357/v3
#auth_type = password
#memcached_servers = localhost:11211
#project_domain_name = default
#user_domain_name = default
#project_name = services
#username = user
#password = pass
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = ovirt-images
rbd_user = cinder
rbd_secret_uuid = <secret>
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
#glance_api_version = 2
[database]
connection = postgresql://user:pass@10.128.2.33/cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_rabbit]
rabbit_host = localhost
rabbit_port = 5672
rabbit_userid = user
rabbit_password = pass
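For reference, volume types can also be listed directly against the Cinder v3 REST API, which would help tell a Cinder-side problem from an oVirt-side one. This is only a rough sketch; the port, project segment and noauth token format are assumptions to adjust:
# Hedged sketch: list Cinder volume types over the v3 REST API.
# Assumptions: API on the default port 8776, noauth middleware accepting an
# arbitrary "user:project" style token, arbitrary project segment in the URL.
import requests

base = "http://10.128.7.252:8776/v3/admin"
headers = {"X-Auth-Token": "admin:admin"}

resp = requests.get(base + "/types", headers=headers)
resp.raise_for_status()
for vt in resp.json().get("volume_types", []):
    print(vt.get("name"), vt.get("id"), "public:", vt.get("is_public"))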
Regards,
Logan
Creating snapshot of a subset of disks
by Gianluca Cecchi
Hello,
I'm trying to see how to create a snapshot of a VM, but only of a subset of its disks (actually it will be only the bootable one).
Taking the examples at
https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples
I can put together something like this:
# Get the reference to the service that manages the virtual machines:
vms_service = system_service.vms_service()

# Find the virtual machine and put it into vm:
vm = vms_service.list(
    search='name=%s' % MY_VM_NAME,
    all_content=True,
)[0]
logging.info(
    'Found virtual machine \'%s\', the id is \'%s\'.',
    vm.name, vm.id,
)

# Find the service that manages this virtual machine:
vm_service = vms_service.vm_service(vm.id)

# Send the request to create the snapshot. Note that this will return
# before the snapshot is completely created, so we will later need to
# wait till the snapshot is completely created.
snaps_service = vm_service.snapshots_service()
snap = snaps_service.add(
    snapshot=types.Snapshot(
        description=snap_description,
        persist_memorystate=False,
    ),
)
This makes a snapshot of all the disks of the VM.
In my case I can first filter the bootable disk with something like this:
# Locate the service that manages the disk attachments of the virtual
# machine:
disk_attachments_service = vm_service.disk_attachments_service()

# Retrieve the list of disks attachments, and print the disk details.
disk_attachments = disk_attachments_service.list()
for disk_attachment in disk_attachments:
    disk = connection.follow_link(disk_attachment.disk)
    print("name: %s" % disk.name)
    print("id: %s" % disk.id)
    print("status: %s" % disk.status)
    print("bootable: %s" % disk_attachment.bootable)
    print("provisioned_size: %s" % disk.provisioned_size)
So, for an example VM with two disks, I get this printout:
name: padnpro_bootdisk
id: c122978a-70d7-48aa-97c5-2f17d4603b1e
status: ok
bootable: True
provisioned_size: 59055800320
name: padnpro_imp_Disk1
id: 5454b137-fb2c-46a7-b345-e6d115802582
status: ok
bootable: False
provisioned_size: 10737418240
But I haven't found the syntax to specify a disk list in the block where I create the snapshot of the VM:
snap = snaps_service.add(
    snapshot=types.Snapshot(
        description=snap_description,
        persist_memorystate=False,
        disk... ? ? ?
    ),
)
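For reference, a hedged sketch of what that block might look like if the SDK's types.Snapshot accepts a disk_attachments list (present in the 4.2-era API as far as I can tell; treat the parameter name and availability as assumptions to verify against your ovirtsdk4 version):
# Hedged sketch: snapshot only the bootable disk by listing its id explicitly.
# Assumes types.Snapshot exposes a disk_attachments parameter; reuses the
# disk_attachments_service, snaps_service and snap_description defined above.
boot_disks = [
    connection.follow_link(da.disk)
    for da in disk_attachments_service.list()
    if da.bootable
]

snap = snaps_service.add(
    snapshot=types.Snapshot(
        description=snap_description,
        persist_memorystate=False,
        disk_attachments=[
            types.DiskAttachment(disk=types.Disk(id=d.id))
            for d in boot_disks
        ],
    ),
)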
Any help in this direction?
Thanks,
Gianluca
oVirt 4.2 and I/O threads configuration
by Gianluca Cecchi
Hello,
In oVirt 4.1 I could enable I/O threads for a VM, and there was a field where I could define how many I/O threads (i.e. virtual SCSI controllers from the VM's point of view).
In 4.2 I now see only the option to enable them, but not to specify the number.
Next to the setting there is a symbol, and a mouseover says "the field is not attached to any instance type".
See this screenshot for the Edit VM -> Resource Allocation content:
https://drive.google.com/file/d/1JzCiUBnWxS9fiY64_p5TJMY0uO5ZZUID/view?us...
Any hints for setting the number?
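For what it's worth, a hedged SDK sketch for setting the count outside the UI, assuming types.Vm exposes an io/threads attribute (the attribute name, engine URL and VM name below are assumptions/placeholders):
# Hedged sketch: set the number of I/O threads for an existing VM via the
# Python SDK. Connection details and VM name are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vms_service.vm_service(vm.id).update(
    types.Vm(io=types.Io(threads=2)),  # assumption: types.Io carries the thread count
)

connection.close()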
Thanks in advance,
Gianluca
Re: [spice-list] Fwd: Fwd: VM hanging at boot - [drm] Initialized qxl
by Christophe Fergeau
On Wed, Jun 20, 2018 at 12:38:00PM +0300, Yaniv Kaul wrote:
> On Wed, Jun 20, 2018 at 12:26 PM, Christophe Fergeau <cfergeau(a)redhat.com>
> wrote:
> > > > ---------- Forwarded message ----------
> > > > From: Leo David <leoalex(a)gmail.com>
> > > > Date: Wed, Jun 13, 2018 at 8:24 PM
> > > > Subject: VM hanging at boot - [drm] Initialized qxl
> > > > To: users(a)ovirt.org
> > > >
> > > >
> > > > Hello everyone,
> > > > I have some CentOS 7 VMs created from a template (CentOS 7 Generic Cloud
> > > > Image v1802 for x86_64).
> > > > I have allocated plenty of resources to them, but they all have this
> > > > issue when starting: they hang with the following message in the console
> > > > for about 5-7 minutes.
> > > >
> > > > [drm] Initialized qxl 0.1.0 20120117 for 0000:00:02:0 on mirror 0
> > > >
> > > > After a while, vm eventually boots up...
> > > > Running self-hosted ovirt-node 4.2
> > > > Does anyone know what could be the issue for this behavior ?
> > > > Thank you !
> >
> > There were recently some hangs at startup due to a kernel bug in the qxl
> > driver related to atomic modesetting; no idea if CentOS could be
> > impacted by this or not. Does the issue go away when using a graphics
> > device other than QXL?
> >
>
> Can you please respond on the oVirt users mailing list?
I added it to cc:
Christophe
ovirt upgrade 4.1 -> 4.2: host bricks down
by Alex K
Hi all,
I have a two-node oVirt cluster for testing, with a self-hosted engine on top of Gluster.
The cluster was running 4.1. After the upgrade to 4.2, which generally went smoothly, I am seeing that the bricks of one of the hosts (v1) are detected as down, while Gluster is fine when checked from the command line and all volumes are mounted.
Below is the error that the engine logs:
2018-06-17 00:21:26,309+03 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2)
[98d7e79] Error while refreshing brick statuses for volume 'vms' of cluster
'test': null
2018-06-17 00:21:26,318+03 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
(DefaultQuartzScheduler2) [98d7e79] Command
'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = v0.test-group.com,
VdsIdVDSCommandParametersBase:{hostId='d5a96118-ca49-411f-86cb-280c7f9c421f'})'
execution failed: null
2018-06-17 00:21:26,323+03 ERROR
[org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
(DefaultQuartzScheduler2) [98d7e79] Command
'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = v1.test-group.com,
VdsIdVDSCommandParametersBase:{hostId='12dfea4a-8142-484e-b912-0cbd5f281aba'})'
execution failed: null
2018-06-17 00:21:27,015+03 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler9)
[426e7c3d] Failed to acquire lock and wait lock
'EngineLock:{exclusiveLocks='[00000002-0002-0002-0002-00000000017a=GLUSTER]',
sharedLocks=''}'
2018-06-17 00:21:27,926+03 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler2)
[98d7e79] Error while refreshing brick statuses for volume 'engine' of
cluster 'test': null
Apart from this, everything else is operating normally and VMs are running on both hosts.
Any ideas on how to isolate this issue?
Thanks,
Alex
cloud-init reverting static network settings to DHCP on shutdown and restart
by geoff.carr@beazley.com
Setting static network settings in the "Initial Run" section works fine when deploying a VM from a template. When using "Run" to start the VM, customization works as expected. On a reboot the network settings are maintained; however, if the VM is shut down and restarted, the settings revert to DHCP. When editing the VM, the cloud-init settings are still set and visible in the "Initial Run" section.
cloud-init (0.7.9-24.el7.centos) - Previously 0.7.9-20.el7.centos but upgraded to see if it would fix the issue.
oVirt engine (4.2.4.1-1.el7)
Guest OS (CentOS Linux release 7.4.1708)
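For reference, the workaround sometimes suggested for this symptom is to stop cloud-init from managing the network after the initial provisioning; a hedged sketch only (the file name is conventional, and whether it actually addresses this regression is an assumption):
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg (inside the guest)
# Hedged sketch: prevents cloud-init from rewriting the ifcfg files on later boots.
network:
  config: disabled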
There is another thread that reports exactly the same issue and behavior here:
https://lists.ovirt.org/pipermail/users/2018-March/087860.html
Grateful for any thoughts or assistance.
Regards
Geoff
Hardening oVirt Engine
by Punaatua PK
Hello,
we are subject to PCI-DSS. I have some questions. We currently have setup oVirt in our environnement.
We created 2 Datacenter.
- one with a cluster with a hosted engine on Gluster (hyperconverged environment), which represents the "LAN" part
- one with a cluster with Gluster storage, which is the DMZ
Under PCI DSS we have to secure communication (use HTTPS as much as possible). I saw that ovirt-ha-agent (on hosted-engine-capable hosts) checks the status of the engine by sending a GET request to the hosted engine on port 80 (the same check that hosted-engine --vm-status does, in fact).
Since oVirt 4.2.2, with the introduction of Gluster eventing, a new flow (an HTTP POST request) is needed from the Gluster nodes to the engine. In my case, this is a flow from the DMZ to the LAN part over plain HTTP (not secure).
Here is my question: is it possible to harden this part of the engine?
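For context, a hedged sketch of how the Gluster-side webhook behind this flow is managed (commands from the glusterfs-events package; whether the engine accepts the webhook over HTTPS is exactly the open question, so the TLS re-registration below is an assumption and the URLs are placeholders):
# Hedged sketch, run on a Gluster node (glusterfs-events package).
gluster-eventsapi status                        # shows the registered webhooks
gluster-eventsapi webhook-test <webhook-url>    # checks reachability of the engine endpoint
# Swap the plain-HTTP webhook for an HTTPS one, if the engine exposes the same
# events endpoint over TLS:
gluster-eventsapi webhook-del <http-webhook-url>
gluster-eventsapi webhook-add <https-webhook-url>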
Another question, out of PCI scope: events like warnings and errors in the dashboard are cleaned each day. I tried to find which process does that on the engine (I looked into /etc/cron.daily, the root crontab, etc.) without success. Is there any maintenance task that runs periodically? Could we have a list of all the engine's tasks (regularly checking the status of hosts, VMs, storage) and their frequency?
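On the event-cleanup point, a hedged sketch of where to look: the engine appears to handle the daily cleanup internally rather than via cron, and the retention is exposed through engine-config (the key names below are assumptions; confirm them against the full list first):
# Hedged sketch, run on the engine host.
engine-config -l | grep -i audit          # list the audit/event-related keys
engine-config -g AuditLogAgingThreshold   # assumed key: days of event retention
engine-config -g AuditLogCleanupTime      # assumed key: time of day the cleanup runs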
I would appreciate the help. (Great, great product, oVirt!) Thank you for your work! We used to manage KVM hypervisors as standalone machines, without all the power that libvirt provides. No need to spend a lot of money on licensed products (vSphere and co.).