[ovirt-users] Error creating a storage domain

Raz Tamir ratamir at redhat.com
Tue Jul 21 10:43:30 UTC 2015


Thanks.
I can see this on a newer vdsm version as well:



vdsm-python-zombiereaper-4.16.21-1.el7ev.noarch 
vdsm-4.16.21-1.el7ev.x86_64 
vdsm-python-4.16.21-1.el7ev.noarch 
vdsm-cli-4.16.21-1.el7ev.noarch 
vdsm-yajsonrpc-4.16.21-1.el7ev.noarch 
vdsm-hook-vhostmd-4.16.21-1.el7ev.noarch 
vdsm-hook-ethtool-options-4.16.21-1.el7ev.noarch 
ovirt-node-plugin-vdsm-0.2.0-25.el7ev.noarch 
vdsm-xmlrpc-4.16.21-1.el7ev.noarch 
vdsm-jsonrpc-4.16.21-1.el7ev.noarch 
vdsm-reg-4.16.21-1.el7ev.noarch 




You can track this bug for updates:

https://bugzilla.redhat.com/show_bug.cgi?id=1245147 






Thanks in advance, 
Raz Tamir 
ratamir at redhat.com 
RedHat Israel 
RHEV-M QE Storage team 

----- Original Message -----

From: "Jurriën Bloemen" <Jurrien.Bloemen at dmc.amcnetworks.com> 
To: "Raz Tamir" <ratamir at redhat.com> 
Cc: users at ovirt.org 
Sent: Tuesday, July 21, 2015 1:09:38 PM 
Subject: Re: [ovirt-users] Error creating a storage domain 

Hi Raz, 

4.16.10-8 

Kind regards, 

Jurriën Bloemen 

vdsm-cli-4.16.10-8.gitc937927.el7.noarch 
vdsm-4.16.10-8.gitc937927.el7.x86_64 
vdsm-python-zombiereaper-4.16.10-8.gitc937927.el7.noarch 
vdsm-xmlrpc-4.16.10-8.gitc937927.el7.noarch 
vdsm-jsonrpc-4.16.10-8.gitc937927.el7.noarch 
vdsm-reg-4.16.10-8.gitc937927.el7.noarch 
vdsm-hook-ethtool-options-4.16.10-8.gitc937927.el7.noarch 
ovirt-node-plugin-vdsm-0.2.2-5.el7.noarch 
vdsm-python-4.16.10-8.gitc937927.el7.noarch 
vdsm-yajsonrpc-4.16.10-8.gitc937927.el7.noarch 
vdsm-gluster-4.16.10-8.gitc937927.el7.noarch 


On 21-07-15 12:02, Raz Tamir wrote: 



Hi Jurriën, 
What is the host vdsm version? 




Thanks in advance, 
Raz Tamir 
ratamir at redhat.com 
RedHat Israel 
RHEV-M QE Storage team 

----- Original Message -----

From: "Jurriën Bloemen" <Jurrien.Bloemen at dmc.amcnetworks.com> 
To: users at ovirt.org 
Sent: Tuesday, July 21, 2015 10:57:10 AM 
Subject: Re: [ovirt-users] Error creating a storage domain 

Hi all, 

I should add that this engine is on the latest 3.5 version; the other glusterfs storage domain was added with an older 3.5 release. I don't know whether something changed between versions.

Thanks, 

Jurriën 

ovirt-engine-websocket-proxy-3.5.3.1-1.el7.centos.noarch 
ovirt-release35-004-1.noarch 
ovirt-engine-backend-3.5.3.1-1.el7.centos.noarch 
ovirt-host-deploy-1.3.1-1.el7.noarch 
ovirt-engine-userportal-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-jboss-as-7.1.1-1.el7.x86_64 
ovirt-engine-lib-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-setup-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-webadmin-portal-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-dbscripts-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-extensions-api-impl-3.5.3.1-1.el7.centos.noarch 
ovirt-image-uploader-3.5.1-1.el7.centos.noarch 
ovirt-engine-cli-3.5.0.5-1.el7.centos.noarch 
ovirt-host-deploy-java-1.3.1-1.el7.noarch 
ovirt-iso-uploader-3.5.2-1.el7.centos.noarch 
ovirt-engine-setup-base-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-setup-plugin-ovirt-engine-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-setup-plugin-websocket-proxy-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-restapi-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-tools-3.5.3.1-1.el7.centos.noarch 
ovirt-engine-sdk-python-3.5.2.1-1.el7.centos.noarch 
ovirt-engine-setup-plugin-ovirt-engine-common-3.5.3.1-1.el7.centos.noarch 



On 20-07-15 18:30, Bloemen, Jurriën wrote: 




Hi all, 




I am having trouble adding a gluster storage domain:




Error while executing action AddGlusterFsStorageDomain: Error creating a storage domain 




I have mounted the gluster volume by hand and that works fine. I also checked the permissions and ownership: they are 755 and vdsm:kvm.
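
Roughly what I ran (hostname and volume as in the logs below; /mnt/test is just a scratch mount point I used for the test):

    # mount the gluster volume by hand, using the same source vdsm uses
    sudo mount -t glusterfs superstore001-stor.cs.example.com:/ovirtprd01 /mnt/test

    # check permissions and ownership of the mount root (expect 755 and vdsm:kvm)
    stat -c '%a %U:%G' /mnt/test

    sudo umount /mnt/test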

I also tried reinstalling the oVirt Node.




Perhaps worth mentioning: I already have one gluster storage domain running perfectly, but adding a new one does not work.




Can somebody help me? 




This is the engine.log of the oVirt manager: 




2015-07-20 18:23:30,766 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp--127.0.0.1-8702-3) [61712435] Lock Acquired to object EngineLock [exclusiveLocks= key: superstore001-stor.cs.example.com:/ovirtprd01 value: STORAGE_CONNECTION 

, sharedLocks= ] 

2015-07-20 18:23:30,791 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp--127.0.0.1-8702-3) [61712435] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN 

2015-07-20 18:23:30,811 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--127.0.0.1-8702-3) [61712435] START, ConnectStorageServerVDSCommand(HostName = twin189, HostId = ad392b6d-12b8-4f4d-98a1-49e87443eddc, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = GLUSTERFS, connectionList = [{ id: null, connection: superstore001-stor.cs.example.com:/ovirtprd01, iqn: null, vfsType: glusterfs, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 44c1a9f2 

2015-07-20 18:23:31,074 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--127.0.0.1-8702-3) [61712435] FINISH, ConnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=0}, log id: 44c1a9f2 

2015-07-20 18:23:31,085 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp--127.0.0.1-8702-3) [61712435] Lock freed to object EngineLock [exclusiveLocks= key: superstore001-stor.cs.example.com:/ovirtprd01 value: STORAGE_CONNECTION 

, sharedLocks= ] 

2015-07-20 18:23:31,138 WARN [org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (ajp--127.0.0.1-8702-3) [8d11fec] The message key AddGlusterFsStorageDomain is missing from bundles/ExecutionMessages 

2015-07-20 18:23:31,156 INFO [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (ajp--127.0.0.1-8702-3) [8d11fec] Running command: AddGlusterFsStorageDomainCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN 

2015-07-20 18:23:31,183 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--127.0.0.1-8702-3) [8d11fec] START, ConnectStorageServerVDSCommand(HostName = twin189, HostId = ad392b6d-12b8-4f4d-98a1-49e87443eddc, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = GLUSTERFS, connectionList = [{ id: ece6ad42-be90-4980-98dd-d7ae12cc6709, connection: superstore001-stor.cs.example.com:/ovirtprd01, iqn: null, vfsType: glusterfs, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 3bb1d8b0 

2015-07-20 18:23:31,207 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--127.0.0.1-8702-3) [8d11fec] FINISH, ConnectStorageServerVDSCommand, return: {ece6ad42-be90-4980-98dd-d7ae12cc6709=0}, log id: 3bb1d8b0 

2015-07-20 18:23:31,223 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-3) [8d11fec] START, CreateStorageDomainVDSCommand(HostName = twin189, HostId = ad392b6d-12b8-4f4d-98a1-49e87443eddc, storageDomain=StorageDomainStatic[ovirtprd01, d6df7930-342a-493a-b70b-fb1c52b0828c], args=superstore001-stor.cs.example.com:/ovirtprd01), log id: 42df6467 

2015-07-20 18:23:31,356 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-3) [8d11fec] Failed in CreateStorageDomainVDS method 

2015-07-20 18:23:31,358 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-3) [8d11fec] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return value 

StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=351, mMessage=Error creating a storage domain: (u'storageType=7, sdUUID=d6df7930-342a-493a-b70b-fb1c52b0828c, domainName=ovirtprd01, domClass=1, typeSpecificArg=superstore001-stor.cs.example.com:/ovirtprd01 domVersion=3',)]] 

2015-07-20 18:23:31,364 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-3) [8d11fec] HostName = twin189 

2015-07-20 18:23:31,366 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-3) [8d11fec] Command CreateStorageDomainVDSCommand(HostName = twin189, HostId = ad392b6d-12b8-4f4d-98a1-49e87443eddc, storageDomain=StorageDomainStatic[ovirtprd01, d6df7930-342a-493a-b70b-fb1c52b0828c], args=superstore001-stor.cs.example.com:/ovirtprd01) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Error creating a storage domain: (u'storageType=7, sdUUID=d6df7930-342a-493a-b70b-fb1c52b0828c, domainName=ovirtprd01, domClass=1, typeSpecificArg=superstore001-stor.cs.example.com:/ovirtprd01 domVersion=3',), code = 351 

2015-07-20 18:23:31,375 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-3) [8d11fec] FINISH, CreateStorageDomainVDSCommand, log id: 42df6467 

2015-07-20 18:23:31,377 ERROR [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (ajp--127.0.0.1-8702-3) [8d11fec] Command org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Error creating a storage domain: (u'storageType=7, sdUUID=d6df7930-342a-493a-b70b-fb1c52b0828c, domainName=ovirtprd01, domClass=1, typeSpecificArg=superstore001-stor.cs.example.com:/ovirtprd01 domVersion=3',), code = 351 (Failed with error StorageDomainCreationError and code 351) 

2015-07-20 18:23:31,386 INFO [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (ajp--127.0.0.1-8702-3) [8d11fec] Command [id=b1e0d0ec-526b-42e4-997f-ff55614e0797]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot: d6df7930-342a-493a-b70b-fb1c52b0828c. 

2015-07-20 18:23:31,390 INFO [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (ajp--127.0.0.1-8702-3) [8d11fec] Command [id=b1e0d0ec-526b-42e4-997f-ff55614e0797]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: d6df7930-342a-493a-b70b-fb1c52b0828c. 

2015-07-20 18:23:31,397 ERROR [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (ajp--127.0.0.1-8702-3) [8d11fec] Transaction rolled-back for command: org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand. 

2015-07-20 18:23:31,404 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-3) [8d11fec] Correlation ID: 8d11fec, Job ID: 9c59789a-ab38-4171-96c5-3d8da688e2bb, Call Stack: null, Custom Event ID: -1, Message: Failed to add Storage Domain ovirtprd01. (User: admin at internal) 

2015-07-20 18:23:31,451 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (ajp--127.0.0.1-8702-3) [406a3269] Lock Acquired to object EngineLock [exclusiveLocks= key: ece6ad42-be90-4980-98dd-d7ae12cc6709 value: STORAGE_CONNECTION 

key: superstore001-stor.cs.example.com:/ovirtprd01 value: STORAGE_CONNECTION 

, sharedLocks= ] 

2015-07-20 18:23:31,607 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (ajp--127.0.0.1-8702-3) [406a3269] Running command: RemoveStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN 

2015-07-20 18:23:31,615 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (ajp--127.0.0.1-8702-3) [406a3269] Removing connection ece6ad42-be90-4980-98dd-d7ae12cc6709 from database 

2015-07-20 18:23:31,631 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (ajp--127.0.0.1-8702-3) [406a3269] START, DisconnectStorageServerVDSCommand(HostName = twin189, HostId = ad392b6d-12b8-4f4d-98a1-49e87443eddc, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = GLUSTERFS, connectionList = [{ id: ece6ad42-be90-4980-98dd-d7ae12cc6709, connection: superstore001-stor.cs.example.com:/ovirtprd01, iqn: null, vfsType: glusterfs, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 372ee09d 

2015-07-20 18:23:31,723 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (ajp--127.0.0.1-8702-3) [406a3269] FINISH, DisconnectStorageServerVDSCommand, return: {ece6ad42-be90-4980-98dd-d7ae12cc6709=0}, log id: 372ee09d 

2015-07-20 18:23:31,728 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (ajp--127.0.0.1-8702-3) [406a3269] Lock freed to object EngineLock [exclusiveLocks= key: ece6ad42-be90-4980-98dd-d7ae12cc6709 value: STORAGE_CONNECTION 

key: superstore001-stor.cs.example.com:/ovirtprd01 value: STORAGE_CONNECTION 

, sharedLocks= ] 




And this is the vdsm.log of the hypervisor:




JsonRpc (StompReactor)::DEBUG::2015-07-20 16:23:25,456::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'> 

JsonRpcServer::DEBUG::2015-07-20 16:23:25,458::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request 

Thread-199::DEBUG::2015-07-20 16:23:25,460::stompReactor::163::yajsonrpc.StompServer::(send) Sending response 

JsonRpc (StompReactor)::DEBUG::2015-07-20 16:23:28,483::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'> 

JsonRpcServer::DEBUG::2015-07-20 16:23:28,484::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request 

Thread-200::DEBUG::2015-07-20 16:23:28,485::stompReactor::163::yajsonrpc.StompServer::(send) Sending response 

JsonRpc (StompReactor)::DEBUG::2015-07-20 16:23:30,124::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'> 

JsonRpcServer::DEBUG::2015-07-20 16:23:30,125::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request 

Thread-201::DEBUG::2015-07-20 16:23:30,125::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'superstore001-stor.cs.example.com:/ovirtprd01', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': u'', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7} 

Thread-201::DEBUG::2015-07-20 16:23:30,127::task::595::Storage.TaskManager.Task::(_updateState) Task=`417462a8-3817-4e44-ac89-20ea4e3c8709`::moving from state init -> state preparing 

Thread-201::INFO::2015-07-20 16:23:30,127::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'port': u'', u'connection': u'superstore001-stor.cs.example.com:/ovirtprd01', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '******', u'id': u'00000000-0000-0000-0000-000000000000'}], options=None) 

Thread-201::DEBUG::2015-07-20 16:23:30,137::fileUtils::142::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/superstore001-stor.cs.example.com:_ovirtprd01 

Thread-201::DEBUG::2015-07-20 16:23:30,138::mount::227::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/mount -t glusterfs superstore001-stor.cs.example.com:/ovirtprd01 /rhev/data-center/mnt/glusterSD/superstore001-stor.cs.example.com:_ovirtprd01 (cwd None) 

Thread-201::DEBUG::2015-07-20 16:23:30,371::hsm::2375::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/* 

Thread-201::DEBUG::2015-07-20 16:23:30,377::hsm::2387::Storage.HSM::(__prefetchDomains) Found SD uuids: () 

Thread-201::DEBUG::2015-07-20 16:23:30,377::hsm::2443::Storage.HSM::(connectStorageServer) knownSDs: {} 

Thread-201::INFO::2015-07-20 16:23:30,377::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]} 

Thread-201::DEBUG::2015-07-20 16:23:30,377::task::1191::Storage.TaskManager.Task::(prepare) Task=`417462a8-3817-4e44-ac89-20ea4e3c8709`::finished: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]} 

Thread-201::DEBUG::2015-07-20 16:23:30,377::task::595::Storage.TaskManager.Task::(_updateState) Task=`417462a8-3817-4e44-ac89-20ea4e3c8709`::moving from state preparing -> state finished 

Thread-201::DEBUG::2015-07-20 16:23:30,377::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} 

Thread-201::DEBUG::2015-07-20 16:23:30,378::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 

Thread-201::DEBUG::2015-07-20 16:23:30,378::task::993::Storage.TaskManager.Task::(_decref) Task=`417462a8-3817-4e44-ac89-20ea4e3c8709`::ref 0 aborting False 

Thread-201::DEBUG::2015-07-20 16:23:30,378::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}] 

Thread-201::DEBUG::2015-07-20 16:23:30,378::stompReactor::163::yajsonrpc.StompServer::(send) Sending response 

JsonRpc (StompReactor)::DEBUG::2015-07-20 16:23:30,497::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'> 

JsonRpcServer::DEBUG::2015-07-20 16:23:30,497::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request 

Thread-203::DEBUG::2015-07-20 16:23:30,497::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709', u'connection': u'superstore001-stor.cs.example.com:/ovirtprd01', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': u'', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7} 

Thread-203::DEBUG::2015-07-20 16:23:30,498::task::595::Storage.TaskManager.Task::(_updateState) Task=`627a1260-b231-493f-b0f1-14c0a8501f49`::moving from state init -> state preparing 

Thread-203::INFO::2015-07-20 16:23:30,499::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'port': u'', u'connection': u'superstore001-stor.cs.example.com:/ovirtprd01', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '******', u'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709'}], options=None) 

Thread-203::DEBUG::2015-07-20 16:23:30,505::hsm::2375::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/* 

Thread-203::DEBUG::2015-07-20 16:23:30,511::hsm::2387::Storage.HSM::(__prefetchDomains) Found SD uuids: () 

Thread-203::DEBUG::2015-07-20 16:23:30,511::hsm::2443::Storage.HSM::(connectStorageServer) knownSDs: {} 

Thread-203::INFO::2015-07-20 16:23:30,511::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709'}]} 

Thread-203::DEBUG::2015-07-20 16:23:30,511::task::1191::Storage.TaskManager.Task::(prepare) Task=`627a1260-b231-493f-b0f1-14c0a8501f49`::finished: {'statuslist': [{'status': 0, 'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709'}]} 

Thread-203::DEBUG::2015-07-20 16:23:30,511::task::595::Storage.TaskManager.Task::(_updateState) Task=`627a1260-b231-493f-b0f1-14c0a8501f49`::moving from state preparing -> state finished 

Thread-203::DEBUG::2015-07-20 16:23:30,511::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} 

Thread-203::DEBUG::2015-07-20 16:23:30,511::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 

Thread-203::DEBUG::2015-07-20 16:23:30,511::task::993::Storage.TaskManager.Task::(_decref) Task=`627a1260-b231-493f-b0f1-14c0a8501f49`::ref 0 aborting False 

Thread-203::DEBUG::2015-07-20 16:23:30,511::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709'}] 

Thread-203::DEBUG::2015-07-20 16:23:30,512::stompReactor::163::yajsonrpc.StompServer::(send) Sending response 

JsonRpc (StompReactor)::DEBUG::2015-07-20 16:23:30,532::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'> 

JsonRpcServer::DEBUG::2015-07-20 16:23:30,533::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request 

Thread-205::DEBUG::2015-07-20 16:23:30,533::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StorageDomain.create' in bridge with {u'name': u'ovirtprd01', u'domainType': 7, u'domainClass': 1, u'typeArgs': u'superstore001-stor.cs.example.com:/ovirtprd01', u'version': u'3', u'storagedomainID': u'd6df7930-342a-493a-b70b-fb1c52b0828c'} 

Thread-205::DEBUG::2015-07-20 16:23:30,534::task::595::Storage.TaskManager.Task::(_updateState) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::moving from state init -> state preparing 

Thread-205::INFO::2015-07-20 16:23:30,534::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=7, sdUUID=u'd6df7930-342a-493a-b70b-fb1c52b0828c', domainName=u'ovirtprd01', typeSpecificArg=u'superstore001-stor.cs.example.com:/ovirtprd01', domClass=1, domVersion=u'3', options=None) 

Thread-205::DEBUG::2015-07-20 16:23:30,534::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) 

Thread-205::DEBUG::2015-07-20 16:23:30,534::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method 

Thread-205::DEBUG::2015-07-20 16:23:30,534::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) 

Thread-205::DEBUG::2015-07-20 16:23:30,535::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method 

Thread-205::DEBUG::2015-07-20 16:23:30,535::iscsi::403::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds 

Thread-205::DEBUG::2015-07-20 16:23:30,535::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) 

Thread-205::DEBUG::2015-07-20 16:23:30,545::misc::751::Storage.SamplingMethod::(__call__) Returning last result 

Thread-205::DEBUG::2015-07-20 16:23:30,545::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) 

Thread-205::DEBUG::2015-07-20 16:23:30,545::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method 

Thread-205::DEBUG::2015-07-20 16:23:30,545::hba::53::Storage.HBA::(rescan) Starting scan 

Thread-205::DEBUG::2015-07-20 16:23:30,545::utils::739::Storage.HBA::(execCmd) /usr/bin/sudo -n /usr/libexec/vdsm/fc-scan (cwd None) 

Thread-205::DEBUG::2015-07-20 16:23:30,565::hba::66::Storage.HBA::(rescan) Scan finished 

Thread-205::DEBUG::2015-07-20 16:23:30,565::misc::751::Storage.SamplingMethod::(__call__) Returning last result 

Thread-205::DEBUG::2015-07-20 16:23:30,565::multipath::128::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cwd None) 

Thread-205::DEBUG::2015-07-20 16:23:30,606::multipath::128::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 

Thread-205::DEBUG::2015-07-20 16:23:30,607::lvm::489::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex 

Thread-205::DEBUG::2015-07-20 16:23:30,607::lvm::491::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex 

Thread-205::DEBUG::2015-07-20 16:23:30,607::lvm::500::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex 

Thread-205::DEBUG::2015-07-20 16:23:30,607::lvm::502::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex 

Thread-205::DEBUG::2015-07-20 16:23:30,607::lvm::520::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex 

Thread-205::DEBUG::2015-07-20 16:23:30,607::lvm::522::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex 

Thread-205::DEBUG::2015-07-20 16:23:30,608::misc::751::Storage.SamplingMethod::(__call__) Returning last result 

Thread-205::ERROR::2015-07-20 16:23:30,608::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain d6df7930-342a-493a-b70b-fb1c52b0828c 

Thread-205::ERROR::2015-07-20 16:23:30,608::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain d6df7930-342a-493a-b70b-fb1c52b0828c 

Thread-205::DEBUG::2015-07-20 16:23:30,608::lvm::365::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex 

Thread-205::DEBUG::2015-07-20 16:23:30,609::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/3600304801a8505001cd001f904750aa2|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name d6df7930-342a-493a-b70b-fb1c52b0828c (cwd None) 

Thread-205::DEBUG::2015-07-20 16:23:30,630::lvm::288::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' Volume group "d6df7930-342a-493a-b70b-fb1c52b0828c" not found\n Skipping volume group d6df7930-342a-493a-b70b-fb1c52b0828c\n'; <rc> = 5 

Thread-205::WARNING::2015-07-20 16:23:30,631::lvm::370::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "d6df7930-342a-493a-b70b-fb1c52b0828c" not found', ' Skipping volume group d6df7930-342a-493a-b70b-fb1c52b0828c'] 

Thread-205::DEBUG::2015-07-20 16:23:30,631::lvm::407::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex 

Thread-205::ERROR::2015-07-20 16:23:30,644::sdc::143::Storage.StorageDomainCache::(_findDomain) domain d6df7930-342a-493a-b70b-fb1c52b0828c not found 

Traceback (most recent call last): 

File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain 

File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain 

StorageDomainDoesNotExist: Storage domain does not exist: (u'd6df7930-342a-493a-b70b-fb1c52b0828c',) 

Thread-205::INFO::2015-07-20 16:23:30,644::nfsSD::69::Storage.StorageDomain::(create) sdUUID=d6df7930-342a-493a-b70b-fb1c52b0828c domainName=ovirtprd01 remotePath=superstore001-stor.cs.example.com:/ovirtprd01 domClass=1 

Thread-205::ERROR::2015-07-20 16:23:30,659::task::866::Storage.TaskManager.Task::(_setError) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::Unexpected error 

Traceback (most recent call last): 

File "/usr/share/vdsm/storage/task.py", line 873, in _run 

File "/usr/share/vdsm/logUtils.py", line 45, in wrapper 

File "/usr/share/vdsm/storage/hsm.py", line 2670, in createStorageDomain 

File "/usr/share/vdsm/storage/nfsSD.py", line 80, in create 

File "/usr/share/vdsm/storage/nfsSD.py", line 49, in _preCreateValidation 

File "/usr/share/vdsm/storage/fileSD.py", line 88, in validateFileSystemFeatures 

File "/usr/share/vdsm/storage/outOfProcess.py", line 320, in directTouch 

File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 507, in touch 

File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 391, in _sendCommand 

OSError: [Errno 2] No such file or directory 

Thread-205::DEBUG::2015-07-20 16:23:30,659::task::885::Storage.TaskManager.Task::(_run) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::Task._run: 050e9378-ba78-4e6f-b986-0dda7bb09aa7 (7, u'd6df7930-342a-493a-b70b-fb1c52b0828c', u'ovirtprd01', u'superstore001-stor.cs.example.com:/ovirtprd01', 1, u'3') {} failed - stopping task 

Thread-205::DEBUG::2015-07-20 16:23:30,659::task::1217::Storage.TaskManager.Task::(stop) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::stopping in state preparing (force False) 

Thread-205::DEBUG::2015-07-20 16:23:30,659::task::993::Storage.TaskManager.Task::(_decref) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::ref 1 aborting True 

Thread-205::INFO::2015-07-20 16:23:30,659::task::1171::Storage.TaskManager.Task::(prepare) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::aborting: Task is aborted: u'[Errno 2] No such file or directory' - code 100 

Thread-205::DEBUG::2015-07-20 16:23:30,659::task::1176::Storage.TaskManager.Task::(prepare) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::Prepare: aborted: [Errno 2] No such file or directory 

Thread-205::DEBUG::2015-07-20 16:23:30,660::task::993::Storage.TaskManager.Task::(_decref) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::ref 0 aborting True 

Thread-205::DEBUG::2015-07-20 16:23:30,660::task::928::Storage.TaskManager.Task::(_doAbort) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::Task._doAbort: force False 

Thread-205::DEBUG::2015-07-20 16:23:30,660::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 

Thread-205::DEBUG::2015-07-20 16:23:30,660::task::595::Storage.TaskManager.Task::(_updateState) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::moving from state preparing -> state aborting 

Thread-205::DEBUG::2015-07-20 16:23:30,660::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::_aborting: recover policy none 

Thread-205::DEBUG::2015-07-20 16:23:30,660::task::595::Storage.TaskManager.Task::(_updateState) Task=`050e9378-ba78-4e6f-b986-0dda7bb09aa7`::moving from state aborting -> state failed 

Thread-205::DEBUG::2015-07-20 16:23:30,660::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} 

Thread-205::DEBUG::2015-07-20 16:23:30,660::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 

Thread-205::ERROR::2015-07-20 16:23:30,660::dispatcher::79::Storage.Dispatcher::(wrapper) [Errno 2] No such file or directory 

Traceback (most recent call last): 

File "/usr/share/vdsm/storage/dispatcher.py", line 71, in wrapper 

File "/usr/share/vdsm/storage/task.py", line 103, in wrapper 

File "/usr/share/vdsm/storage/task.py", line 1179, in prepare 

OSError: [Errno 2] No such file or directory 

Thread-205::DEBUG::2015-07-20 16:23:30,660::stompReactor::163::yajsonrpc.StompServer::(send) Sending response 

JsonRpc (StompReactor)::DEBUG::2015-07-20 16:23:30,945::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'> 

JsonRpcServer::DEBUG::2015-07-20 16:23:30,945::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request 

Thread-208::DEBUG::2015-07-20 16:23:30,945::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams': [{u'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709', u'connection': u'superstore001-stor.cs.example.com:/ovirtprd01', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': u'', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7} 

Thread-208::DEBUG::2015-07-20 16:23:30,946::task::595::Storage.TaskManager.Task::(_updateState) Task=`9f38f64c-24e9-4f6b-9487-39384266bc71`::moving from state init -> state preparing 

Thread-208::INFO::2015-07-20 16:23:30,947::logUtils::44::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'port': u'', u'connection': u'superstore001-stor.cs.example.com:/ovirtprd01', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '******', u'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709'}], options=None) 

Thread-208::DEBUG::2015-07-20 16:23:30,947::mount::227::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/superstore001-stor.cs.example.com:_ovirtprd01 (cwd None) 

Thread-208::DEBUG::2015-07-20 16:23:30,958::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) 

Thread-208::DEBUG::2015-07-20 16:23:30,958::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method 

Thread-208::DEBUG::2015-07-20 16:23:30,958::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) 

Thread-208::DEBUG::2015-07-20 16:23:30,958::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method 

Thread-208::DEBUG::2015-07-20 16:23:30,958::iscsi::403::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds 

Thread-208::DEBUG::2015-07-20 16:23:30,958::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None) 

Thread-208::DEBUG::2015-07-20 16:23:30,971::misc::751::Storage.SamplingMethod::(__call__) Returning last result 

Thread-208::DEBUG::2015-07-20 16:23:30,971::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan) 

Thread-208::DEBUG::2015-07-20 16:23:30,971::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method 

Thread-208::DEBUG::2015-07-20 16:23:30,971::hba::53::Storage.HBA::(rescan) Starting scan 

Thread-208::DEBUG::2015-07-20 16:23:30,971::utils::739::Storage.HBA::(execCmd) /usr/bin/sudo -n /usr/libexec/vdsm/fc-scan (cwd None) 

Thread-208::DEBUG::2015-07-20 16:23:30,989::hba::66::Storage.HBA::(rescan) Scan finished 

Thread-208::DEBUG::2015-07-20 16:23:30,989::misc::751::Storage.SamplingMethod::(__call__) Returning last result 

Thread-208::DEBUG::2015-07-20 16:23:30,989::multipath::128::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cwd None) 

Thread-208::DEBUG::2015-07-20 16:23:31,025::multipath::128::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 

Thread-208::DEBUG::2015-07-20 16:23:31,026::lvm::489::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex 

Thread-208::DEBUG::2015-07-20 16:23:31,026::lvm::491::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex 

Thread-208::DEBUG::2015-07-20 16:23:31,026::lvm::500::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex 

Thread-208::DEBUG::2015-07-20 16:23:31,026::lvm::502::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex 

Thread-208::DEBUG::2015-07-20 16:23:31,026::lvm::520::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex 

Thread-208::DEBUG::2015-07-20 16:23:31,026::lvm::522::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex 

Thread-208::DEBUG::2015-07-20 16:23:31,026::misc::751::Storage.SamplingMethod::(__call__) Returning last result 

Thread-208::INFO::2015-07-20 16:23:31,026::logUtils::47::dispatcher::(wrapper) Run and protect: disconnectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709'}]} 

Thread-208::DEBUG::2015-07-20 16:23:31,026::task::1191::Storage.TaskManager.Task::(prepare) Task=`9f38f64c-24e9-4f6b-9487-39384266bc71`::finished: {'statuslist': [{'status': 0, 'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709'}]} 

Thread-208::DEBUG::2015-07-20 16:23:31,027::task::595::Storage.TaskManager.Task::(_updateState) Task=`9f38f64c-24e9-4f6b-9487-39384266bc71`::moving from state preparing -> state finished 

Thread-208::DEBUG::2015-07-20 16:23:31,027::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} 

Thread-208::DEBUG::2015-07-20 16:23:31,027::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} 

Thread-208::DEBUG::2015-07-20 16:23:31,027::task::993::Storage.TaskManager.Task::(_decref) Task=`9f38f64c-24e9-4f6b-9487-39384266bc71`::ref 0 aborting False 

Thread-208::DEBUG::2015-07-20 16:23:31,027::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.disconnectStorageServer' in bridge with [{'status': 0, 'id': u'ece6ad42-be90-4980-98dd-d7ae12cc6709'}] 

Thread-208::DEBUG::2015-07-20 16:23:31,027::stompReactor::163::yajsonrpc.StompServer::(send) Sending response 

JsonRpc (StompReactor)::DEBUG::2015-07-20 16:23:31,505::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'> 

JsonRpcServer::DEBUG::2015-07-20 16:23:31,506::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request 

Thread-209::DEBUG::2015-07-20 16:23:31,507::stompReactor::163::yajsonrpc.StompServer::(send) Sending response 

JsonRpc (StompReactor)::DEBUG::2015-07-20 16:23:34,537::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'> 

JsonRpcServer::DEBUG::2015-07-20 16:23:34,537::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request 

Thread-210::DEBUG::2015-07-20 16:23:34,538::stompReactor::163::yajsonrpc.StompServer::(send) Sending response 
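
The part that stands out to me is the ioprocess traceback above: validateFileSystemFeatures fails with [Errno 2] right after the mount itself succeeds. Re-mounting the volume and approximating that check by hand would look roughly like this (mount path copied from the log; vdsm's real check goes through ioprocess with O_DIRECT, so this is only a sketch, and __test__ is just a probe file name I made up):

    # attempt a direct-I/O write on the new mount as the vdsm user
    sudo -u vdsm dd if=/dev/zero bs=4096 count=1 oflag=direct \
        of=/rhev/data-center/mnt/glusterSD/superstore001-stor.cs.example.com:_ovirtprd01/__test__

    # clean up the probe file afterwards
    sudo -u vdsm rm -f /rhev/data-center/mnt/glusterSD/superstore001-stor.cs.example.com:_ovirtprd01/__test__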




Thanks in advance, 




Jurriën 










