+ ovirt-users
Some clarity on your setup, please -
sjcvhost03 - is this your arbiter node and oVirt management node? And
are you running compute + storage on the same nodes, i.e.,
sjcstorage01, sjcstorage02, and sjcvhost03 (arbiter)?
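If you want to double-check the layout, the brick roles can be read off
any of the storage nodes (a quick sketch, assuming the volume is named
vmstore as in your logs):

    # gluster volume info vmstore

For an arbiter setup the output should show "Number of Bricks: 1 x (2 +
1) = 3", with the third brick acting as the arbiter.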
CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587
- fails with: Error creating a storage domain's metadata: ("create meta
file 'outbox' failed: [Errno 5] Input/output error",)
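Since this is a plain EIO coming back from the mounted volume, it may
be worth trying to reproduce it outside vdsm on the same host (test
file names below are hypothetical; the mount path is taken from your
vdsm log):

    # touch /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore/test-file
    # dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore/test-dio \
         bs=4096 count=1 oflag=direct

If either of those also fails with an input/output error, the problem
is at the gluster layer rather than in vdsm.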
Are the vdsm logs you provided from sjcvhost03? There are no errors to
be seen in the gluster log you provided. Could you provide the mount
log from sjcvhost03 (most likely at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log)?
If possible, also provide /var/log/glusterfs/* from the 3 storage nodes.
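Something like this on each node would do (archive name and /tmp
destination are just examples):

    # tar czf /tmp/$(hostname)-glusterfs-logs.tar.gz /var/log/glusterfs/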
thanks
sahina
On 09/23/2015 05:02 AM, Brett Stevens wrote:
Hi Sahina,
As requested, here are some logs taken during a domain create.
2015-09-22 18:46:44,320 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 2205ff1
2015-09-22 18:46:44,413 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate brick
'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate brick
'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:44,418 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
log id: 2205ff1
2015-09-22 18:46:45,215 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock Acquired to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2015-09-22 18:46:45,230 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Running command:
AddStorageServerConnectionCommand internal: false. Entities affected
: ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2015-09-22 18:46:45,233 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='null',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs',
mountOptions='null', nfsVersion='null', nfsRetrans='null',
nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log
id: 6a112292
2015-09-22 18:46:48,065 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] FINISH, ConnectStorageServerVDSCommand,
return: {00000000-0000-0000-0000-000000000000=0}, log id: 6a112292
2015-09-22 18:46:48,073 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock freed to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2015-09-22 18:46:48,188 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Running command:
AddGlusterFsStorageDomainCommand internal: false. Entities affected :
ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2015-09-22 18:46:48,206 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs',
mountOptions='null', nfsVersion='null', nfsRetrans='null',
nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log
id: 38a2b0d
2015-09-22 18:46:48,219 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] FINISH, ConnectStorageServerVDSCommand,
return: {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 38a2b0d
2015-09-22 18:46:48,221 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] START,
CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587
2015-09-22 18:46:48,744 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: null, Call Stack: null,
Custom Event ID: -1, Message: VDSM sjcvhost03 command failed: Error
creating a storage domain's metadata: ("create meta file 'outbox'
failed: [Errno 5] Input/output error",)
2015-09-22 18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
return value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc
[code=362, message=Error creating a storage domain's metadata:
("create meta file 'outbox' failed: [Errno 5] Input/output
error",)]]'
2015-09-22 18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] HostName = sjcvhost03
2015-09-22 18:46:48,745 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'})' execution failed: VDSGenericException:
VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS,
error = Error creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output error",)
2015-09-22 18:46:48,745 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] FINISH, CreateStorageDomainVDSCommand, log
id: b9fe587
2015-09-22 18:46:48,745 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
failed: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed in vdscommand to
CreateStorageDomainVDS, error = Error creating a storage domain's
metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
error",) (Failed with error StorageDomainMetadataCreationError and
code 362)
2015-09-22 18:46:48,755 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID
of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
2015-09-22 18:46:48,758 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID
of org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
2015-09-22 18:46:48,769 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Transaction rolled-back for command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.
2015-09-22 18:46:48,784 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: 6410419, Job ID:
78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack: null, Custom Event
ID: -1, Message: Failed to add Storage Domain sjcvmstore. (User:
admin@internal)
2015-09-22 18:46:48,996 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock Acquired to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2015-09-22 18:46:49,018 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Running command:
RemoveStorageServerConnectionCommand internal: false. Entities
affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-09-22 18:46:49,024 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Removing connection
'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database
2015-09-22 18:46:49,026 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] START,
DisconnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs',
mountOptions='null', nfsVersion='null', nfsRetrans='null',
nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log
id: 39d3b568
2015-09-22 18:46:49,248 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] FINISH,
DisconnectStorageServerVDSCommand, return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 39d3b568
2015-09-22 18:46:49,252 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock freed to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2015-09-22 18:46:49,431 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 17014ae8
2015-09-22 18:46:49,511 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate brick
'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:49,515 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate brick
'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:49,516 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 18:46:49,516 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
log id: 17014ae8
oVirt engine thinks that sjcstorage01 is sjcstorage01; it's all testbed
at the moment, and all short names are defined in /etc/hosts (all
copied to each server for consistency).
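(To confirm resolution really is consistent, the same lookup can be run
on every node and compared - hosts listed here as an example:

    # getent hosts sjcstorage01 sjcstorage02 sjcvhost02 sjcvhost03

Each node should print identical addresses.)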
Volume status for vmstore is:
Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick sjcstorage01:/export/vmstore/brick01  49157     0          Y       7444
Brick sjcstorage02:/export/vmstore/brick01  49157     0          Y       4063
Brick sjcvhost02:/export/vmstore/brick01    49156     0          Y       3243
NFS Server on localhost                     2049      0          Y       3268
Self-heal Daemon on localhost               N/A       N/A        Y       3284
NFS Server on sjcstorage01                  2049      0          Y       7463
Self-heal Daemon on sjcstorage01            N/A       N/A        Y       7472
NFS Server on sjcstorage02                  2049      0          Y       4082
Self-heal Daemon on sjcstorage02            N/A       N/A        Y       4090

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
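(Side note on the 'server uuid ... not found in cluster' warnings in
the engine log above: the UUIDs gluster itself knows about can be
listed from any storage node and compared against the one the engine
complains about - a sketch, assuming a current gluster CLI:

    # gluster peer status
    # gluster system:: uuid get

The second command prints the local node's own UUID.)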
vdsm logs from the time the domain is added:
Thread-789::DEBUG::2015-09-22
19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state init ->
state preparing
Thread-790::INFO::2015-09-22
19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-790::INFO::2015-09-22
19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-790::DEBUG::2015-09-22
19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}
Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state
preparing -> state finished
Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-790::DEBUG::2015-09-22
19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting False
Thread-790::DEBUG::2015-09-22
19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Reactor thread::INFO::2015-09-22
19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:52510
Reactor thread::DEBUG::2015-09-22
19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-09-22
19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:52510
Reactor thread::DEBUG::2015-09-22
19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 52510)
BindingXMLRPC::INFO::2015-09-22
19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:52510
Thread-791::INFO::2015-09-22
19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52510 started
Thread-791::INFO::2015-09-22
19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52510 stopped
Thread-792::DEBUG::2015-09-22
19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state init ->
state preparing
Thread-793::INFO::2015-09-22
19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-793::INFO::2015-09-22
19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-793::DEBUG::2015-09-22
19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}
Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state
preparing -> state finished
Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-793::DEBUG::2015-09-22
19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting False
Thread-793::DEBUG::2015-09-22
19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Reactor thread::INFO::2015-09-22
19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:52511
Reactor thread::DEBUG::2015-09-22
19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-09-22
19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:52511
Reactor thread::DEBUG::2015-09-22
19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 52511)
BindingXMLRPC::INFO::2015-09-22
19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:52511
Thread-794::INFO::2015-09-22
19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52511 started
Thread-794::INFO::2015-09-22
19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52511 stopped
Thread-795::DEBUG::2015-09-22
19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'00000000-0000-0000-0000-000000000000', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user':
u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs', u'password': '********',
u'port': u''}],
u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
u'domainType': 7}
Thread-795::DEBUG::2015-09-22
19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state init ->
state preparing
Thread-795::INFO::2015-09-22
19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
u'00000000-0000-0000-0000-000000000000', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user':
u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs', u'password': '********',
u'port': u''}],
options=None)
Thread-795::DEBUG::2015-09-22
19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir) Creating
directory: /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode:
None
Thread-795::DEBUG::2015-09-22
19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo
-n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
-t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02
sjcstorage01:/vmstore
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
Thread-795::DEBUG::2015-09-22
19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*
Thread-796::DEBUG::2015-09-22
19:12:35,707::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-797::DEBUG::2015-09-22
19:12:35,712::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
uuids: ()
Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
Thread-795::INFO::2015-09-22
19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0,
'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-795::DEBUG::2015-09-22
19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished: {'statuslist':
[{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-795::DEBUG::2015-09-22
19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state
preparing -> state finished
Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-795::DEBUG::2015-09-22
19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting False
Thread-795::DEBUG::2015-09-22
19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with [{'status':
0, 'id': u'00000000-0000-0000-0000-000000000000'}]
Thread-795::DEBUG::2015-09-22
19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Thread-798::DEBUG::2015-09-22
19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user':
u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs', u'password': '********',
u'port': u''}],
u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
u'domainType': 7}
Thread-798::DEBUG::2015-09-22
19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state init ->
state preparing
Thread-798::INFO::2015-09-22
19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user':
u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs', u'password': '********',
u'port': u''}],
options=None)
Thread-798::DEBUG::2015-09-22
19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*
Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
uuids: ()
Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
Thread-798::INFO::2015-09-22
19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0,
'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-798::DEBUG::2015-09-22
19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished: {'statuslist':
[{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-798::DEBUG::2015-09-22
19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state
preparing -> state finished
Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-798::DEBUG::2015-09-22
19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting False
Thread-798::DEBUG::2015-09-22
19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with [{'status':
0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
Thread-798::DEBUG::2015-09-22
19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Thread-801::DEBUG::2015-09-22
19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StorageDomain.create' in bridge with {u'name':
u'sjcvmstore01', u'domainType': 7, u'domainClass': 1,
u'typeArgs':
u'sjcstorage01:/vmstore', u'version': u'3',
u'storagedomainID':
u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
Thread-801::DEBUG::2015-09-22
19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state init ->
state preparing
Thread-801::INFO::2015-09-22
19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and protect:
createStorageDomain(storageType=7,
sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
domainName=u'sjcvmstore01', typeSpecificArg=u'sjcstorage01:/vmstore',
domClass=1, domVersion=u'3', options=None)
Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to
enter sampling method (storage.sdc.refreshStorage)
Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to
enter sampling method (storage.iscsi.rescan)
Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI
scan, this will take up to 30 seconds
Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::743::Storage.SamplingMethod::(__call__) Returning
last result
Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::733::Storage.SamplingMethod::(__call__) Trying to
enter sampling method (storage.hba.rescan)
Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::736::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-801::DEBUG::2015-09-22
19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan
Thread-802::DEBUG::2015-09-22
19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Thread-801::DEBUG::2015-09-22
19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished
Thread-801::DEBUG::2015-09-22
19:12:35,912::misc::743::Storage.SamplingMethod::(__call__) Returning
last result
Thread-801::DEBUG::2015-09-22
19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-801::DEBUG::2015-09-22
19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
<err> = ''; <rc> = 0
Thread-801::DEBUG::2015-09-22
19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm settle
--timeout=5 (cwd None)
Thread-801::DEBUG::2015-09-22
19:12:35,946::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22
19:12:35,948::misc::743::Storage.SamplingMethod::(__call__) Returning
last result
Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
Thread-801::DEBUG::2015-09-22
19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation
'lvm reload operation' got the operation mutex
Thread-801::DEBUG::2015-09-22
19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
/usr/sbin/lvm vgs --config ' devices { preferred_names =
["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
'\''r|.*|'\'' ] } global { locking_type=1
prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50
retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
Thread-801::DEBUG::2015-09-22
19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
WARNING: lvmetad is running but disabled. Restart lvmetad before
enabling it!\n Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3"
not found\n Cannot process volume group
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
Thread-801::WARNING::2015-09-22
19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
[' WARNING: lvmetad is running but disabled. Restart lvmetad before
enabling it!', ' Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3"
not found', ' Cannot process volume group
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
Thread-801::DEBUG::2015-09-22
19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation
'lvm reload operation' released the operation mutex
Thread-801::ERROR::2015-09-22
19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 172, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
Thread-801::INFO::2015-09-22
19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 domainName=sjcvmstore01
remotePath=sjcstorage01:/vmstore domClass=1
Thread-801::DEBUG::2015-09-22
19:12:36,015::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-801::ERROR::2015-09-22
19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2697, in createStorageDomain
    domVersion)
  File "/usr/share/vdsm/storage/nfsSD.py", line 84, in create
    remotePath, storageType, version)
  File "/usr/share/vdsm/storage/fileSD.py", line 264, in _prepareMetadata
    "create meta file '%s' failed: %s" % (metaFile, str(e)))
StorageDomainMetadataCreationError: Error creating a storage domain's
metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
error",)
Thread-801::DEBUG::2015-09-22
19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01',
u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping task
Thread-801::DEBUG::2015-09-22
19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in state
preparing (force False)
Thread-801::DEBUG::2015-09-22
19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting True
Thread-801::INFO::2015-09-22
19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task is
aborted: "Error creating a storage domain's metadata" - code 362
Thread-801::DEBUG::2015-09-22
19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare: aborted: Error
creating a storage domain's metadata
Thread-801::DEBUG::2015-09-22
19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting True
Thread-801::DEBUG::2015-09-22
19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort: force False
Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state
preparing -> state aborting
Thread-801::DEBUG::2015-09-22
19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting: recover policy
none
Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state
aborting -> state failed
Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-801::ERROR::2015-09-22
19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Error creating a storage domain\'s metadata: ("create
meta file \'outbox\' failed: [Errno 5] Input/output error",)',
'code':
362}}
Thread-801::DEBUG::2015-09-22
19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Thread-807::DEBUG::2015-09-22
19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.disconnectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user':
u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs', u'password': '********',
u'port': u''}],
u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
u'domainType': 7}
Thread-807::DEBUG::2015-09-22
19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state init ->
state preparing
Thread-807::INFO::2015-09-22
19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and protect:
disconnectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user':
u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs', u'password': '********',
u'port': u''}],
options=None)
Thread-807::DEBUG::2015-09-22
19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo
-n /usr/bin/umount -f -l
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to
enter sampling method (storage.sdc.refreshStorage)
Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to
enter sampling method (storage.iscsi.rescan)
Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI
scan, this will take up to 30 seconds
Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::743::Storage.SamplingMethod::(__call__) Returning
last result
Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::733::Storage.SamplingMethod::(__call__) Trying to
enter sampling method (storage.hba.rescan)
Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::736::Storage.SamplingMethod::(__call__) Got in to
sampling method
Thread-807::DEBUG::2015-09-22
19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan
Thread-807::DEBUG::2015-09-22
19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished
Thread-807::DEBUG::2015-09-22
19:12:36,350::misc::743::Storage.SamplingMethod::(__call__) Returning
last result
Thread-807::DEBUG::2015-09-22
19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-807::DEBUG::2015-09-22
19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
<err> = ''; <rc> = 0
Thread-807::DEBUG::2015-09-22
19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm settle
--timeout=5 (cwd None)
Thread-807::DEBUG::2015-09-22
19:12:36,383::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> =
0
Thread-807::DEBUG::2015-09-22
19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex
Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation mutex
Thread-807::DEBUG::2015-09-22
19:12:36,386::misc::743::Storage.SamplingMethod::(__call__) Returning
last result
Thread-807::INFO::2015-09-22
19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and protect:
disconnectStorageServer, Return response: {'statuslist': [{'status':
0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-807::DEBUG::2015-09-22
19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished: {'statuslist':
[{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-807::DEBUG::2015-09-22
19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state
preparing -> state finished
Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-807::DEBUG::2015-09-22
19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting False
Thread-807::DEBUG::2015-09-22
19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.disconnectStorageServer' in bridge with
[{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
Thread-807::DEBUG::2015-09-22
19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state init ->
state preparing
Thread-808::INFO::2015-09-22
19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-808::INFO::2015-09-22
19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-808::DEBUG::2015-09-22
19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}
Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state
preparing -> state finished
Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-808::DEBUG::2015-09-22
19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting False
Thread-808::DEBUG::2015-09-22
19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Reactor thread::INFO::2015-09-22
19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:52512
Reactor thread::DEBUG::2015-09-22
19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-09-22
19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:52512
Reactor thread::DEBUG::2015-09-22
19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 52512)
BindingXMLRPC::INFO::2015-09-22
19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:52512
Thread-809::INFO::2015-09-22
19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52512 started
Thread-809::INFO::2015-09-22
19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52512 stopped
Thread-810::DEBUG::2015-09-22
19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Thread-811::DEBUG::2015-09-22
19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state init ->
state preparing
Thread-811::INFO::2015-09-22
19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-811::INFO::2015-09-22
19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-811::DEBUG::2015-09-22
19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}
Thread-811::DEBUG::2015-09-22
19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state
preparing -> state finished
Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-811::DEBUG::2015-09-22
19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting False
Thread-811::DEBUG::2015-09-22
19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Reactor thread::INFO::2015-09-22
19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:52513
Reactor thread::DEBUG::2015-09-22
19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-09-22
19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:52513
Reactor thread::DEBUG::2015-09-22
19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 52513)
BindingXMLRPC::INFO::2015-09-22
19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:52513
Thread-812::INFO::2015-09-22
19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52513 started
Thread-812::INFO::2015-09-22
19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52513 stopped
Thread-813::DEBUG::2015-09-22
19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Thread-814::DEBUG::2015-09-22
19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state init ->
state preparing
Thread-814::INFO::2015-09-22
19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-814::INFO::2015-09-22
19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {}
Thread-814::DEBUG::2015-09-22
19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}
Thread-814::DEBUG::2015-09-22
19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state
preparing -> state finished
Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-814::DEBUG::2015-09-22
19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting False
Thread-814::DEBUG::2015-09-22
19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
Reactor thread::INFO::2015-09-22
19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from 127.0.0.1:52515
Reactor thread::DEBUG::2015-09-22
19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11
Reactor thread::INFO::2015-09-22
19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from 127.0.0.1:52515
Reactor thread::DEBUG::2015-09-22
19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
over http detected from ('127.0.0.1', 52515)
BindingXMLRPC::INFO::2015-09-22
19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
request handler for 127.0.0.1:52515
Thread-815::INFO::2015-09-22
19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52515 started
Thread-815::INFO::2015-09-22
19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for 127.0.0.1:52515 stopped
Thread-816::DEBUG::2015-09-22
19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send) Sending
response
gluster logs
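This looks like the FUSE mount log; on a default install these sit
under /var/log/glusterfs/ on the mounting host, named after the mount
point. To locate them (path pattern assumed, adjust to your layout):

    # ls /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log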
+------------------------------------------------------------------------------+
1: volume vmstore-client-0
2: type protocol/client
3: option ping-timeout 42
4: option remote-host sjcstorage01
5: option remote-subvolume /export/vmstore/brick01
6: option transport-type socket
7: option send-gids true
8: end-volume
9:
10: volume vmstore-client-1
11: type protocol/client
12: option ping-timeout 42
13: option remote-host sjcstorage02
14: option remote-subvolume /export/vmstore/brick01
15: option transport-type socket
16: option send-gids true
17: end-volume
18:
19: volume vmstore-client-2
20: type protocol/client
21: option ping-timeout 42
22: option remote-host sjcvhost02
23: option remote-subvolume /export/vmstore/brick01
24: option transport-type socket
25: option send-gids true
26: end-volume
27:
28: volume vmstore-replicate-0
29: type cluster/replicate
30: option arbiter-count 1
31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
32: end-volume
33:
34: volume vmstore-dht
35: type cluster/distribute
36: subvolumes vmstore-replicate-0
37: end-volume
38:
39: volume vmstore-write-behind
40: type performance/write-behind
41: subvolumes vmstore-dht
42: end-volume
43:
44: volume vmstore-read-ahead
45: type performance/read-ahead
46: subvolumes vmstore-write-behind
47: end-volume
48:
49: volume vmstore-readdir-ahead
50: type performance/readdir-ahead
51: subvolumes vmstore-read-ahead
52: end-volume
53:
54: volume vmstore-io-cache
55: type performance/io-cache
56: subvolumes vmstore-readdir-ahead
57: end-volume
58:
59: volume vmstore-quick-read
60: type performance/quick-read
61: subvolumes vmstore-io-cache
62: end-volume
63:
64: volume vmstore-open-behind
65: type performance/open-behind
66: subvolumes vmstore-quick-read
67: end-volume
68:
69: volume vmstore-md-cache
70: type performance/md-cache
71: subvolumes vmstore-open-behind
72: end-volume
73:
74: volume vmstore
75: type debug/io-stats
76: option latency-measurement off
77: option count-fop-hits off
78: subvolumes vmstore-md-cache
79: end-volume
80:
81: volume meta-autoload
82: type meta
83: subvolumes vmstore
84: end-volume
85:
+------------------------------------------------------------------------------+
[2015-09-22 05:29:07.586205] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:29:07.586325] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:29:07.586480] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:29:07.595052] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2015-09-22 05:29:07.595397] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2015-09-22 05:29:07.595576] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2015-09-22 05:29:07.595721] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
Connected to vmstore-client-0, attached to remote volume
'/export/vmstore/brick01'.
[2015-09-22 05:29:07.595738] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.596044] I [MSGID: 108005]
[afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
'vmstore-client-0' came back up; going online.
[2015-09-22 05:29:07.596170] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
Connected to vmstore-client-1, attached to remote volume
'/export/vmstore/brick01'.
[2015-09-22 05:29:07.596189] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.596495] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
Connected to vmstore-client-2, attached to remote volume
'/export/vmstore/brick01'.
[2015-09-22 05:29:07.596506] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.608758] I [fuse-bridge.c:5053:fuse_graph_setup]
0-fuse: switched to graph 0
[2015-09-22 05:29:07.608910] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
Server lk version = 1
[2015-09-22 05:29:07.608936] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
Server lk version = 1
[2015-09-22 05:29:07.608950] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
Server lk version = 1
[2015-09-22 05:29:07.609695] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 2
[2015-09-22 05:29:07.609868] I [fuse-bridge.c:3979:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
kernel 7.22
[2015-09-22 05:29:07.616577] I [MSGID: 109063]
[dht-layout.c:702:dht_layout_normalize] 0-vmstore-dht: Found anomalies
in / (gfid = 00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0
[2015-09-22 05:29:07.620230] I [MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
Setting layout of / with [Subvol_name: vmstore-replicate-0, Err: -1 ,
Start: 0 , Stop: 4294967295 , Hash: 1 ],
[2015-09-22 05:29:08.122415] W [fuse-bridge.c:1230:fuse_err_cbk]
0-glusterfs-fuse: 26: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data
available)
[2015-09-22 05:29:08.137359] I [MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2 with
[Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop:
4294967295 , Hash: 1 ],
[2015-09-22 05:29:08.145835] I [MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
[Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop:
4294967295 , Hash: 1 ],
[2015-09-22 05:30:57.897819] I [MSGID: 100030]
[glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
/usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:30:57.909889] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2015-09-22 05:30:57.923087] I [MSGID: 114020] [client.c:2118:notify]
0-vmstore-client-0: parent translators are ready, attempting connect
on transport
[2015-09-22 05:30:57.925701] I [MSGID: 114020] [client.c:2118:notify]
0-vmstore-client-1: parent translators are ready, attempting connect
on transport
[2015-09-22 05:30:57.927984] I [MSGID: 114020] [client.c:2118:notify]
0-vmstore-client-2: parent translators are ready, attempting connect
on transport
Final graph:
+------------------------------------------------------------------------------+
1: volume vmstore-client-0
2: type protocol/client
3: option ping-timeout 42
4: option remote-host sjcstorage01
5: option remote-subvolume /export/vmstore/brick01
6: option transport-type socket
7: option send-gids true
8: end-volume
9:
10: volume vmstore-client-1
11: type protocol/client
12: option ping-timeout 42
13: option remote-host sjcstorage02
14: option remote-subvolume /export/vmstore/brick01
15: option transport-type socket
16: option send-gids true
17: end-volume
18:
19: volume vmstore-client-2
20: type protocol/client
21: option ping-timeout 42
22: option remote-host sjcvhost02
23: option remote-subvolume /export/vmstore/brick01
24: option transport-type socket
25: option send-gids true
26: end-volume
27:
28: volume vmstore-replicate-0
29: type cluster/replicate
30: option arbiter-count 1
31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
32: end-volume
33:
34: volume vmstore-dht
35: type cluster/distribute
36: subvolumes vmstore-replicate-0
37: end-volume
38:
39: volume vmstore-write-behind
40: type performance/write-behind
41: subvolumes vmstore-dht
42: end-volume
43:
44: volume vmstore-read-ahead
45: type performance/read-ahead
46: subvolumes vmstore-write-behind
47: end-volume
48:
49: volume vmstore-readdir-ahead
50: type performance/readdir-ahead
51: subvolumes vmstore-read-ahead
52: end-volume
53:
54: volume vmstore-io-cache
55: type performance/io-cache
56: subvolumes vmstore-readdir-ahead
57: end-volume
58:
59: volume vmstore-quick-read
60: type performance/quick-read
61: subvolumes vmstore-io-cache
62: end-volume
63:
64: volume vmstore-open-behind
65: type performance/open-behind
66: subvolumes vmstore-quick-read
67: end-volume
68:
69: volume vmstore-md-cache
70: type performance/md-cache
71: subvolumes vmstore-open-behind
72: end-volume
73:
74: volume vmstore
75: type debug/io-stats
76: option latency-measurement off
77: option count-fop-hits off
78: subvolumes vmstore-md-cache
79: end-volume
80:
81: volume meta-autoload
82: type meta
83: subvolumes vmstore
84: end-volume
85:
+------------------------------------------------------------------------------+
[2015-09-22 05:30:57.934021] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:30:57.934145] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:30:57.934491] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:30:57.942198] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2015-09-22 05:30:57.942545] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2015-09-22 05:30:57.942659] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2015-09-22 05:30:57.942797] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
Connected to vmstore-client-0, attached to remote volume
'/export/vmstore/brick01'.
[2015-09-22 05:30:57.942808] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:30:57.943036] I [MSGID: 108005]
[afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
'vmstore-client-0' came back up; going online.
[2015-09-22 05:30:57.943078] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
Connected to vmstore-client-1, attached to remote volume
'/export/vmstore/brick01'.
[2015-09-22 05:30:57.943086] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:30:57.943292] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
Connected to vmstore-client-2, attached to remote volume
'/export/vmstore/brick01'.
[2015-09-22 05:30:57.943302] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:30:57.953887] I [fuse-bridge.c:5053:fuse_graph_setup]
0-fuse: switched to graph 0
[2015-09-22 05:30:57.954071] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
Server lk version = 1
[2015-09-22 05:30:57.954105] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
Server lk version = 1
[2015-09-22 05:30:57.954124] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
Server lk version = 1
[2015-09-22 05:30:57.955282] I [fuse-bridge.c:3979:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
kernel 7.22
[2015-09-22 05:30:57.955738] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 2
[2015-09-22 05:30:57.970232] I [fuse-bridge.c:4900:fuse_thread_proc]
0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:30:57.970834] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f1872a09785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f1872a09609] ) 0-:
received signum (15), shutting down
[2015-09-22 05:30:57.970848] I [fuse-bridge.c:5595:fini] 0-fuse:
Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:30:58.420973] I [fuse-bridge.c:4900:fuse_thread_proc]
0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:30:58.421355] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f826933e785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f826933e609] ) 0-:
received signum (15), shutting down
[2015-09-22 05:30:58.421369] I [fuse-bridge.c:5595:fini] 0-fuse:
Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:31:09.534410] I [MSGID: 100030]
[glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
/usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:31:09.545686] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2015-09-22 05:31:09.553019] I [MSGID: 114020] [client.c:2118:notify]
0-vmstore-client-0: parent translators are ready, attempting connect
on transport
[2015-09-22 05:31:09.555552] I [MSGID: 114020] [client.c:2118:notify]
0-vmstore-client-1: parent translators are ready, attempting connect
on transport
[2015-09-22 05:31:09.557989] I [MSGID: 114020] [client.c:2118:notify]
0-vmstore-client-2: parent translators are ready, attempting connect
on transport
Final graph:
+------------------------------------------------------------------------------+
1: volume vmstore-client-0
2: type protocol/client
3: option ping-timeout 42
4: option remote-host sjcstorage01
5: option remote-subvolume /export/vmstore/brick01
6: option transport-type socket
7: option send-gids true
8: end-volume
9:
10: volume vmstore-client-1
11: type protocol/client
12: option ping-timeout 42
13: option remote-host sjcstorage02
14: option remote-subvolume /export/vmstore/brick01
15: option transport-type socket
16: option send-gids true
17: end-volume
18:
19: volume vmstore-client-2
20: type protocol/client
21: option ping-timeout 42
22: option remote-host sjcvhost02
23: option remote-subvolume /export/vmstore/brick01
24: option transport-type socket
25: option send-gids true
26: end-volume
27:
28: volume vmstore-replicate-0
29: type cluster/replicate
30: option arbiter-count 1
31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
32: end-volume
33:
34: volume vmstore-dht
35: type cluster/distribute
36: subvolumes vmstore-replicate-0
37: end-volume
38:
39: volume vmstore-write-behind
40: type performance/write-behind
41: subvolumes vmstore-dht
42: end-volume
43:
44: volume vmstore-read-ahead
45: type performance/read-ahead
46: subvolumes vmstore-write-behind
47: end-volume
48:
49: volume vmstore-readdir-ahead
50: type performance/readdir-ahead
51: subvolumes vmstore-read-ahead
52: end-volume
53:
54: volume vmstore-io-cache
55: type performance/io-cache
56: subvolumes vmstore-readdir-ahead
57: end-volume
58:
59: volume vmstore-quick-read
60: type performance/quick-read
61: subvolumes vmstore-io-cache
62: end-volume
63:
64: volume vmstore-open-behind
65: type performance/open-behind
66: subvolumes vmstore-quick-read
67: end-volume
68:
69: volume vmstore-md-cache
70: type performance/md-cache
71: subvolumes vmstore-open-behind
72: end-volume
73:
74: volume vmstore
75: type debug/io-stats
76: option latency-measurement off
77: option count-fop-hits off
78: subvolumes vmstore-md-cache
79: end-volume
80:
81: volume meta-autoload
82: type meta
83: subvolumes vmstore
84: end-volume
85:
+------------------------------------------------------------------------------+
[2015-09-22 05:31:09.563262] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:31:09.563431] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:31:09.563877] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:31:09.572443] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2015-09-22 05:31:09.572599] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2015-09-22 05:31:09.572742] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2015-09-22 05:31:09.573165] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
Connected to vmstore-client-1, attached to remote volume
'/export/vmstore/brick01'.
[2015-09-22 05:31:09.573186] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.573395] I [MSGID: 108005]
[afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
'vmstore-client-1' came back up; going online.
[2015-09-22 05:31:09.573427] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
Connected to vmstore-client-0, attached to remote volume
'/export/vmstore/brick01'.
[2015-09-22 05:31:09.573435] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.573754] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
Connected to vmstore-client-2, attached to remote volume
'/export/vmstore/brick01'.
[2015-09-22 05:31:09.573783] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.577192] I [fuse-bridge.c:5053:fuse_graph_setup]
0-fuse: switched to graph 0
[2015-09-22 05:31:09.577302] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
Server lk version = 1
[2015-09-22 05:31:09.577325] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
Server lk version = 1
[2015-09-22 05:31:09.577339] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
Server lk version = 1
[2015-09-22 05:31:09.578125] I [fuse-bridge.c:3979:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
kernel 7.22
[2015-09-22 05:31:09.578636] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 2
[2015-09-22 05:31:10.073698] I [fuse-bridge.c:4900:fuse_thread_proc]
0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:31:10.073977] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f6b9d0f2785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f6b9d0f2609] ) 0-:
received signum (15), shutting down
[2015-09-22 05:31:10.073993] I [fuse-bridge.c:5595:fini] 0-fuse:
Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:31:20.184700] I [MSGID: 100030]
[glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
/usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:31:20.194928] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2015-09-22 05:31:20.200701] I [MSGID: 114020] [client.c:2118:notify]
0-vmstore-client-0: parent translators are ready, attempting connect
on transport
[2015-09-22 05:31:20.203110] I [MSGID: 114020] [client.c:2118:notify]
0-vmstore-client-1: parent translators are ready, attempting connect
on transport
[2015-09-22 05:31:20.205708] I [MSGID: 114020] [client.c:2118:notify]
0-vmstore-client-2: parent translators are ready, attempting connect
on transport
Final graph:
Hope this helps.
thanks again
Brett Stevens
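A note on the volume layout: the "Final graph" dumps above show
vmstore-replicate-0 built from three bricks with "option
arbiter-count 1". A volume of that shape would have been created with
something along these lines (a sketch reconstructed from the graph,
not necessarily the exact command that was used):

  gluster volume create vmstore replica 3 arbiter 1 \
      sjcstorage01:/export/vmstore/brick01 \
      sjcstorage02:/export/vmstore/brick01 \
      sjcvhost02:/export/vmstore/brick01
  gluster volume start vmstore

Here sjcvhost02 carries the arbiter brick, which holds only metadata
for quorum purposes rather than file data.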
On Tue, Sep 22, 2015 at 10:14 PM, Sahina Bose <sabose@redhat.com> wrote:
On 09/22/2015 02:17 PM, Brett Stevens wrote:
> Hi. First time on the lists. I've searched for this with no luck,
> so sorry if this has been covered before.
>
> I'm working with the latest 3.6 beta with the following
> infrastructure.
>
> 1 management host (to be used for a number of tasks, so I chose not
> to use self-hosted; we are a school and need to keep an eye
> on hardware costs)
> 2 compute nodes
> 2 gluster nodes
>
> so far I have built one gluster volume using the gluster cli to give me
> 2 nodes and one arbiter node (management host)
>
> so far, every time I create a volume, it shows up straight away in
> the ovirt gui. However, no matter what I try, I cannot create or
> import it as a data domain.
>
> the current error in the ovirt gui is "Error while executing
> action AddGlusterFsStorageDomain: Error creating a storage
> domain's metadata"
Please provide vdsm and gluster logs
>
> the logs continuously roll the following errors:
>
> Scheduler_Worker-53) [] START,
> GlusterVolumesListVDSCommand(HostName = sjcstorage02,
> GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf
>
> 2015-09-22 03:57:29,903 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-53) [] Could not associate brick
> 'sjcstorage01:/export/vmstore/brick01' of volume
> '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no
> gluster network found in cluster
> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
What is the hostname provided in ovirt engine for sjcstorage01?
Does this host have multiple nics?
Could you provide the output of gluster volume info?
Please note that these errors are not related to the error in
creating the storage domain. However, they could prevent you from
monitoring the state of the gluster volume from oVirt.
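For reference, the details requested above can be gathered on the
storage nodes with commands along these lines (vmstore being the
volume name used in this thread):

  gluster volume info vmstore   # bricks, options, transport
  gluster peer status           # how the peers resolve one another
  hostname; ip addr             # local hostname vs. configured nics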
> 2015-09-22 03:57:29,905 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-53) [] Could not associate brick
> 'sjcstorage02:/export/vmstore/brick01' of volume
> '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no
> gluster network found in cluster
> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 03:57:29,905 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-53) [] Could not add brick
> 'sjcvhost02:/export/vmstore/brick01' to volume
> '878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid
> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 03:57:29,905 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-53) [] FINISH,
> GlusterVolumesListVDSCommand, return:
>
> {878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
> log id: 24198fbf
>
>
> I'm new to ovirt and gluster, so any help would be great
>
>
> thanks
>
>
> Brett Stevens
>
>
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
--------------090502070204020205010802
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit
<html>
<head>
<meta content="text/html; charset=utf-8"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
+ ovirt-users<br>
<br>
Some clarity on your setup - <br>
<span class="">sjcvhost03 - is this your arbiter node and ovirt
management node? And are you running a compute + storage on the
same nodes - i.e, </span><span class="">sjcstorage01,
</span><span
class="">sjcstorage02, </span><span
class="">sjcvhost03
(arbiter).<br>
<br>
</span><br>
<span class=""> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587<br>
<br>
- fails with </span><span class="">Error creating a storage
domain's metadata: ("create meta file 'outbox' failed: [Errno 5]
Input/output error",<br>
<br>
Are the vdsm logs you provided from </span><span
class="">sjcvhost03?
There are no errors to be seen in the gluster log you provided.
Could you provide mount log from </span><span
class=""><span
class="">sjcvhost03</span> (at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log most
likely)<br>
If possible, /var/log/glusterfs/* from the 3 storage nodes.<br>
<br>
thanks<br>
sahina<br>
<br>
</span>
<div class="moz-cite-prefix">On 09/23/2015 05:02 AM, Brett Stevens
wrote:<br>
</div>
<blockquote
cite="mid:CAK02sjsh7JXf56xuMSEW_knZcNem9FNsdjEhd3NAQOQiLjeTrA@mail.gmail.com"
type="cite">
<div dir="ltr">Hi Sahina,
<div><br>
</div>
<div>as requested here is some logs taken during a domain
create.</div>
<div><br>
</div>
<div>
<p class=""><span class="">2015-09-22
18:46:44,320 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id:
2205ff1</span></p>
<p class=""><span class="">2015-09-22
18:46:44,413 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate
brick 'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22
18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate
brick 'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22
18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in
cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22
18:46:44,418 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
log id: 2205ff1</span></p>
<p class=""><span class="">2015-09-22
18:46:45,215 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock Acquired to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22
18:46:45,230 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Running command:
AddStorageServerConnectionCommand internal: false.
Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction
group CREATE_STORAGE_DOMAIN with role type ADMIN</span></p>
<p class=""><span class="">2015-09-22
18:46:45,233 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='null',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null',
nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id:
6a112292</span></p>
<p class=""><span class="">2015-09-22
18:46:48,065 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] FINISH,
ConnectStorageServerVDSCommand, return:
{00000000-0000-0000-0000-000000000000=0}, log id:
6a112292</span></p>
<p class=""><span class="">2015-09-22
18:46:48,073 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock freed to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22
18:46:48,188 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Running command:
AddGlusterFsStorageDomainCommand internal: false. Entities
affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
SystemAction group CREATE_STORAGE_DOMAIN with role type
ADMIN</span></p>
<p class=""><span class="">2015-09-22
18:46:48,206 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null',
nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id:
38a2b0d</span></p>
<p class=""><span class="">2015-09-22
18:46:48,219 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] FINISH,
ConnectStorageServerVDSCommand, return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id:
38a2b0d</span></p>
<p class=""><span class="">2015-09-22
18:46:48,221 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] START,
CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id:
b9fe587</span></p>
<p class=""><span class="">2015-09-22
18:46:48,744 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM sjcvhost03
command failed: Error creating a storage domain's
metadata: ("create meta file 'outbox' failed: [Errno 5]
Input/output error",)</span></p>
<p class=""><span class="">2015-09-22
18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
return value 'StatusOnlyReturnForXmlRpc
[status=StatusForXmlRpc [code=362, message=Error creating
a storage domain's metadata: ("create meta file 'outbox'
failed: [Errno 5] Input/output error",)]]'</span></p>
<p class=""><span class="">2015-09-22
18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] HostName = sjcvhost03</span></p>
<p class=""><span class="">2015-09-22
18:46:48,745 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'})' execution failed:
VDSGenericException: VDSErrorException: Failed in
vdscommand to CreateStorageDomainVDS, error = Error
creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output
error",)</span></p>
<p class=""><span class="">2015-09-22
18:46:48,745 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] FINISH,
CreateStorageDomainVDSCommand, log id: b9fe587</span></p>
<p class=""><span class="">2015-09-22
18:46:48,745 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
failed: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed in
vdscommand to CreateStorageDomainVDS, error = Error
creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output error",) (Failed
with error StorageDomainMetadataCreationError and code
362)</span></p>
<p class=""><span class="">2015-09-22
18:46:48,755 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p class=""><span class="">2015-09-22
18:46:48,758 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p class=""><span class="">2015-09-22
18:46:48,769 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Transaction rolled-back for
command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.</span></p>
<p class=""><span class="">2015-09-22
18:46:48,784 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: 6410419, Job
ID: 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack:
null, Custom Event ID: -1, Message: Failed to add Storage
Domain sjcvmstore. (User: admin@internal)</span></p>
<p class=""><span class="">2015-09-22
18:46:48,996 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock Acquired to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22
18:46:49,018 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Running command:
RemoveStorageServerConnectionCommand internal: false.
Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction
group CREATE_STORAGE_DOMAIN with role type ADMIN</span></p>
<p class=""><span class="">2015-09-22
18:46:49,024 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Removing connection
'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from
database </span></p>
<p class=""><span class="">2015-09-22
18:46:49,026 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] START,
DisconnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null',
nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id:
39d3b568</span></p>
<p class=""><span class="">2015-09-22
18:46:49,248 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] FINISH,
DisconnectStorageServerVDSCommand, return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id:
39d3b568</span></p>
<p class=""><span class="">2015-09-22
18:46:49,252 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock freed to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22
18:46:49,431 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id:
17014ae8</span></p>
<p class=""><span class="">2015-09-22
18:46:49,511 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate
brick 'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22
18:46:49,515 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate
brick 'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22
18:46:49,516 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in
cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22
18:46:49,516 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
log id: 17014ae8</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">ovirt engine thinks
that
sjcstorage01 is sjcstorage01, its all testbed at the
moment and is all short names, defined in /etc/hosts (all
copied to each server for consistancy)</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">volume info for
vmstore is</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">Status of volume:
vmstore</span></p>
<p class=""><span class="">Gluster process
TCP Port RDMA Port Online Pid</span></p>
<p class=""><span
class="">------------------------------------------------------------------------------</span></p>
<p class=""><span class="">Brick
sjcstorage01:/export/vmstore/brick01 49157 0
Y 7444 </span></p>
<p class=""><span class="">Brick
sjcstorage02:/export/vmstore/brick01 49157 0
Y 4063 </span></p>
<p class=""><span class="">Brick
sjcvhost02:/export/vmstore/brick01 49156 0
Y 3243 </span></p>
<p class=""><span class="">NFS Server on
localhost
2049 0 Y 3268 </span></p>
<p class=""><span class="">Self-heal Daemon on
localhost
N/A N/A Y 3284 </span></p>
<p class=""><span class="">NFS Server on
sjcstorage01
2049 0 Y 7463 </span></p>
<p class=""><span class="">Self-heal Daemon on
sjcstorage01
N/A N/A Y 7472 </span></p>
<p class=""><span class="">NFS Server on
sjcstorage02
2049 0 Y 4082 </span></p>
<p class=""><span class="">Self-heal Daemon on
sjcstorage02
N/A N/A Y 4090 </span></p>
<p class=""><span
class=""> </span></p>
<p class=""><span class="">Task Status of Volume
vmstore</span></p>
<p class=""><span
class="">------------------------------------------------------------------------------</span></p>
<p class="">
</p>
<p class=""><span class="">There are no active
volume tasks</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">vdsm logs from time
the domain is
added</span></p>
<p class=""><span class=""><br>
</span></p>
<p class="">hread-789::DEBUG::2015-09-22
19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from
state init -> state preparing</p>
<p class="">Thread-790::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-790::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from
state preparing -> state finished</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting
False</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52510)</p>
<p class="">BindingXMLRPC::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Thread-791::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a>
started</p>
<p class="">Thread-791::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a>
stopped</p>
<p class="">Thread-792::DEBUG::2015-09-22
19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from
state init -> state preparing</p>
<p class="">Thread-793::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-793::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from
state preparing -> state finished</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting
False</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52511)</p>
<p class="">BindingXMLRPC::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Thread-794::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a>
started</p>
<p class="">Thread-794::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a>
stopped</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'00000000-0000-0000-0000-000000000000', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType':
7}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from
state init -> state preparing</p>
<p class="">Thread-795::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'00000000-0000-0000-0000-000000000000',
u'connection': u'sjcstorage01:/vmstore', u'iqn':
u'',
u'user': u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}],
options=None)</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir)
Creating directory:
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode:
None</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/systemd-run --scope
--slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
backup-volfile-servers=sjcstorage02:sjcvhost02
sjcstorage01:/vmstore
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd
None)</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*</p>
<p class="">Thread-796::DEBUG::2015-09-22
19:12:35,707::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-797::DEBUG::2015-09-22
19:12:35,712::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: ()</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer)
knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6:
storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225:
storage.glusterSD.findDomain}</p>
<p class="">Thread-795::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished:
{'statuslist': [{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from
state preparing -> state finished</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting
False</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with
[{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType':
7}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from
state init -> state preparing</p>
<p class="">Thread-798::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection': u'sjcstorage01:/vmstore', u'iqn':
u'',
u'user': u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}],
options=None)</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: ()</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer)
knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6:
storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225:
storage.glusterSD.findDomain}</p>
<p class="">Thread-798::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from
state preparing -> state finished</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting
False</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StorageDomain.create' in bridge with {u'name':
u'sjcvmstore01', u'domainType': 7, u'domainClass': 1,
u'typeArgs': u'sjcstorage01:/vmstore', u'version':
u'3',
u'storagedomainID':
u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state init -> state preparing</p>
<p class="">Thread-801::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and
protect: createStorageDomain(storageType=7,
sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
domainName=u'sjcvmstore01',
typeSpecificArg=u'sjcstorage01:/vmstore', domClass=1,
domVersion=u'3', options=None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.sdc.refreshStorage)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.iscsi.rescan)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing
SCSI scan, this will take up to 30 seconds</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.hba.rescan)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan</p>
<p class="">Thread-802::DEBUG::2015-09-22
19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan)
SUCCESS: <err> = ''; <rc> = 0</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,946::utils::679::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd)
/usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices {
preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0
filter = [ '\''r|.*|'\'' ] } global {
locking_type=1
prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 }
backup { retain_min = 50 retain_days = 0 } ' --noheadings
--units b --nosuffix --separator '|' --ignoreskippedcluster
-o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED:
<err> = ' WARNING: lvmetad is running but disabled.
Restart lvmetad before enabling it!\n Volume group
"c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n Cannot
process volume group
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5</p>
<p class="">Thread-801::WARNING::2015-09-22
19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs
failed: 5 [] [' WARNING: lvmetad is running but disabled.
Restart lvmetad before enabling it!', ' Volume group
"c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found', '
Cannot
process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' released the operation
mutex</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found</p>
<p class="">Traceback (most recent call last):</p>
<p class=""> File "/usr/share/vdsm/storage/sdc.py",
line 142,
in _findDomain</p>
<p class=""> dom = findMethod(sdUUID)</p>
<p class=""> File "/usr/share/vdsm/storage/sdc.py",
line 172,
in _findUnfetchedDomain</p>
<p class=""> raise
se.StorageDomainDoesNotExist(sdUUID)</p>
<p class="">StorageDomainDoesNotExist: Storage domain does not
exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)</p>
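(The vgs failure above is most likely expected noise rather than the root cause: vdsm's storage domain cache probes for a block (LVM) domain with the new UUID before falling back to file domains, and for a GlusterFS domain that is still being created that LVM lookup can only fail. The real failure is the metadata write that follows.)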
<p class="">Thread-801::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
domainName=sjcvmstore01 remotePath=sjcstorage01:/vmstore
domClass=1</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,015::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected
error</p>
<p class="">Traceback (most recent call last):</p>
<p class=""> File "/usr/share/vdsm/storage/task.py",
line
873, in _run</p>
<p class=""> return fn(*args, **kargs)</p>
<p class=""> File "/usr/share/vdsm/logUtils.py", line
49, in
wrapper</p>
<p class=""> res = f(*args, **kwargs)</p>
<p class=""> File "/usr/share/vdsm/storage/hsm.py",
line
2697, in createStorageDomain</p>
<p class=""> domVersion)</p>
<p class=""> File "/usr/share/vdsm/storage/nfsSD.py",
line
84, in create</p>
<p class=""> remotePath, storageType, version)</p>
<p class=""> File
"/usr/share/vdsm/storage/fileSD.py", line
264, in _prepareMetadata</p>
<p class=""> "create meta file '%s' failed:
%s" %
(metaFile, str(e)))</p>
<p class="">StorageDomainMetadataCreationError: Error creating
a storage domain's metadata: ("create meta file 'outbox'
failed: [Errno 5] Input/output error",)</p>
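(The EIO is raised while vdsm writes its per-domain metadata files (outbox, inbox, metadata) under dom_md on the mount. A quick way to check whether the fault reproduces outside vdsm is to create a small file on the gluster mount by hand. A minimal sketch, assuming the mount path from the logs; the probe filename is made up:

import os

MOUNT = "/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore"  # path taken from the vdsm log above
PROBE = os.path.join(MOUNT, "io_probe")  # hypothetical test file

try:
    # Create and write a small file, roughly the way vdsm creates
    # its metadata files.
    fd = os.open(PROBE, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o660)
    os.write(fd, b"probe\n")
    os.close(fd)
    os.unlink(PROBE)
    print("plain write OK - the EIO may be specific to vdsm's access pattern")
except OSError as e:
    print("write failed with errno %d (%s)" % (e.errno, e.strerror))

If the probe also fails with errno 5, the problem is in the gluster mount itself and not in oVirt.)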
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01',
u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping
task</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in
state preparing (force False)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting
True</p>
<p class="">Thread-801::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task
is aborted: "Error creating a storage domain's metadata" -
code 362</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare:
aborted: Error creating a storage domain's metadata</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting
True</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort:
force False</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state preparing -> state aborting</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting:
recover policy none</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state aborting -> state failed</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': 'Error creating a storage
domain\'s
metadata: ("create meta file \'outbox\' failed: [Errno 5]
Input/output error",)', 'code': 362}}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.disconnectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType':
7}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from
state init -> state preparing</p>
<p class="">Thread-807::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and
protect: disconnectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection': u'sjcstorage01:/vmstore', u'iqn':
u'',
u'user': u'', u'tpgt': u'1',
u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}],
options=None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/umount -f -l
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd
None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.sdc.refreshStorage)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.iscsi.rescan)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing
SCSI scan, this will take up to 30 seconds</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.hba.rescan)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan)
SUCCESS: <err> = ''; <rc> = 0</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,383::utils::679::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and
protect: disconnectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from
state preparing -> state finished</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting
False</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.disconnectStorageServer' in bridge with
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from
state init -> state preparing</p>
<p class="">Thread-808::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-808::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from
state preparing -> state finished</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting
False</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52512)</p>
<p class="">BindingXMLRPC::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a></p>
<p class="">Thread-809::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a>
started</p>
<p class="">Thread-809::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a>
stopped</p>
<p class="">Thread-810::DEBUG::2015-09-22
19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from
state init -> state preparing</p>
<p class="">Thread-811::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-811::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from
state preparing -> state finished</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting
False</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52513)</p>
<p class="">BindingXMLRPC::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a></p>
<p class="">Thread-812::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a>
started</p>
<p class="">Thread-812::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a>
stopped</p>
<p class="">Thread-813::DEBUG::2015-09-22
19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from
state init -> state preparing</p>
<p class="">Thread-814::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-814::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from
state preparing -> state finished</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting
False</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52515)</p>
<p class="">BindingXMLRPC::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a></p>
<p class="">Thread-815::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a>
started</p>
<p class="">Thread-815::<a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a>
stopped</p>
<p class=""><span
class=""></span></p>
<p class="">Thread-816::DEBUG::2015-09-22
19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>

gluster logs:

<p class=""><span
class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class=""> 1: volume
vmstore-client-0</span></p>
<p class=""><span class=""> 2: type
protocol/client</span></p>
<p class=""><span class=""> 3: option
ping-timeout 42</span></p>
<p class=""><span class=""> 4: option
remote-host
sjcstorage01</span></p>
<p class=""><span class=""> 5: option
remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 6: option
transport-type
socket</span></p>
<p class=""><span class=""> 7: option
send-gids true</span></p>
<p class=""><span class=""> 8:
end-volume</span></p>
<p class=""><span class=""> 9:
</span></p>
<p class=""><span class=""> 10: volume
vmstore-client-1</span></p>
<p class=""><span class=""> 11: type
protocol/client</span></p>
<p class=""><span class=""> 12: option
ping-timeout 42</span></p>
<p class=""><span class=""> 13: option
remote-host
sjcstorage02</span></p>
<p class=""><span class=""> 14: option
remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 15: option
transport-type
socket</span></p>
<p class=""><span class=""> 16: option
send-gids true</span></p>
<p class=""><span class=""> 17:
end-volume</span></p>
<p class=""><span class=""> 18:
</span></p>
<p class=""><span class=""> 19: volume
vmstore-client-2</span></p>
<p class=""><span class=""> 20: type
protocol/client</span></p>
<p class=""><span class=""> 21: option
ping-timeout 42</span></p>
<p class=""><span class=""> 22: option
remote-host
sjcvhost02</span></p>
<p class=""><span class=""> 23: option
remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 24: option
transport-type
socket</span></p>
<p class=""><span class=""> 25: option
send-gids true</span></p>
<p class=""><span class=""> 26:
end-volume</span></p>
<p class=""><span class=""> 27:
</span></p>
<p class=""><span class=""> 28: volume
vmstore-replicate-0</span></p>
<p class=""><span class=""> 29: type
cluster/replicate</span></p>
<p class=""><span class=""> 30: option
arbiter-count 1</span></p>
<p class=""><span class=""> 31: subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class=""> 32:
end-volume</span></p>
<p class=""><span class=""> 33:
</span></p>
<p class=""><span class=""> 34: volume
vmstore-dht</span></p>
<p class=""><span class=""> 35: type
cluster/distribute</span></p>
<p class=""><span class=""> 36: subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class=""> 37:
end-volume</span></p>
<p class=""><span class=""> 38:
</span></p>
<p class=""><span class=""> 39: volume
vmstore-write-behind</span></p>
<p class=""><span class=""> 40: type
performance/write-behind</span></p>
<p class=""><span class=""> 41: subvolumes
vmstore-dht</span></p>
<p class=""><span class=""> 42:
end-volume</span></p>
<p class=""><span class=""> 43:
</span></p>
<p class=""><span class=""> 44: volume
vmstore-read-ahead</span></p>
<p class=""><span class=""> 45: type
performance/read-ahead</span></p>
<p class=""><span class=""> 46: subvolumes
vmstore-write-behind</span></p>
<p class=""><span class=""> 47:
end-volume</span></p>
<p class=""><span class=""> 48:
</span></p>
<p class=""><span class=""> 49: volume
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 50: type
performance/readdir-ahead</span></p>
<p class=""><span class="">52:
end-volume</span></p>
<p class=""><span class=""> 53:
</span></p>
<p class=""><span class=""> 54: volume
vmstore-io-cache</span></p>
<p class=""><span class=""> 55: type
performance/io-cache</span></p>
<p class=""><span class=""> 56: subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 57:
end-volume</span></p>
<p class=""><span class=""> 58:
</span></p>
<p class=""><span class=""> 59: volume
vmstore-quick-read</span></p>
<p class=""><span class=""> 60: type
performance/quick-read</span></p>
<p class=""><span class=""> 61: subvolumes
vmstore-io-cache</span></p>
<p class=""><span class=""> 62:
end-volume</span></p>
<p class=""><span class=""> 63:
</span></p>
<p class=""><span class=""> 64: volume
vmstore-open-behind</span></p>
<p class=""><span class=""> 65: type
performance/open-behind</span></p>
<p class=""><span class=""> 66: subvolumes
vmstore-quick-read</span></p>
<p class=""><span class=""> 67:
end-volume</span></p>
<p class=""><span class=""> 68:
</span></p>
<p class=""><span class=""> 69: volume
vmstore-md-cache</span></p>
<p class=""><span class=""> 70: type
performance/md-cache</span></p>
<p class=""><span class=""> 71: subvolumes
vmstore-open-behind</span></p>
<p class=""><span class=""> 72:
end-volume</span></p>
<p class=""><span class=""> 73:
</span></p>
<p class=""><span class=""> 74: volume
vmstore</span></p>
<p class=""><span class=""> 75: type
debug/io-stats</span></p>
<p class=""><span class=""> 76: option
latency-measurement
off</span></p>
<p class=""><span class=""> 77: option
count-fop-hits off</span></p>
<p class=""><span class=""> 78: subvolumes
vmstore-md-cache</span></p>
<p class=""><span class=""> 79:
end-volume</span></p>
<p class=""><span class=""> 80:
</span></p>
<p class=""><span class=""> 81: volume
meta-autoload</span></p>
<p class=""><span class=""> 82: type
meta</span></p>
<p class=""><span class=""> 83: subvolumes
vmstore</span></p>
<p class=""><span class=""> 84:
end-volume</span></p>
<p class=""><span class=""> 85:
</span></p>
<p class=""><span
class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.586205] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.586325] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.586480] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.595052] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.595397] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.595576] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.595721] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume
'/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.595738] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.596044] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.596170] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume
'/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.596189] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.596495] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume :</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.596189] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.596495] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume
'/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.596506] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.608758] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.608910] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.608936] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.608950] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.609695] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.609868] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel
7.22</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.616577] I
[MSGID: 109063] [dht-layout.c:702:dht_layout_normalize]
0-vmstore-dht: Found anomalies in / (gfid =
00000000-0000-0000-0000-000000000001). Holes=1
overlaps=0</span></p>
<p class=""><span class="">[2015-09-22
05:29:07.620230] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of / with [Subvol_name:
vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295
, Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22
05:29:08.122415] W
[fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse: 26:
REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data
available)</span></p>
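(__DIRECT_IO_TEST__ is the file oVirt creates to verify that the mount supports direct I/O. The REMOVEXATTR failure above is ENODATA - the attribute simply is not there - and is usually harmless; but if direct I/O itself were broken, it would surface as exactly the kind of EIO seen in the vdsm log. A rough standalone probe, assuming the mount path from the logs; the probe filename is invented:

import mmap
import os

MOUNT = "/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore"  # assumed mount path
PATH = os.path.join(MOUNT, "direct_io_probe")  # hypothetical test file

# O_DIRECT requires an aligned buffer; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, 4096)
buf.write(b"\0" * 4096)

fd = os.open(PATH, os.O_CREAT | os.O_WRONLY | os.O_DIRECT, 0o660)
try:
    os.write(fd, buf)  # EINVAL or EIO here points at a direct-I/O problem on the mount
finally:
    os.close(fd)
    os.unlink(PATH)
print("O_DIRECT write OK")
)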
<p class=""><span class="">[2015-09-22
05:29:08.137359] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2 with [Subvol_name:
vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295
, Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22
05:29:08.145835] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
[Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 ,
Stop: 4294967295 , Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22
05:30:57.897819] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.909889] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.923087] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.925701] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.927984] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">Final
graph:</span></p>
<p class=""><span
class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class=""> 1: volume
vmstore-client-0</span></p>
<p class=""><span class=""> 2: type
protocol/client</span></p>
<p class=""><span class=""> 3: option
ping-timeout 42</span></p>
<p class=""><span class=""> 4: option
remote-host
sjcstorage01</span></p>
<p class=""><span class=""> 5: option
remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 6: option
transport-type
socket</span></p>
<p class=""><span class=""> 7: option
send-gids true</span></p>
<p class=""><span class=""> 8:
end-volume</span></p>
<p class=""><span class=""> 9:
</span></p>
<p class=""><span class=""> 10: volume
vmstore-client-1</span></p>
<p class=""><span class=""> 11: type
protocol/client</span></p>
<p class=""><span class=""> 12: option
ping-timeout 42</span></p>
<p class=""><span class=""> 13: option
remote-host
sjcstorage02</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class=""> 14: option
remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 15: option
transport-type
socket</span></p>
<p class=""><span class=""> 16: option
send-gids true</span></p>
<p class=""><span class=""> 17:
end-volume</span></p>
<p class=""><span class=""> 18:
</span></p>
<p class=""><span class=""> 19: volume
vmstore-client-2</span></p>
<p class=""><span class=""> 20: type
protocol/client</span></p>
<p class=""><span class=""> 21: option
ping-timeout 42</span></p>
<p class=""><span class=""> 22: option
remote-host
sjcvhost02</span></p>
<p class=""><span class=""> 23: option
remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 24: option
transport-type
socket</span></p>
<p class=""><span class=""> 25: option
send-gids true</span></p>
<p class=""><span class=""> 26:
end-volume</span></p>
<p class=""><span class=""> 27:
</span></p>
<p class=""><span class=""> 28: volume
vmstore-replicate-0</span></p>
<p class=""><span class=""> 29: type
cluster/replicate</span></p>
<p class=""><span class=""> 30: option
arbiter-count 1</span></p>
<p class=""><span class=""> 31: subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class=""> 32:
end-volume</span></p>
<p class=""><span class=""> 33:
</span></p>
<p class=""><span class=""> 34: volume
vmstore-dht</span></p>
<p class=""><span class=""> 35: type
cluster/distribute</span></p>
<p class=""><span class=""> 36: subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class=""> 37:
end-volume</span></p>
<p class=""><span class=""> 38:
</span></p>
<p class=""><span class=""> 39: volume
vmstore-write-behind</span></p>
<p class=""><span class=""> 40: type
performance/write-behind</span></p>
<p class=""><span class=""> 41: subvolumes
vmstore-dht</span></p>
<p class=""><span class=""> 42:
end-volume</span></p>
<p class=""><span class=""> 43:
</span></p>
<p class=""><span class=""> 44: volume
vmstore-read-ahead</span></p>
<p class=""><span class=""> 45: type
performance/read-ahead</span></p>
<p class=""><span class=""> 46: subvolumes
vmstore-write-behind</span></p>
<p class=""><span class=""> 47:
end-volume</span></p>
<p class=""><span class=""> 48:
</span></p>
<p class=""><span class=""> 49: volume
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 50: type
performance/readdir-ahead</span></p>
<p class=""><span class=""> 51: subvolumes
vmstore-read-ahead</span></p>
<p class=""><span class=""> 52:
end-volume</span></p>
<p class=""><span class=""> 53:
</span></p>
<p class=""><span class=""> 54: volume
vmstore-io-cache</span></p>
<p class=""><span class=""> 55: type
performance/io-cache</span></p>
<p class=""><span class=""> 56: subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 57:
end-volume</span></p>
<p class=""><span class=""> 58:
</span></p>
<p class=""><span class=""> 59: volume
vmstore-quick-read</span></p>
<p class=""><span class=""> 60: type
performance/quick-read</span></p>
<p class=""><span class=""> 61: subvolumes
vmstore-io-cache</span></p>
<p class=""><span class=""> 62:
end-volume</span></p>
<p class=""><span class=""> 63:
</span></p>
<p class=""><span class=""> 64: volume
vmstore-open-behind</span></p>
<p class=""><span class=""> 65: type
performance/open-behind</span></p>
<p class=""><span class=""> 66: subvolumes
vmstore-quick-read</span></p>
<p class=""><span class=""> 67:
end-volume</span></p>
<p class=""><span class=""> 68:
</span></p>
<p class=""><span class=""> 69: volume
vmstore-md-cache</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class=""> 70: type
performance/md-cache</span></p>
<p class=""><span class=""> 71: subvolumes
vmstore-open-behind</span></p>
<p class=""><span class=""> 72:
end-volume</span></p>
<p class=""><span class=""> 73:
</span></p>
<p class=""><span class=""> 74: volume
vmstore</span></p>
<p class=""><span class=""> 75: type
debug/io-stats</span></p>
<p class=""><span class=""> 76: option
latency-measurement
off</span></p>
<p class=""><span class=""> 77: option
count-fop-hits off</span></p>
<p class=""><span class=""> 78: subvolumes
vmstore-md-cache</span></p>
<p class=""><span class=""> 79:
end-volume</span></p>
<p class=""><span class=""> 80:
</span></p>
<p class=""><span class=""> 81: volume
meta-autoload</span></p>
<p class=""><span class=""> 82: type
meta</span></p>
<p class=""><span class=""> 83: subvolumes
vmstore</span></p>
<p class=""><span class=""> 84:
end-volume</span></p>
<p class=""><span class=""> 85:
</span></p>
<p class=""><span
class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.934021] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.934145] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.934491] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.942198] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.942545] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.942659] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.942797] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume
'/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.942808] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.943036] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.943078] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume
'/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.943086] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.943292] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume
'/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.943302] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.953887] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.954071] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.954105] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.954124] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.955282] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel
7.22</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.955738] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.970232] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.970834] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f1872a09785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f1872a09609] ) 0-: received signum (15), shutting
down</span></p>
<p class=""><span class="">[2015-09-22
05:30:57.970848] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22
05:30:58.420973] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22
05:30:58.421355] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f826933e785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f826933e609] ) 0-: received signum (15), shutting
down</span></p>
<p class=""><span class="">[2015-09-22
05:30:58.421369] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
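(The same pattern repeats for each attempt from here on: the client mounts the volume, vdsm's createStorageDomain fails with EIO while writing metadata, and the mount is then torn down - the "received signum (15)" lines appear to be just the unmount's SIGTERM to the client process. The unmounts therefore look like consequences of the failure rather than causes.)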
<p class=""><span class="">[2015-09-22
05:31:09.534410] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.545686] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.553019] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class="">
</p>
<p class=""><span class="">[2015-09-22
05:31:09.555552] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.557989] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">Final
graph:</span></p>
<p class=""><span
class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class=""> 1: volume
vmstore-client-0</span></p>
<p class=""><span class=""> 2: type
protocol/client</span></p>
<p class=""><span class=""> 3: option
ping-timeout 42</span></p>
<p class=""><span class=""> 4: option
remote-host
sjcstorage01</span></p>
<p class=""><span class=""> 5: option
remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 6: option
transport-type
socket</span></p>
<p class=""><span class=""> 7: option
send-gids true</span></p>
<p class=""><span class=""> 8:
end-volume</span></p>
<p class=""><span class=""> 9:
</span></p>
<p class=""><span class=""> 10: volume
vmstore-client-1</span></p>
<p class=""><span class=""> 11: type
protocol/client</span></p>
<p class=""><span class=""> 12: option
ping-timeout 42</span></p>
<p class=""><span class=""> 13: option
remote-host
sjcstorage02</span></p>
<p class=""><span class=""> 14: option
remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 15: option
transport-type
socket</span></p>
<p class=""><span class=""> 16: option
send-gids true</span></p>
<p class=""><span class=""> 17:
end-volume</span></p>
<p class=""><span class=""> 18:
</span></p>
<p class=""><span class=""> 19: volume
vmstore-client-2</span></p>
<p class=""><span class=""> 20: type
protocol/client</span></p>
<p class=""><span class=""> 21: option
ping-timeout 42</span></p>
<p class=""><span class=""> 22: option
remote-host
sjcvhost02</span></p>
<p class=""><span class=""> 23: option
remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class=""> 24: option
transport-type
socket</span></p>
<p class=""><span class=""> 25: option
send-gids true</span></p>
<p class=""><span class=""> 26:
end-volume</span></p>
<p class=""><span class=""> 27:
</span></p>
<p class=""><span class=""> 28: volume
vmstore-replicate-0</span></p>
<p class=""><span class=""> 29: type
cluster/replicate</span></p>
<p class=""><span class=""> 30: option
arbiter-count 1</span></p>
<p class=""><span class=""> 31: subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class=""> 32:
end-volume</span></p>
<p class=""><span class=""> 33:
</span></p>
<p class=""><span class=""> 34: volume
vmstore-dht</span></p>
<p class=""><span class=""> 35: type
cluster/distribute</span></p>
<p class=""><span class=""> 36: subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class=""> 37:
end-volume</span></p>
<p class=""><span class=""> 38:
</span></p>
<p class=""><span class=""> 39: volume
vmstore-write-behind</span></p>
<p class=""><span class=""> 40: type
performance/write-behind</span></p>
<p class=""><span class=""> 41: subvolumes
vmstore-dht</span></p>
<p class=""><span class=""> 42:
end-volume</span></p>
<p class=""><span class=""> 43:
</span></p>
<p class=""><span class=""> 44: volume
vmstore-read-ahead</span></p>
<p class=""><span class=""> 45: type
performance/read-ahead</span></p>
<p class=""><span class=""> 46: subvolumes
vmstore-write-behind</span></p>
<p class=""><span class=""> 47:
end-volume</span></p>
<p class=""><span class=""> 48:
</span></p>
<p class=""><span class=""> 49: volume
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 50: type
performance/readdir-ahead</span></p>
<p class=""><span class=""> 51: subvolumes
vmstore-read-ahead</span></p>
<p class="">
</p>
<p class=""><span class=""> 52:
end-volume</span></p>
<p class=""><span class=""> 53:
</span></p>
<p class=""><span class=""> 54: volume
vmstore-io-cache</span></p>
<p class=""><span class=""> 55: type
performance/io-cache</span></p>
<p class=""><span class=""> 56: subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class=""> 57:
end-volume</span></p>
<p class=""><span class=""> 58:
</span></p>
<p class=""><span class=""> 59: volume
vmstore-quick-read</span></p>
<p class=""><span class=""> 60: type
performance/quick-read</span></p>
<p class=""><span class=""> 61: subvolumes
vmstore-io-cache</span></p>
<p class=""><span class=""> 62:
end-volume</span></p>
<p class=""><span class=""> 63:
</span></p>
<p class=""><span class=""> 64: volume
vmstore-open-behind</span></p>
<p class=""><span class=""> 65: type
performance/open-behind</span></p>
<p class=""><span class=""> 66: subvolumes
vmstore-quick-read</span></p>
<p class=""><span class=""> 67:
end-volume</span></p>
<p class=""><span class=""> 68:
</span></p>
<p class=""><span class=""> 69: volume
vmstore-md-cache</span></p>
<p class=""><span class=""> 70: type
performance/md-cache</span></p>
<p class=""><span class=""> 71: subvolumes
vmstore-open-behind</span></p>
<p class=""><span class=""> 72:
end-volume</span></p>
<p class=""><span class=""> 73:
</span></p>
<p class=""><span class=""> 74: volume
vmstore</span></p>
<p class=""><span class=""> 75: type
debug/io-stats</span></p>
<p class=""><span class=""> 76: option
latency-measurement
off</span></p>
<p class=""><span class=""> 77: option
count-fop-hits off</span></p>
<p class=""><span class=""> 78: subvolumes
vmstore-md-cache</span></p>
<p class=""><span class=""> 79:
end-volume</span></p>
<p class=""><span class=""> 80:
</span></p>
<p class=""><span class=""> 81: volume
meta-autoload</span></p>
<p class=""><span class=""> 82: type
meta</span></p>
<p class=""><span class=""> 83: subvolumes
vmstore</span></p>
<p class=""><span class=""> 84:
end-volume</span></p>
<p class=""><span class=""> 85:
</span></p>
<p class=""><span
class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.563262] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.563431] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.563877] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.572443] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.572599] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.572742] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.573165] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume
'/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.573186] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.573395] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-1' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.573427] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume
'/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.573435] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.573754] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume
'/export/vmstore/brick01'.</span></p>
<p class="">
</p>
<p class=""><span class="">[2015-09-22
05:31:09.573783] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopen:</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.577192] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.577302] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.577325] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.577339] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.578125] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel
7.22</span></p>
<p class=""><span class="">[2015-09-22
05:31:09.578636] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22
05:31:10.073698] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22
05:31:10.073977] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f6b9d0f2785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f6b9d0f2609] ) 0-: received signum (15), shutting
down</span></p>
<p class=""><span class="">[2015-09-22
05:31:10.073993] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22
05:31:20.184700] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22
05:31:20.194928] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22
05:31:20.200701] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22
05:31:20.203110] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22
05:31:20.205708] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">Final
graph:</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">Hope this
helps. </span></p>
<p class=""><span class=""><br>
</span></p>
<p class="">thanks again</p>
<p class=""><br>
</p>
<p class="">Brett Stevens</p>
<p class=""><br>
</p>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Sep 22, 2015 at 10:14 PM,
Sahina Bose <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:sabose@redhat.com"
target="_blank">sabose(a)redhat.com</a>&gt;</span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"><span
class=""> <br>
<br>
<div>On 09/22/2015 02:17 PM, Brett Stevens wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi. First time on the lists. I've
searched for this but no luck so sorry if this has
been covered before.
<div><br>
</div>
<div>Im working with the latest 3.6 beta with the
following infrastructure. </div>
<div><br>
</div>
<div>1 management host (to be used for a number of
tasks so chose not to use self hosted, we are a
school and will need to keep an eye on hardware
costs)</div>
<div>2 compute nodes</div>
<div>2 gluster nodes</div>
<div><br>
</div>
<div>so far built one gluster volume using the
gluster cli to give me 2 nodes and one arbiter
node (management host)</div>
<div><br>
</div>
<div>so far, every time I create a volume, it shows
up strait away on the ovirt gui. however no matter
what I try, I cannot create or import it as a data
domain. </div>
<div><br>
</div>
<div>the current error in the ovirt gui is "Error
while executing action AddGlusterFsStorageDomain:
Error creating a storage domain's metadata"</div>
</div>
</blockquote>
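
A two-data-brick volume with one arbiter, as described above, is typically created along these lines. A rough sketch only, using the brick paths that appear in the logs rather than the exact commands used here; the storage.owner settings are a common requirement for oVirt data domains (vdsm runs as uid/gid 36) and are worth checking when metadata creation fails, though nothing in this thread confirms them as the cause:

  # replica 3 with one arbiter brick (arbiter volumes need gluster >= 3.7)
  gluster volume create vmstore replica 3 arbiter 1 \
      sjcstorage01:/export/vmstore/brick01 \
      sjcstorage02:/export/vmstore/brick01 \
      sjcvhost02:/export/vmstore/brick01
  gluster volume start vmstore

  # let vdsm (uid/gid 36) write to the volume
  gluster volume set vmstore storage.owner-uid 36
  gluster volume set vmstore storage.owner-gid 36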
Please provide vdsm and gluster logs.
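
For reference, the logs being requested normally live in the standard locations below; a sketch assuming default paths, not confirmed for this particular setup:

  # on the hypervisor that attempted the storage domain create
  less /var/log/vdsm/vdsm.log

  # on each gluster node: glusterd, brick, and fuse mount logs
  ls /var/log/glusterfs/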
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>logs, continuously rolling the following errors
around</div>
<div>
<p><span>Scheduler_Worker-53) [] START,
GlusterVolumesListVDSCommand(HostName =
sjcstorage02,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
log id: 24198fbf</span></p>
<p><span>2015-09-22 03:57:29,903 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not associate brick
'sjcstorage01:/export/vmstore/brick01' of
volume '878a316d-2394-4aae-bdf8-e10eea38225e'
with correct network as no gluster network
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
</div>
</div>
</blockquote>
What is the hostname provided in oVirt engine for sjcstorage01? Does this host have multiple NICs?

Could you provide the output of gluster volume info?

Please note that these errors are not related to the error in creating the storage domain. However, they could prevent you from monitoring the state of the gluster volume from oVirt.
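
The requested output, plus the peer view that the "server uuid not found in cluster" warning depends on, can be gathered with standard gluster commands; a sketch using the volume name from the logs:

  # volume layout, brick list, and options
  gluster volume info vmstore
  gluster volume status vmstore

  # peer UUIDs; the engine warning means a brick's server uuid
  # did not match any host known to the oVirt cluster
  gluster peer status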
<blockquote type="cite"><span class="">
<div dir="ltr">
<div>
<p><span>2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not associate brick
'sjcstorage02:/export/vmstore/brick01' of
volume '878a316d-2394-4aae-bdf8-e10eea38225e'
with correct network as no gluster network
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'878a316d-2394-4aae-bdf8-e10eea38225e' -
server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22 03:57:29,905 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-53) [] FINISH,
GlusterVolumesListVDSCommand, return:
{878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
log id: 24198fbf</span></p>
<p><span><br>
</span></p>
<p><span>I'm new to ovirt and gluster, so any help
would be great</span></p>
<p><span><br>
</span></p>
<p><span>thanks</span></p>
<p><span><br>
</span></p>
<p><span>Brett Stevens</span></p>
</div>
</div>
--------------090502070204020205010802--