Hi Brett,
Can you truncate the gluster brick and mount logs on all three nodes,
try creating the storage domain again and then share these logs along
with the VDSM logs?
i.e., on all 3 nodes:
1. echo > /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
2. echo > export-vmstore-brick01.log
3. Create the storage domain (at which point VDSM supposedly fails with the truncate error).
4. Share the logs.
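Putting steps 1 and 2 together, a minimal sketch (run as root on each
of the 3 nodes; I'm assuming the default gluster layout where the brick
log sits under /var/log/glusterfs/bricks/ - adjust if yours differs):

  # empty the mount log (client side) and the brick log (server side);
  # "echo >" truncates in place without disturbing gluster's open fd
  echo > /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
  echo > /var/log/glusterfs/bricks/export-vmstore-brick01.log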
Also, what timezone are you in? That would be needed to correlate the
timestamps in the vdsm log (local time) and the gluster log (UTC).
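For example, capturing the following from any one node would be enough
to work out the offset:

  date; date -u   # local time vs UTC, side by side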
Thanks!
Ravi
-------- Forwarded Message --------
Subject: Re: [ovirt-users] adding gluster domains
Date: Tue, 29 Sep 2015 08:38:49 +1000
From: Brett Stevens <gorttman(a)i3sec.com.au>
Reply-To: brett(a)i3sec.com.au
To: Sahina Bose <sabose(a)redhat.com>
Sorry about the delay; I've run the truncate. I'm not sure what
results you were expecting, but it executed fine: no delays, no errors,
no problems.
thanks
Brett Stevens
On Thu, Sep 24, 2015 at 7:29 PM, Brett Stevens <gorttman(a)i3sec.com.au> wrote:
Thanks, I'll do that tomorrow morning.
Just out of interest, I keep getting WARN errors in the engine.log
along the lines of "node not present" (sjcvhost02, which is the
arbiter) and "no gluster network present", even after I have added
the gluster network option in the network management GUI.
thanks
Brett Stevens
On Thu, Sep 24, 2015 at 7:26 PM, Sahina Bose <sabose(a)redhat.com>
wrote:
Sorry, I intended to forward it to a gluster devel.
Btw, there were no errors in the mount log - so I'm unable to root-cause
why the truncate of the file failed with an IO error. Was the log from
vhost03 -
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
?
We will look into the logs you attached to see if there are any
errors reported at the bricks. (But there should have been some error
in the mount log!)
Could you also try "truncate -s 10M test" from the mount point
(manually mount gluster using: # mount -t glusterfs
sjcstorage01:/vmstore <mountpoint>) and report the results.
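For example (a sketch using /mnt/glustertest as a stand-in mount point
- any empty directory will do):

  mkdir -p /mnt/glustertest
  mount -t glusterfs sjcstorage01:/vmstore /mnt/glustertest
  truncate -s 10M /mnt/glustertest/test
  echo $?    # 0 = write path OK; an EIO here would match the VDSM error
  umount /mnt/glustertest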
On 09/24/2015 02:32 PM, Brett Stevens wrote:
> Hi Sahina.
>
> Something has gone wrong with your last email. I have
> received a message from you, but did not get any text to go
> with it. Could you resend it, please?
>
> thanks
>
>
> On Thu, Sep 24, 2015 at 6:48 PM, Sahina Bose
> <sabose(a)redhat.com> wrote:
>
>
>
> On 09/24/2015 04:21 AM, Brett Stevens wrote:
>> Hi Sahina.
>>
>> vhost02 is the engine node, vhost03 is the hypervisor, and
>> storage01 and 02 are the gluster nodes. I've put the arbiter
>> on vhost02.
>>
>> all tasks are separated (except engine and arbiter)
>>
>> thanks
>>
>>
>> On Wed, Sep 23, 2015 at 9:48 PM, Sahina Bose
>> <sabose(a)redhat.com> wrote:
>>
>> + ovirt-users
>>
>> Some clarity on your setup -
>> sjcvhost03 - is this your arbiter node and ovirt
>> management node? And are you running compute +
>> storage on the same nodes - i.e., sjcstorage01,
>> sjcstorage02, sjcvhost03 (arbiter)?
>>
>>
>> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
>> CreateStorageDomainVDSCommandParameters:{runAsync='true',
>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>> storageDomain='StorageDomainStatic:{name='sjcvmstore',
>> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
>> args='sjcstorage01:/vmstore'}), log id: b9fe587
>>
>> - fails with: Error creating a storage domain's metadata:
>> ("create meta file 'outbox' failed: [Errno 5] Input/output error",)
>>
>> Are the vdsm logs you provided from sjcvhost03?
>> There are no errors to be seen in the gluster log
>> you provided. Could you provide the mount log from
>> sjcvhost03 (at
>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log
>> most likely)?
>> If possible, also /var/log/glusterfs/* from the 3
>> storage nodes.
>>
>> thanks
>> sahina
>>
>> On 09/23/2015 05:02 AM, Brett Stevens wrote:
>>> Hi Sahina,
>>>
>>> as requested here is some logs taken during a
>>> domain create.
>>>
>>> 2015-09-22 18:46:44,320 INFO
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>> (DefaultQuartzScheduler_Worker-88) [] START,
>>> GlusterVolumesListVDSCommand(HostName =
>>> sjcstorage01,
>>> GlusterVolumesListVDSParameters:{runAsync='true',
>>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
>>> log id: 2205ff1
>>>
>>> 2015-09-22 18:46:44,413 WARN
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>> (DefaultQuartzScheduler_Worker-88) [] Could not
>>> associate brick
>>> 'sjcstorage01:/export/vmstore/brick01' of volume
>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>> network as no gluster network found in cluster
>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>
>>> 2015-09-22 18:46:44,417 WARN
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>> (DefaultQuartzScheduler_Worker-88) [] Could not
>>> associate brick
>>> 'sjcstorage02:/export/vmstore/brick01' of volume
>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>> network as no gluster network found in cluster
>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>
>>> 2015-09-22 18:46:44,417 WARN
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>> (DefaultQuartzScheduler_Worker-88) [] Could not add
>>> brick 'sjcvhost02:/export/vmstore/brick01' to
>>> volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' -
>>> server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba'
>>> not found in cluster
>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>
>>> 2015-09-22 18:46:44,418 INFO
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>> (DefaultQuartzScheduler_Worker-88) [] FINISH,
>>> GlusterVolumesListVDSCommand, return:
>>>
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
>>> log id: 2205ff1
>>>
>>> 2015-09-22 18:46:45,215 INFO
>>>
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>>> (default task-24) [5099cda3] Lock Acquired to
>>> object
>>>
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>> sharedLocks='null'}'
>>>
>>> 2015-09-22 18:46:45,230 INFO
>>>
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>>> (default task-24) [5099cda3] Running command:
>>> AddStorageServerConnectionCommand internal: false.
>>> Entities affected : ID:
>>> aaa00000-0000-0000-0000-123456789aaa Type:
>>> SystemAction group CREATE_STORAGE_DOMAIN with role
>>> type ADMIN
>>>
>>> 2015-09-22 18:46:45,233 INFO
>>>
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>> (default task-24) [5099cda3] START,
>>> ConnectStorageServerVDSCommand(HostName =
>>> sjcvhost03,
>>>
StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>
storagePoolId='00000000-0000-0000-0000-000000000000',
>>> storageType='GLUSTERFS',
>>>
connectionList='[StorageServerConnections:{id='null',
>>> connection='sjcstorage01:/vmstore',
iqn='null',
>>> vfsType='glusterfs', mountOptions='null',
>>> nfsVersion='null', nfsRetrans='null',
>>> nfsTimeo='null', iface='null',
>>> netIfaceName='null'}]'}), log id: 6a112292
>>>
>>> 2015-09-22 18:46:48,065 INFO
>>>
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>> (default task-24) [5099cda3] FINISH,
>>> ConnectStorageServerVDSCommand, return:
>>> {00000000-0000-0000-0000-000000000000=0}, log id:
>>> 6a112292
>>>
>>> 2015-09-22 18:46:48,073 INFO
>>>
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>>> (default task-24) [5099cda3] Lock freed to object
>>>
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>> sharedLocks='null'}'
>>>
>>> 2015-09-22 18:46:48,188 INFO
>>>
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>> (default task-23) [6410419] Running command:
>>> AddGlusterFsStorageDomainCommand internal: false.
>>> Entities affected : ID:
>>> aaa00000-0000-0000-0000-123456789aaa Type:
>>> SystemAction group CREATE_STORAGE_DOMAIN with role
>>> type ADMIN
>>>
>>> 2015-09-22 18:46:48,206 INFO
>>>
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>> (default task-23) [6410419] START,
>>> ConnectStorageServerVDSCommand(HostName =
>>> sjcvhost03,
>>>
StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>
storagePoolId='00000000-0000-0000-0000-000000000000',
>>> storageType='GLUSTERFS',
>>>
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
>>> connection='sjcstorage01:/vmstore',
iqn='null',
>>> vfsType='glusterfs', mountOptions='null',
>>> nfsVersion='null', nfsRetrans='null',
>>> nfsTimeo='null', iface='null',
>>> netIfaceName='null'}]'}), log id: 38a2b0d
>>>
>>> 2015-09-22 18:46:48,219 INFO
>>>
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>> (default task-23) [6410419] FINISH,
>>> ConnectStorageServerVDSCommand, return:
>>> {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id:
>>> 38a2b0d
>>>
>>> 2015-09-22 18:46:48,221 INFO
>>>
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>> (default task-23) [6410419] START,
>>> CreateStorageDomainVDSCommand(HostName =
>>> sjcvhost03,
>>>
CreateStorageDomainVDSCommandParameters:{runAsync='true',
>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>
storageDomain='StorageDomainStatic:{name='sjcvmstore',
>>> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
>>> args='sjcstorage01:/vmstore'}), log id: b9fe587
>>>
>>> 2015-09-22 18:46:48,744 ERROR
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (default task-23) [6410419] Correlation ID: null,
>>> Call Stack: null, Custom Event ID: -1, Message:
>>> VDSM sjcvhost03 command failed: Error creating a
>>> storage domain's metadata: ("create meta file
>>> 'outbox' failed: [Errno 5] Input/output error",)
>>>
>>> 2015-09-22 18:46:48,744 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>> (default task-23) [6410419] Command
>>> 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
>>> return value 'StatusOnlyReturnForXmlRpc
>>> [status=StatusForXmlRpc [code=362, message=Error
>>> creating a storage domain's metadata: ("create meta
>>> file 'outbox' failed: [Errno 5] Input/output
>>> error",)]]'
>>>
>>> 2015-09-22 18:46:48,744 INFO
>>>
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>> (default task-23) [6410419] HostName = sjcvhost03
>>>
>>> 2015-09-22 18:46:48,745 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>> (default task-23) [6410419] Command
>>> 'CreateStorageDomainVDSCommand(HostName = sjcvhost03,
>>> CreateStorageDomainVDSCommandParameters:{runAsync='true',
>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>> storageDomain='StorageDomainStatic:{name='sjcvmstore',
>>> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
>>> args='sjcstorage01:/vmstore'})' execution failed:
>>> VDSGenericException: VDSErrorException: Failed in
>>> vdscommand to CreateStorageDomainVDS, error = Error
>>> creating a storage domain's metadata: ("create meta
>>> file 'outbox' failed: [Errno 5] Input/output error",)
>>>
>>> 2015-09-22 18:46:48,745 INFO
>>>
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>> (default task-23) [6410419] FINISH,
>>> CreateStorageDomainVDSCommand, log id: b9fe587
>>>
>>> 2015-09-22 18:46:48,745 ERROR
>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>> (default task-23) [6410419] Command
>>> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
>>> failed: EngineException:
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>>> VDSGenericException: VDSErrorException: Failed in
>>> vdscommand to CreateStorageDomainVDS, error = Error
>>> creating a storage domain's metadata: ("create meta
>>> file 'outbox' failed: [Errno 5] Input/output error",)
>>> (Failed with error StorageDomainMetadataCreationError
>>> and code 362)
>>>
>>> 2015-09-22 18:46:48,755 INFO
>>>
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>> (default task-23) [6410419] Command
>>> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]:
>>> Compensating NEW_ENTITY_ID of
>>>
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
>>> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>>>
>>> 2015-09-22 18:46:48,758 INFO
>>>
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>> (default task-23) [6410419] Command
>>> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]:
>>> Compensating NEW_ENTITY_ID of
>>>
org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
>>> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>>>
>>> 2015-09-22 18:46:48,769 ERROR
>>>
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>> (default task-23) [6410419] Transaction rolled-back
>>> for command
>>>
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.
>>>
>>> 2015-09-22 18:46:48,784 ERROR
>>>
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (default task-23) [6410419] Correlation ID:
>>> 6410419, Job ID:
>>> 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack:
>>> null, Custom Event ID: -1, Message: Failed to add
>>> Storage Domain sjcvmstore. (User: admin@internal)
>>>
>>> 2015-09-22 18:46:48,996 INFO
>>>
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>> (default task-32) [1635a244] Lock Acquired to
>>> object
>>>
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
>>> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>> sharedLocks='null'}'
>>>
>>> 2015-09-22 18:46:49,018 INFO
>>>
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>> (default task-32) [1635a244] Running command:
>>> RemoveStorageServerConnectionCommand internal:
>>> false. Entities affected : ID:
>>> aaa00000-0000-0000-0000-123456789aaa Type:
>>> SystemAction group CREATE_STORAGE_DOMAIN with role
>>> type ADMIN
>>>
>>> 2015-09-22 18:46:49,024 INFO
>>>
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>> (default task-32) [1635a244] Removing connection
>>> 'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database
>>>
>>> 2015-09-22 18:46:49,026 INFO
>>>
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
>>> (default task-32) [1635a244] START,
>>> DisconnectStorageServerVDSCommand(HostName =
>>> sjcvhost03,
>>>
StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>
storagePoolId='00000000-0000-0000-0000-000000000000',
>>> storageType='GLUSTERFS',
>>>
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
>>> connection='sjcstorage01:/vmstore',
iqn='null',
>>> vfsType='glusterfs', mountOptions='null',
>>> nfsVersion='null', nfsRetrans='null',
>>> nfsTimeo='null', iface='null',
>>> netIfaceName='null'}]'}), log id: 39d3b568
>>>
>>> 2015-09-22 18:46:49,248 INFO
>>>
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
>>> (default task-32) [1635a244] FINISH,
>>> DisconnectStorageServerVDSCommand, return:
>>> {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id:
>>> 39d3b568
>>>
>>> 2015-09-22 18:46:49,252 INFO
>>>
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>> (default task-32) [1635a244] Lock freed to object
>>>
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
>>> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>> sharedLocks='null'}'
>>>
>>> 2015-09-22 18:46:49,431 INFO
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>> (DefaultQuartzScheduler_Worker-3) [] START,
>>> GlusterVolumesListVDSCommand(HostName =
>>> sjcstorage01,
>>> GlusterVolumesListVDSParameters:{runAsync='true',
>>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
>>> log id: 17014ae8
>>>
>>> 2015-09-22 18:46:49,511 WARN
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>> (DefaultQuartzScheduler_Worker-3) [] Could not
>>> associate brick
>>> 'sjcstorage01:/export/vmstore/brick01' of volume
>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>> network as no gluster network found in cluster
>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>
>>> 2015-09-22 18:46:49,515 WARN
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>> (DefaultQuartzScheduler_Worker-3) [] Could not
>>> associate brick
>>> 'sjcstorage02:/export/vmstore/brick01' of volume
>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>> network as no gluster network found in cluster
>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>
>>> 2015-09-22 18:46:49,516 WARN
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>> (DefaultQuartzScheduler_Worker-3) [] Could not add
>>> brick 'sjcvhost02:/export/vmstore/brick01' to
>>> volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' -
>>> server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba'
>>> not found in cluster
>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>
>>> 2015-09-22 18:46:49,516 INFO
>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>> (DefaultQuartzScheduler_Worker-3) [] FINISH,
>>> GlusterVolumesListVDSCommand, return:
>>>
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
>>> log id: 17014ae8
>>>
>>>
>>>
>>> ovirt engine thinks that sjcstorage01 is
>>> sjcstorage01; it's all testbed at the moment and is
>>> all short names, defined in /etc/hosts (all copied
>>> to each server for consistency)
>>>
>>>
>>> volume status for vmstore is
>>>
>>>
>>> Status of volume: vmstore
>>>
>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick sjcstorage01:/export/vmstore/brick01  49157     0          Y       7444
>>> Brick sjcstorage02:/export/vmstore/brick01  49157     0          Y       4063
>>> Brick sjcvhost02:/export/vmstore/brick01    49156     0          Y       3243
>>> NFS Server on localhost                     2049      0          Y       3268
>>> Self-heal Daemon on localhost               N/A       N/A        Y       3284
>>> NFS Server on sjcstorage01                  2049      0          Y       7463
>>> Self-heal Daemon on sjcstorage01            N/A       N/A        Y       7472
>>> NFS Server on sjcstorage02                  2049      0          Y       4082
>>> Self-heal Daemon on sjcstorage02            N/A       N/A        Y       4090
>>>
>>> Task Status of Volume vmstore
>>> ------------------------------------------------------------------------------
>>> There are no active volume tasks
>>>
>>>
>>>
>>> vdsm logs from the time the domain is added
>>>
>>>
>>> Thread-789::DEBUG::2015-09-22
>>>
19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Thread-790::DEBUG::2015-09-22
>>>
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving
>>> from state init -> state preparing
>>>
>>> Thread-790::INFO::2015-09-22
>>> 19:12:07,797::logUtils::48::dispatcher::(wrapper)
>>> Run and protect: repoStats(options=None)
>>>
>>> Thread-790::INFO::2015-09-22
>>> 19:12:07,797::logUtils::51::dispatcher::(wrapper)
>>> Run and protect: repoStats, Return response: {}
>>>
>>> Thread-790::DEBUG::2015-09-22
>>>
19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished:
>>> {}
>>>
>>> Thread-790::DEBUG::2015-09-22
>>>
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving
>>> from state preparing -> state finished
>>>
>>> Thread-790::DEBUG::2015-09-22
>>>
19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-790::DEBUG::2015-09-22
>>>
19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-790::DEBUG::2015-09-22
>>> 19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0
>>> aborting False
>>>
>>> Thread-790::DEBUG::2015-09-22
>>>
19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>> Accepting connection from 127.0.0.1:52510
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>> Using required_size=11
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>> Detected protocol xml from 127.0.0.1:52510
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>> xml over http detected from ('127.0.0.1', 52510)
>>>
>>> BindingXMLRPC::INFO::2015-09-22
>>> 19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>> Starting request handler for 127.0.0.1:52510
>>>
>>> Thread-791::INFO::2015-09-22
>>>
19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52510 started
>>>
>>> Thread-791::INFO::2015-09-22
>>>
19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52510 stopped
>>>
>>> Thread-792::DEBUG::2015-09-22
>>>
19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Thread-793::DEBUG::2015-09-22
>>>
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving
>>> from state init -> state preparing
>>>
>>> Thread-793::INFO::2015-09-22
>>> 19:12:22,832::logUtils::48::dispatcher::(wrapper)
>>> Run and protect: repoStats(options=None)
>>>
>>> Thread-793::INFO::2015-09-22
>>> 19:12:22,832::logUtils::51::dispatcher::(wrapper)
>>> Run and protect: repoStats, Return response: {}
>>>
>>> Thread-793::DEBUG::2015-09-22
>>>
19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished:
>>> {}
>>>
>>> Thread-793::DEBUG::2015-09-22
>>>
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving
>>> from state preparing -> state finished
>>>
>>> Thread-793::DEBUG::2015-09-22
>>>
19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-793::DEBUG::2015-09-22
>>>
19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-793::DEBUG::2015-09-22
>>> 19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0
>>> aborting False
>>>
>>> Thread-793::DEBUG::2015-09-22
>>>
19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>> Accepting connection from 127.0.0.1:52511
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>> Using required_size=11
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>> Detected protocol xml from 127.0.0.1:52511
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>> xml over http detected from ('127.0.0.1', 52511)
>>>
>>> BindingXMLRPC::INFO::2015-09-22
>>> 19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>> Starting request handler for 127.0.0.1:52511
>>>
>>> Thread-794::INFO::2015-09-22
>>>
19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52511 started
>>>
>>> Thread-794::INFO::2015-09-22
>>>
19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52511 stopped
>>>
>>> Thread-795::DEBUG::2015-09-22
>>>
19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>> Calling 'StoragePool.connectStorageServer' in
>>> bridge with {u'connectionParams': [{u'id':
>>> u'00000000-0000-0000-0000-000000000000',
>>> u'connection': u'sjcstorage01:/vmstore',
u'iqn':
>>> u'', u'user': u'', u'tpgt':
u'1', u'vfs_type':
>>> u'glusterfs', u'password':
'********', u'port':
>>> u''}], u'storagepoolID':
>>> u'00000000-0000-0000-0000-000000000000',
>>> u'domainType': 7}
>>>
>>> Thread-795::DEBUG::2015-09-22
>>>
19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving
>>> from state init -> state preparing
>>>
>>> Thread-795::INFO::2015-09-22
>>> 19:12:35,521::logUtils::48::dispatcher::(wrapper)
>>> Run and protect: connectStorageServer(domType=7,
>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>> conList=[{u'id':
>>> u'00000000-0000-0000-0000-000000000000',
>>> u'connection': u'sjcstorage01:/vmstore',
u'iqn':
>>> u'', u'user': u'', u'tpgt':
u'1', u'vfs_type':
>>> u'glusterfs', u'password':
'********', u'port':
>>> u''}], options=None)
>>>
>>> Thread-795::DEBUG::2015-09-22
>>> 19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir)
>>> Creating directory:
>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
>>> mode: None
>>>
>>> Thread-795::DEBUG::2015-09-22
>>> 19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd)
>>> /usr/bin/sudo -n /usr/bin/systemd-run --scope
>>> --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs
>>> -o backup-volfile-servers=sjcstorage02:sjcvhost02
>>> sjcstorage01:/vmstore
>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
>>> (cwd None)
>>>
>>> Thread-795::DEBUG::2015-09-22
>>> 19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
>>> glusterDomPath: glusterSD/*
>>>
>>> Thread-796::DEBUG::2015-09-22
>>> 19:12:35,707::__init__::298::IOProcessClient::(_run)
Starting
>>> IOProcess...
>>>
>>> Thread-797::DEBUG::2015-09-22
>>> 19:12:35,712::__init__::298::IOProcessClient::(_run)
Starting
>>> IOProcess...
>>>
>>> Thread-795::DEBUG::2015-09-22
>>> 19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains)
>>> Found SD uuids: ()
>>>
>>> Thread-795::DEBUG::2015-09-22
>>> 19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer)
>>> knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
>>> storage.glusterSD.findDomain,
>>> 597d5b5b-7c09-4de9-8840-6993bd9b61a6:
>>> storage.glusterSD.findDomain,
>>> ef17fec4-fecf-4d7e-b815-d1db4ef65225:
>>> storage.glusterSD.findDomain}
>>>
>>> Thread-795::INFO::2015-09-22
>>> 19:12:35,721::logUtils::51::dispatcher::(wrapper)
>>> Run and protect: connectStorageServer, Return
>>> response: {'statuslist': [{'status': 0,
'id':
>>> u'00000000-0000-0000-0000-000000000000'}]}
>>>
>>> Thread-795::DEBUG::2015-09-22
>>>
19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished:
>>> {'statuslist': [{'status': 0, 'id':
>>> u'00000000-0000-0000-0000-000000000000'}]}
>>>
>>> Thread-795::DEBUG::2015-09-22
>>>
19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving
>>> from state preparing -> state finished
>>>
>>> Thread-795::DEBUG::2015-09-22
>>>
19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-795::DEBUG::2015-09-22
>>>
19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-795::DEBUG::2015-09-22
>>> 19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0
>>> aborting False
>>>
>>> Thread-795::DEBUG::2015-09-22
>>>
19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
>>> Return 'StoragePool.connectStorageServer' in bridge
>>> with [{'status': 0, 'id':
>>> u'00000000-0000-0000-0000-000000000000'}]
>>>
>>> Thread-795::DEBUG::2015-09-22
>>>
19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Thread-798::DEBUG::2015-09-22
>>>
19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>> Calling 'StoragePool.connectStorageServer' in
>>> bridge with {u'connectionParams': [{u'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>> u'connection': u'sjcstorage01:/vmstore',
u'iqn':
>>> u'', u'user': u'', u'tpgt':
u'1', u'vfs_type':
>>> u'glusterfs', u'password':
'********', u'port':
>>> u''}], u'storagepoolID':
>>> u'00000000-0000-0000-0000-000000000000',
>>> u'domainType': 7}
>>>
>>> Thread-798::DEBUG::2015-09-22
>>>
19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving
>>> from state init -> state preparing
>>>
>>> Thread-798::INFO::2015-09-22
>>> 19:12:35,776::logUtils::48::dispatcher::(wrapper)
>>> Run and protect: connectStorageServer(domType=7,
>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>> conList=[{u'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>> u'connection': u'sjcstorage01:/vmstore',
u'iqn':
>>> u'', u'user': u'', u'tpgt':
u'1', u'vfs_type':
>>> u'glusterfs', u'password':
'********', u'port':
>>> u''}], options=None)
>>>
>>> Thread-798::DEBUG::2015-09-22
>>> 19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
>>> glusterDomPath: glusterSD/*
>>>
>>> Thread-798::DEBUG::2015-09-22
>>> 19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains)
>>> Found SD uuids: ()
>>>
>>> Thread-798::DEBUG::2015-09-22
>>> 19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer)
>>> knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
>>> storage.glusterSD.findDomain,
>>> 597d5b5b-7c09-4de9-8840-6993bd9b61a6:
>>> storage.glusterSD.findDomain,
>>> ef17fec4-fecf-4d7e-b815-d1db4ef65225:
>>> storage.glusterSD.findDomain}
>>>
>>> Thread-798::INFO::2015-09-22
>>> 19:12:35,782::logUtils::51::dispatcher::(wrapper)
>>> Run and protect: connectStorageServer, Return
>>> response: {'statuslist': [{'status': 0,
'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>
>>> Thread-798::DEBUG::2015-09-22
>>>
19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished:
>>> {'statuslist': [{'status': 0, 'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>
>>> Thread-798::DEBUG::2015-09-22
>>>
19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving
>>> from state preparing -> state finished
>>>
>>> Thread-798::DEBUG::2015-09-22
>>>
19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-798::DEBUG::2015-09-22
>>>
19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-798::DEBUG::2015-09-22
>>> 19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0
>>> aborting False
>>>
>>> Thread-798::DEBUG::2015-09-22
>>>
19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
>>> Return 'StoragePool.connectStorageServer' in bridge
>>> with [{'status': 0, 'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>>>
>>> Thread-798::DEBUG::2015-09-22
>>>
19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>> Calling 'StorageDomain.create' in bridge with
>>> {u'name': u'sjcvmstore01',
u'domainType': 7,
>>> u'domainClass': 1, u'typeArgs':
>>> u'sjcstorage01:/vmstore', u'version':
u'3',
>>> u'storagedomainID':
>>> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
>>> from state init -> state preparing
>>>
>>> Thread-801::INFO::2015-09-22
>>> 19:12:35,788::logUtils::48::dispatcher::(wrapper)
>>> Run and protect: createStorageDomain(storageType=7,
>>> sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
>>> domainName=u'sjcvmstore01',
>>> typeSpecificArg=u'sjcstorage01:/vmstore',
>>> domClass=1, domVersion=u'3', options=None)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
>>> Trying to enter sampling method
>>> (storage.sdc.refreshStorage)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
>>> Got in to sampling method
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
>>> Trying to enter sampling method (storage.iscsi.rescan)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
>>> Got in to sampling method
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,788::iscsi::431::Storage.ISCSI::(rescan)
>>> Performing SCSI scan, this will take up to 30 seconds
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
>>> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd
>>> None)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,821::misc::743::Storage.SamplingMethod::(__call__)
>>> Returning last result
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,821::misc::733::Storage.SamplingMethod::(__call__)
>>> Trying to enter sampling method (storage.hba.rescan)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,821::misc::736::Storage.SamplingMethod::(__call__)
>>> Got in to sampling method
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,821::hba::56::Storage.HBA::(rescan)
>>> Starting scan
>>>
>>> Thread-802::DEBUG::2015-09-22
>>>
19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,912::hba::62::Storage.HBA::(rescan) Scan
>>> finished
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,912::misc::743::Storage.SamplingMethod::(__call__)
>>> Returning last result
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
>>> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan)
>>> SUCCESS: <err> = ''; <rc> = 0
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,936::utils::661::root::(execCmd)
>>> /sbin/udevadm settle --timeout=5 (cwd None)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,946::utils::679::root::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' got the
>>> operation mutex
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' released the
>>> operation mutex
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' got the
>>> operation mutex
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' released the
>>> operation mutex
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' got the
>>> operation mutex
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' released the
>>> operation mutex
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,948::misc::743::Storage.SamplingMethod::(__call__)
>>> Returning last result
>>>
>>> Thread-801::ERROR::2015-09-22
>>>
19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
>>> looking for unfetched domain
>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>>>
>>> Thread-801::ERROR::2015-09-22
>>>
19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
>>> looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs)
>>> Operation 'lvm reload operation' got the operation
>>> mutex
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd)
>>> /usr/bin/sudo -n /usr/sbin/lvm vgs --config '
>>> devices { preferred_names = ["^/dev/mapper/"]
>>> ignore_suspended_devices=1 write_cache_state=0
>>> disable_after_error_count=3
>>> obtain_device_list_from_udev=0 filter = [
>>> '\''r|.*|'\'' ] } global {
locking_type=1
>>> prioritise_write_locks=1 wait_for_locks=1
>>> use_lvmetad=0 } backup { retain_min = 50
>>> retain_days = 0 } ' --noheadings --units b
>>> --nosuffix --separator '|' --ignoreskippedcluster
>>> -o
>>>
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd)
>>> FAILED: <err> = ' WARNING: lvmetad is running but
>>> disabled. Restart lvmetad before enabling it!\n
>>> Volume group
>>> "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n
>>> Cannot process volume group
>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
>>>
>>> Thread-801::WARNING::2015-09-22
>>> 19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs)
>>> lvm vgs failed: 5 [] ['  WARNING: lvmetad is running but disabled.
>>> Restart lvmetad before enabling it!', '  Volume group
>>> "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found',
>>> '  Cannot process volume group
>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs)
>>> Operation 'lvm reload operation' released the
>>> operation mutex
>>>
>>> Thread-801::ERROR::2015-09-22
>>>
19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
>>> domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
>>>
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
>>>     dom = findMethod(sdUUID)
>>>   File "/usr/share/vdsm/storage/sdc.py", line 172, in _findUnfetchedDomain
>>>     raise se.StorageDomainDoesNotExist(sdUUID)
>>>
>>> StorageDomainDoesNotExist: Storage domain does not
>>> exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
>>>
>>> Thread-801::INFO::2015-09-22
>>> 19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
>>> sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>>> domainName=sjcvmstore01
>>> remotePath=sjcstorage01:/vmstore domClass=1
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:36,015::__init__::298::IOProcessClient::(_run)
Starting
>>> IOProcess...
>>>
>>> Thread-801::ERROR::2015-09-22
>>> 19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected error
>>>
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>>>     return fn(*args, **kargs)
>>>   File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>>>     res = f(*args, **kwargs)
>>>   File "/usr/share/vdsm/storage/hsm.py", line 2697, in createStorageDomain
>>>     domVersion)
>>>   File "/usr/share/vdsm/storage/nfsSD.py", line 84, in create
>>>     remotePath, storageType, version)
>>>   File "/usr/share/vdsm/storage/fileSD.py", line 264, in _prepareMetadata
>>>     "create meta file '%s' failed: %s" % (metaFile, str(e)))
>>>
>>> StorageDomainMetadataCreationError: Error creating a storage domain's
>>> metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
>>> d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
>>> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
>>> u'sjcvmstore01', u'sjcstorage01:/vmstore', 1, u'3')
>>> {} failed - stopping task
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping
>>> in state preparing (force False)
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1
>>> aborting True
>>>
>>> Thread-801::INFO::2015-09-22
>>>
19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting:
>>> Task is aborted: "Error creating a storage domain's
>>> metadata" - code 362
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare:
>>> aborted: Error creating a storage domain's metadata
>>>
>>> Thread-801::DEBUG::2015-09-22
>>> 19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0
>>> aborting True
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort:
>>> force False
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
>>> from state preparing -> state aborting
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting:
>>> recover policy none
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
>>> from state aborting -> state failed
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-801::ERROR::2015-09-22
>>> 19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper)
>>> {'status': {'message': 'Error creating a storage
>>> domain\'s metadata: ("create meta file \'outbox\'
>>> failed: [Errno 5] Input/output error",)', 'code': 362}}
>>>
>>> Thread-801::DEBUG::2015-09-22
>>>
19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>> Calling 'StoragePool.disconnectStorageServer' in
>>> bridge with {u'connectionParams': [{u'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>> u'connection': u'sjcstorage01:/vmstore',
u'iqn':
>>> u'', u'user': u'', u'tpgt':
u'1', u'vfs_type':
>>> u'glusterfs', u'password':
'********', u'port':
>>> u''}], u'storagepoolID':
>>> u'00000000-0000-0000-0000-000000000000',
>>> u'domainType': 7}
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving
>>> from state init -> state preparing
>>>
>>> Thread-807::INFO::2015-09-22
>>> 19:12:36,182::logUtils::48::dispatcher::(wrapper)
>>> Run and protect: disconnectStorageServer(domType=7,
>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>> conList=[{u'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>> u'connection': u'sjcstorage01:/vmstore',
u'iqn':
>>> u'', u'user': u'', u'tpgt':
u'1', u'vfs_type':
>>> u'glusterfs', u'password':
'********', u'port':
>>> u''}], options=None)
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd)
>>> /usr/bin/sudo -n /usr/bin/umount -f -l
>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
>>> (cwd None)
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
>>> Trying to enter sampling method
>>> (storage.sdc.refreshStorage)
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
>>> Got in to sampling method
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
>>> Trying to enter sampling method (storage.iscsi.rescan)
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
>>> Got in to sampling method
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,223::iscsi::431::Storage.ISCSI::(rescan)
>>> Performing SCSI scan, this will take up to 30 seconds
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
>>> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd
>>> None)
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,258::misc::743::Storage.SamplingMethod::(__call__)
>>> Returning last result
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,258::misc::733::Storage.SamplingMethod::(__call__)
>>> Trying to enter sampling method (storage.hba.rescan)
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,258::misc::736::Storage.SamplingMethod::(__call__)
>>> Got in to sampling method
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,258::hba::56::Storage.HBA::(rescan)
>>> Starting scan
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,350::hba::62::Storage.HBA::(rescan) Scan
>>> finished
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,350::misc::743::Storage.SamplingMethod::(__call__)
>>> Returning last result
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
>>> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan)
>>> SUCCESS: <err> = ''; <rc> = 0
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,374::utils::661::root::(execCmd)
>>> /sbin/udevadm settle --timeout=5 (cwd None)
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,383::utils::679::root::(execCmd) SUCCESS:
>>> <err> = ''; <rc> = 0
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' got the
>>> operation mutex
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
>>> Operation 'lvm invalidate operation' released the
>>> operation mutex
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' got the
>>> operation mutex
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
>>> Operation 'lvm invalidate operation' released the
>>> operation mutex
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' got the
>>> operation mutex
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
>>> Operation 'lvm invalidate operation' released the
>>> operation mutex
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,386::misc::743::Storage.SamplingMethod::(__call__)
>>> Returning last result
>>>
>>> Thread-807::INFO::2015-09-22
>>> 19:12:36,386::logUtils::51::dispatcher::(wrapper)
>>> Run and protect: disconnectStorageServer, Return
>>> response: {'statuslist': [{'status': 0,
'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished:
>>> {'statuslist': [{'status': 0, 'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving
>>> from state preparing -> state finished
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-807::DEBUG::2015-09-22
>>> 19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0
>>> aborting False
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
>>> Return 'StoragePool.disconnectStorageServer' in
>>> bridge with [{'status': 0, 'id':
>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>>>
>>> Thread-807::DEBUG::2015-09-22
>>>
19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Thread-808::DEBUG::2015-09-22
>>>
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving
>>> from state init -> state preparing
>>>
>>> Thread-808::INFO::2015-09-22
>>> 19:12:37,868::logUtils::48::dispatcher::(wrapper)
>>> Run and protect: repoStats(options=None)
>>>
>>> Thread-808::INFO::2015-09-22
>>> 19:12:37,868::logUtils::51::dispatcher::(wrapper)
>>> Run and protect: repoStats, Return response: {}
>>>
>>> Thread-808::DEBUG::2015-09-22
>>>
19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished:
>>> {}
>>>
>>> Thread-808::DEBUG::2015-09-22
>>>
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving
>>> from state preparing -> state finished
>>>
>>> Thread-808::DEBUG::2015-09-22
>>>
19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-808::DEBUG::2015-09-22
>>>
19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-808::DEBUG::2015-09-22
>>> 19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0
>>> aborting False
>>>
>>> Thread-808::DEBUG::2015-09-22
>>>
19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>> Accepting connection from 127.0.0.1:52512
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>> Using required_size=11
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>> Detected protocol xml from 127.0.0.1:52512
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>> xml over http detected from ('127.0.0.1', 52512)
>>>
>>> BindingXMLRPC::INFO::2015-09-22
>>> 19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>> Starting request handler for 127.0.0.1:52512
>>>
>>> Thread-809::INFO::2015-09-22
>>>
19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52512 started
>>>
>>> Thread-809::INFO::2015-09-22
>>>
19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52512 stopped
>>>
>>> Thread-810::DEBUG::2015-09-22
>>>
19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Thread-811::DEBUG::2015-09-22
>>>
19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving
>>> from state init -> state preparing
>>>
>>> Thread-811::INFO::2015-09-22
>>> 19:12:52,902::logUtils::48::dispatcher::(wrapper)
>>> Run and protect: repoStats(options=None)
>>>
>>> Thread-811::INFO::2015-09-22
>>> 19:12:52,902::logUtils::51::dispatcher::(wrapper)
>>> Run and protect: repoStats, Return response: {}
>>>
>>> Thread-811::DEBUG::2015-09-22
>>>
19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished:
>>> {}
>>>
>>> Thread-811::DEBUG::2015-09-22
>>>
19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving
>>> from state preparing -> state finished
>>>
>>> Thread-811::DEBUG::2015-09-22
>>>
19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-811::DEBUG::2015-09-22
>>>
19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-811::DEBUG::2015-09-22
>>> 19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0
>>> aborting False
>>>
>>> Thread-811::DEBUG::2015-09-22
>>>
19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>> Accepting connection from 127.0.0.1:52513
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>> Using required_size=11
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>> Detected protocol xml from 127.0.0.1:52513
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>> xml over http detected from ('127.0.0.1', 52513)
>>>
>>> BindingXMLRPC::INFO::2015-09-22
>>> 19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>> Starting request handler for 127.0.0.1:52513
>>>
>>> Thread-812::INFO::2015-09-22
>>>
19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52513 started
>>>
>>> Thread-812::INFO::2015-09-22
>>>
19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52513 stopped
>>>
>>> Thread-813::DEBUG::2015-09-22
>>>
19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Thread-814::DEBUG::2015-09-22
>>>
19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving
>>> from state init -> state preparing
>>>
>>> Thread-814::INFO::2015-09-22
>>> 19:13:07,935::logUtils::48::dispatcher::(wrapper)
>>> Run and protect: repoStats(options=None)
>>>
>>> Thread-814::INFO::2015-09-22
>>> 19:13:07,935::logUtils::51::dispatcher::(wrapper)
>>> Run and protect: repoStats, Return response: {}
>>>
>>> Thread-814::DEBUG::2015-09-22
>>>
19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished:
>>> {}
>>>
>>> Thread-814::DEBUG::2015-09-22
>>>
19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving
>>> from state preparing -> state finished
>>>
>>> Thread-814::DEBUG::2015-09-22
>>>
19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>> Owner.releaseAll requests {} resources {}
>>>
>>> Thread-814::DEBUG::2015-09-22
>>>
19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>> Owner.cancelAll requests {}
>>>
>>> Thread-814::DEBUG::2015-09-22
>>> 19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0
>>> aborting False
>>>
>>> Thread-814::DEBUG::2015-09-22
>>>
19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>> Accepting connection from 127.0.0.1:52515
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>> Using required_size=11
>>>
>>> Reactor thread::INFO::2015-09-22
>>>
19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>> Detected protocol xml from 127.0.0.1:52515
>>>
>>> Reactor thread::DEBUG::2015-09-22
>>>
19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>> xml over http detected from ('127.0.0.1', 52515)
>>>
>>> BindingXMLRPC::INFO::2015-09-22
>>> 19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>> Starting request handler for 127.0.0.1:52515
>>>
>>> Thread-815::INFO::2015-09-22
>>>
19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52515 started
>>>
>>> Thread-815::INFO::2015-09-22
>>>
19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>> Request handler for 127.0.0.1:52515 stopped
>>>
>>> Thread-816::DEBUG::2015-09-22
>>>
19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send)
>>> Sending response
>>>
>>>
>>>
>>> gluster logs
>>>
>>>
+------------------------------------------------------------------------------+
>>>
>>> 1: volume vmstore-client-0
>>>
>>> 2: type protocol/client
>>>
>>> 3: option ping-timeout 42
>>>
>>> 4: option remote-host sjcstorage01
>>>
>>> 5: option remote-subvolume
>>> /export/vmstore/brick01
>>>
>>> 6: option transport-type socket
>>>
>>> 7: option send-gids true
>>>
>>> 8: end-volume
>>>
>>> 9:
>>>
>>> 10: volume vmstore-client-1
>>>
>>> 11: type protocol/client
>>>
>>> 12: option ping-timeout 42
>>>
>>> 13: option remote-host sjcstorage02
>>>
>>> 14: option remote-subvolume
>>> /export/vmstore/brick01
>>>
>>> 15: option transport-type socket
>>>
>>> 16: option send-gids true
>>>
>>> 17: end-volume
>>>
>>> 18:
>>>
>>> 19: volume vmstore-client-2
>>>
>>> 20: type protocol/client
>>>
>>> 21: option ping-timeout 42
>>>
>>> 22: option remote-host sjcvhost02
>>>
>>> 23: option remote-subvolume
>>> /export/vmstore/brick01
>>>
>>> 24: option transport-type socket
>>>
>>> 25: option send-gids true
>>>
>>> 26: end-volume
>>>
>>> 27:
>>>
>>> 28: volume vmstore-replicate-0
>>>
>>> 29: type cluster/replicate
>>>
>>> 30: option arbiter-count 1
>>>
>>> 31: subvolumes vmstore-client-0 vmstore-client-1
>>> vmstore-client-2
>>>
>>> 32: end-volume
>>>
>>> 33:
>>>
>>> 34: volume vmstore-dht
>>>
>>> 35: type cluster/distribute
>>>
>>> 36: subvolumes vmstore-replicate-0
>>>
>>> 37: end-volume
>>>
>>> 38:
>>>
>>> 39: volume vmstore-write-behind
>>>
>>> 40: type performance/write-behind
>>>
>>> 41: subvolumes vmstore-dht
>>>
>>> 42: end-volume
>>>
>>> 43:
>>>
>>> 44: volume vmstore-read-ahead
>>>
>>> 45: type performance/read-ahead
>>>
>>> 46: subvolumes vmstore-write-behind
>>>
>>> 47: end-volume
>>>
>>> 48:
>>>
>>> 49: volume vmstore-readdir-ahead
>>>
>>> 50: type performance/readdir-ahead
>>>
>>> 51: subvolumes vmstore-read-ahead
>>>
>>> 52: end-volume
>>>
>>> 53:
>>>
>>> 54: volume vmstore-io-cache
>>>
>>> 55: type performance/io-cache
>>>
>>> 56: subvolumes vmstore-readdir-ahead
>>>
>>> 57: end-volume
>>>
>>> 58:
>>>
>>> 59: volume vmstore-quick-read
>>>
>>> 60: type performance/quick-read
>>>
>>> 61: subvolumes vmstore-io-cache
>>>
>>> 62: end-volume
>>>
>>> 63:
>>>
>>> 64: volume vmstore-open-behind
>>>
>>> 65: type performance/open-behind
>>>
>>> 66: subvolumes vmstore-quick-read
>>>
>>> 67: end-volume
>>>
>>> 68:
>>>
>>> 69: volume vmstore-md-cache
>>>
>>> 70: type performance/md-cache
>>>
>>> 71: subvolumes vmstore-open-behind
>>>
>>> 72: end-volume
>>>
>>> 73:
>>>
>>> 74: volume vmstore
>>>
>>> 75: type debug/io-stats
>>>
>>> 76: option latency-measurement off
>>>
>>> 77: option count-fop-hits off
>>>
>>> 78: subvolumes vmstore-md-cache
>>>
>>> 79: end-volume
>>>
>>> 80:
>>>
>>> 81: volume meta-autoload
>>>
>>> 82: type meta
>>>
>>> 83: subvolumes vmstore
>>>
>>> 84: end-volume
>>>
>>> 85:
>>>
>>>
+------------------------------------------------------------------------------+
>>>
>>> [2015-09-22 05:29:07.586205] I
>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>> 0-vmstore-client-0: changing port to 49153 (from 0)
>>>
>>> [2015-09-22 05:29:07.586325] I
>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>> 0-vmstore-client-1: changing port to 49153 (from 0)
>>>
>>> [2015-09-22 05:29:07.586480] I
>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>> 0-vmstore-client-2: changing port to 49153 (from 0)
>>>
>>> [2015-09-22 05:29:07.595052] I [MSGID: 114057]
>>> [client-handshake.c:1437:select_server_supported_programs]
>>> 0-vmstore-client-0: Using Program GlusterFS 3.3,
>>> Num (1298437), Version (330)
>>>
>>> [2015-09-22 05:29:07.595397] I [MSGID: 114057]
>>> [client-handshake.c:1437:select_server_supported_programs]
>>> 0-vmstore-client-1: Using Program GlusterFS 3.3,
>>> Num (1298437), Version (330)
>>>
>>> [2015-09-22 05:29:07.595576] I [MSGID: 114057]
>>> [client-handshake.c:1437:select_server_supported_programs]
>>> 0-vmstore-client-2: Using Program GlusterFS 3.3,
>>> Num (1298437), Version (330)
>>>
>>> [2015-09-22 05:29:07.595721] I [MSGID: 114046]
>>> [client-handshake.c:1213:client_setvolume_cbk]
>>> 0-vmstore-client-0: Connected to vmstore-client-0,
>>> attached to remote volume '/export/vmstore/brick01'.
>>>
>>> [2015-09-22 05:29:07.595738] I [MSGID: 114047]
>>> [client-handshake.c:1224:client_setvolume_cbk]
>>> 0-vmstore-client-0: Server and Client lk-version
>>> numbers are not same, reopening the fds
>>>
>>> [2015-09-22 05:29:07.596044] I [MSGID: 108005]
>>> [afr-common.c:3998:afr_notify]
>>> 0-vmstore-replicate-0: Subvolume 'vmstore-client-0'
>>> came back up; going online.
>>>
>>> [2015-09-22 05:29:07.596170] I [MSGID: 114046]
>>> [client-handshake.c:1213:client_setvolume_cbk]
>>> 0-vmstore-client-1: Connected to vmstore-client-1,
>>> attached to remote volume '/export/vmstore/brick01'.
>>>
>>> [2015-09-22 05:29:07.596189] I [MSGID: 114047]
>>> [client-handshake.c:1224:client_setvolume_cbk]
>>> 0-vmstore-client-1: Server and Client lk-version
>>> numbers are not same, reopening the fds
>>>
>>> [2015-09-22 05:29:07.596495] I [MSGID: 114046]
>>> [client-handshake.c:1213:client_setvolume_cbk]
>>> 0-vmstore-client-2: Connected to vmstore-client-2,
>>> attached to remote volume '/export/vmstore/brick01'.
>>>
>>> [2015-09-22 05:29:07.596506] I [MSGID: 114047]
>>> [client-handshake.c:1224:client_setvolume_cbk]
>>> 0-vmstore-client-2: Server and Client lk-version
>>> numbers are not same, reopening the fds
>>>
>>> [2015-09-22 05:29:07.608758] I
>>> [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse:
>>> switched to graph 0
>>>
>>> [2015-09-22 05:29:07.608910] I [MSGID: 114035]
>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>> 0-vmstore-client-0: Server lk version = 1
>>>
>>> [2015-09-22 05:29:07.608936] I [MSGID: 114035]
>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>> 0-vmstore-client-1: Server lk version = 1
>>>
>>> [2015-09-22 05:29:07.608950] I [MSGID: 114035]
>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>> 0-vmstore-client-2: Server lk version = 1
>>>
>>> [2015-09-22 05:29:07.609695] I [MSGID: 101190]
>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>> 0-epoll: Started thread with index 2
>>>
>>> [2015-09-22 05:29:07.609868] I
>>> [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse:
>>> FUSE inited with protocol versions: glusterfs 7.22
>>> kernel 7.22
>>>
>>> [2015-09-22 05:29:07.616577] I [MSGID: 109063]
>>> [dht-layout.c:702:dht_layout_normalize]
>>> 0-vmstore-dht: Found anomalies in / (gfid =
>>> 00000000-0000-0000-0000-000000000001). Holes=1
>>> overlaps=0
>>>
>>> [2015-09-22 05:29:07.620230] I [MSGID: 109036]
>>> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
>>> 0-vmstore-dht: Setting layout of / with
>>> [Subvol_name: vmstore-replicate-0, Err: -1 , Start:
>>> 0 , Stop: 4294967295 , Hash: 1 ],
>>>
>>> [2015-09-22 05:29:08.122415] W
>>> [fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse:
>>> 26: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No
>>> data available)
>>>
>>> [2015-09-22 05:29:08.137359] I [MSGID:
>>> 109036]
>>> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
>>> 0-vmstore-dht: Setting layout of
>>> /061b73d5-ae59-462e-b674-ea9c60d436c2 with
>>> [Subvol_name: vmstore-replicate-0, Err: -1 , Start:
>>> 0 , Stop: 4294967295 , Hash: 1 ],
>>>
>>> [2015-09-22 05:29:08.145835] I [MSGID: 109036]
>>> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
>>> 0-vmstore-dht: Setting layout of
>>> /061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
>>> [Subvol_name: vmstore-replicate-0, Err: -1 , Start:
>>> 0 , Stop: 4294967295 , Hash: 1 ],
>>>
>>> [2015-09-22 05:30:57.897819] I [MSGID: 100030]
>>> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs:
>>> Started running /usr/sbin/glusterfs version 3.7.4
>>> (args: /usr/sbin/glusterfs
>>> --volfile-server=sjcvhost02
>>> --volfile-server=sjcstorage01
>>> --volfile-server=sjcstorage02 --volfile-id=/vmstore
>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>>>
>>> [2015-09-22 05:30:57.909889] I [MSGID: 101190]
>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>> 0-epoll: Started thread with index 1
>>>
>>> [2015-09-22 05:30:57.923087] I [MSGID: 114020]
>>> [client.c:2118:notify] 0-vmstore-client-0: parent
>>> translators are ready, attempting connect on transport
>>>
>>> [2015-09-22 05:30:57.925701] I [MSGID: 114020]
>>> [client.c:2118:notify] 0-vmstore-client-1: parent
>>> translators are ready, attempting connect on transport
>>>
>>> [2015-09-22 05:30:57.927984] I [MSGID: 114020]
>>> [client.c:2118:notify] 0-vmstore-client-2: parent
>>> translators are ready, attempting connect on transport
>>>
>>> Final graph:
>>>
>>>
+------------------------------------------------------------------------------+
>>>
>>> 1: volume vmstore-client-0
>>>
>>> 2: type protocol/client
>>>
>>> 3: option ping-timeout 42
>>>
>>> 4: option remote-host sjcstorage01
>>>
>>> 5: option remote-subvolume
>>> /export/vmstore/brick01
>>>
>>> 6: option transport-type socket
>>>
>>> 7: option send-gids true
>>>
>>> 8: end-volume
>>>
>>> 9:
>>>
>>> 10: volume vmstore-client-1
>>>
>>> 11: type protocol/client
>>>
>>> 12: option ping-timeout 42
>>>
>>> 13: option remote-host sjcstorage02
>>>
>>> 14: option remote-subvolume
>>> /export/vmstore/brick01
>>>
>>> 15: option transport-type socket
>>>
>>> 16: option send-gids true
>>>
>>> 17: end-volume
>>>
>>> 18:
>>>
>>> 19: volume vmstore-client-2
>>>
>>> 20: type protocol/client
>>>
>>> 21: option ping-timeout 42
>>>
>>> 22: option remote-host sjcvhost02
>>>
>>> 23: option remote-subvolume
>>> /export/vmstore/brick01
>>>
>>> 24: option transport-type socket
>>>
>>> 25: option send-gids true
>>>
>>> 26: end-volume
>>>
>>> 27:
>>>
>>> 28: volume vmstore-replicate-0
>>>
>>> 29: type cluster/replicate
>>>
>>> 30: option arbiter-count 1
>>>
>>> 31: subvolumes vmstore-client-0 vmstore-client-1
>>> vmstore-client-2
>>>
>>> 32: end-volume
>>>
>>> 33:
>>>
>>> 34: volume vmstore-dht
>>>
>>> 35: type cluster/distribute
>>>
>>> 36: subvolumes vmstore-replicate-0
>>>
>>> 37: end-volume
>>>
>>> 38:
>>>
>>> 39: volume vmstore-write-behind
>>>
>>> 40: type performance/write-behind
>>>
>>> 41: subvolumes vmstore-dht
>>>
>>> 42: end-volume
>>>
>>> 43:
>>>
>>> 44: volume vmstore-read-ahead
>>>
>>> 45: type performance/read-ahead
>>>
>>> 46: subvolumes vmstore-write-behind
>>>
>>> 47: end-volume
>>>
>>> 48:
>>>
>>> 49: volume vmstore-readdir-ahead
>>>
>>> 50: type performance/readdir-ahead
>>>
>>> 51: subvolumes vmstore-read-ahead
>>>
>>> 52: end-volume
>>>
>>> 53:
>>>
>>> 54: volume vmstore-io-cache
>>>
>>> 55: type performance/io-cache
>>>
>>> 56: subvolumes vmstore-readdir-ahead
>>>
>>> 57: end-volume
>>>
>>> 58:
>>>
>>> 59: volume vmstore-quick-read
>>>
>>> 60: type performance/quick-read
>>>
>>> 61: subvolumes vmstore-io-cache
>>>
>>> 62: end-volume
>>>
>>> 63:
>>>
>>> 64: volume vmstore-open-behind
>>>
>>> 65: type performance/open-behind
>>>
>>> 66: subvolumes vmstore-quick-read
>>>
>>> 67: end-volume
>>>
>>> 68:
>>>
>>> 69: volume vmstore-md-cache
>>>
>>> 70: type performance/md-cache
>>>
>>> 71: subvolumes vmstore-open-behind
>>>
>>> 72: end-volume
>>>
>>> 73:
>>>
>>> 74: volume vmstore
>>>
>>> 75: type debug/io-stats
>>>
>>> 76: option latency-measurement off
>>>
>>> 77: option count-fop-hits off
>>>
>>> 78: subvolumes vmstore-md-cache
>>>
>>> 79: end-volume
>>>
>>> 80:
>>>
>>> 81: volume meta-autoload
>>>
>>> 82: type meta
>>>
>>> 83: subvolumes vmstore
>>>
>>> 84: end-volume
>>>
>>> 85:
>>>
>>>
+------------------------------------------------------------------------------+
>>>
>>> [2015-09-22 05:30:57.934021] I
>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>> 0-vmstore-client-0: changing port to 49153 (from 0)
>>>
>>> [2015-09-22 05:30:57.934145] I
>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>> 0-vmstore-client-1: changing port to 49153 (from 0)
>>>
>>> [2015-09-22 05:30:57.934491] I
>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>> 0-vmstore-client-2: changing port to 49153 (from 0)
>>>
>>> [2015-09-22 05:30:57.942198] I [MSGID: 114057]
>>> [client-handshake.c:1437:select_server_supported_programs]
>>> 0-vmstore-client-0: Using Program GlusterFS 3.3,
>>> Num (1298437), Version (330)
>>>
>>> [2015-09-22 05:30:57.942545] I [MSGID: 114057]
>>> [client-handshake.c:1437:select_server_supported_programs]
>>> 0-vmstore-client-1: Using Program GlusterFS 3.3,
>>> Num (1298437), Version (330)
>>>
>>> [2015-09-22 05:30:57.942659] I [MSGID: 114057]
>>> [client-handshake.c:1437:select_server_supported_programs]
>>> 0-vmstore-client-2: Using Program GlusterFS 3.3,
>>> Num (1298437), Version (330)
>>>
>>> [2015-09-22 05:30:57.942797] I [MSGID: 114046]
>>> [client-handshake.c:1213:client_setvolume_cbk]
>>> 0-vmstore-client-0: Connected to vmstore-client-0,
>>> attached to remote volume '/export/vmstore/brick01'.
>>>
>>> [2015-09-22 05:30:57.942808] I [MSGID: 114047]
>>> [client-handshake.c:1224:client_setvolume_cbk]
>>> 0-vmstore-client-0: Server and Client lk-version
>>> numbers are not same, reopening the fds
>>>
>>> [2015-09-22 05:30:57.943036] I [MSGID: 108005]
>>> [afr-common.c:3998:afr_notify]
>>> 0-vmstore-replicate-0: Subvolume 'vmstore-client-0'
>>> came back up; going online.
>>>
>>> [2015-09-22 05:30:57.943078] I [MSGID: 114046]
>>> [client-handshake.c:1213:client_setvolume_cbk]
>>> 0-vmstore-client-1: Connected to vmstore-client-1,
>>> attached to remote volume '/export/vmstore/brick01'.
>>>
>>> [2015-09-22 05:30:57.943086] I [MSGID: 114047]
>>> [client-handshake.c:1224:client_setvolume_cbk]
>>> 0-vmstore-client-1: Server and Client lk-version
>>> numbers are not same, reopening the fds
>>>
>>> [2015-09-22 05:30:57.943292] I [MSGID: 114046]
>>> [client-handshake.c:1213:client_setvolume_cbk]
>>> 0-vmstore-client-2: Connected to vmstore-client-2,
>>> attached to remote volume '/export/vmstore/brick01'.
>>>
>>> [2015-09-22 05:30:57.943302] I [MSGID: 114047]
>>> [client-handshake.c:1224:client_setvolume_cbk]
>>> 0-vmstore-client-2: Server and Client lk-version
>>> numbers are not same, reopening the fds
>>>
>>> [2015-09-22 05:30:57.953887] I
>>> [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse:
>>> switched to graph 0
>>>
>>> [2015-09-22 05:30:57.954071] I [MSGID: 114035]
>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>> 0-vmstore-client-0: Server lk version = 1
>>>
>>> [2015-09-22 05:30:57.954105] I [MSGID: 114035]
>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>> 0-vmstore-client-1: Server lk version = 1
>>>
>>> [2015-09-22 05:30:57.954124] I [MSGID: 114035]
>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>> 0-vmstore-client-2: Server lk version = 1
>>>
>>> [2015-09-22 05:30:57.955282] I
>>> [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse:
>>> FUSE inited with protocol versions: glusterfs 7.22
>>> kernel 7.22
>>>
>>> [2015-09-22 05:30:57.955738] I [MSGID: 101190]
>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>> 0-epoll: Started thread with index 2
>>>
>>> [2015-09-22 05:30:57.970232] I
>>> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse:
>>> unmounting
>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>>>
>>> [2015-09-22 05:30:57.970834] W
>>> [glusterfsd.c:1219:cleanup_and_exit]
>>> (-->/lib64/libpthread.so.0(+0x7df5)
>>> [0x7f187139fdf5]
>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>>> [0x7f1872a09785]
>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
>>> [0x7f1872a09609] ) 0-: received signum (15),
>>> shutting down
>>>
>>> [2015-09-22 05:30:57.970848] I
>>> [fuse-bridge.c:5595:fini] 0-fuse: Unmounting
>>>
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>>>
>>> [2015-09-22 05:30:58.420973] I
>>> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse:
>>> unmounting
>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>>>
>>> [2015-09-22 05:30:58.421355] W
>>> [glusterfsd.c:1219:cleanup_and_exit]
>>> (-->/lib64/libpthread.so.0(+0x7df5)
>>> [0x7f8267cd4df5]
>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>>> [0x7f826933e785]
>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
>>> [0x7f826933e609] ) 0-: received signum (15),
>>> shutting down
>>>
>>> [2015-09-22 05:30:58.421369] I
>>> [fuse-bridge.c:5595:fini] 0-fuse: Unmounting
>>>
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>>>
>>> [2015-09-22 05:31:09.534410] I [MSGID: 100030]
>>> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs:
>>> Started running /usr/sbin/glusterfs version 3.7.4
>>> (args: /usr/sbin/glusterfs
>>> --volfile-server=sjcvhost02
>>> --volfile-server=sjcstorage01
>>> --volfile-server=sjcstorage02 --volfile-id=/vmstore
>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>>>
>>> [2015-09-22 05:31:09.545686] I [MSGID: 101190]
>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>> 0-epoll: Started thread with index 1
>>>
>>> [2015-09-22 05:31:09.553019] I [MSGID: 114020]
>>> [client.c:2118:notify] 0-vmstore-client-0: parent
>>> translators are ready, attempting connect on transport
>>>
>>> [2015-09-22 05:31:09.555552] I [MSGID: 114020]
>>> [client.c:2118:notify] 0-vmstore-client-1: parent
>>> translators are ready, attempting connect on transport
>>>
>>> [2015-09-22 05:31:09.557989] I [MSGID: 114020]
>>> [client.c:2118:notify] 0-vmstore-client-2: parent
>>> translators are ready, attempting connect on transport
>>>
>>> Final graph:
>>>
>>>
+------------------------------------------------------------------------------+
>>>
>>> 1: volume vmstore-client-0
>>>
>>> 2: type protocol/client
>>>
>>> 3: option ping-timeout 42
>>>
>>> 4: option remote-host sjcstorage01
>>>
>>> 5: option remote-subvolume
>>> /export/vmstore/brick01
>>>
>>> 6: option transport-type socket
>>>
>>> 7: option send-gids true
>>>
>>> 8: end-volume
>>>
>>> 9:
>>>
>>> 10: volume vmstore-client-1
>>>
>>> 11: type protocol/client
>>>
>>> 12: option ping-timeout 42
>>>
>>> 13: option remote-host sjcstorage02
>>>
>>> 14: option remote-subvolume
>>> /export/vmstore/brick01
>>>
>>> 15: option transport-type socket
>>>
>>> 16: option send-gids true
>>>
>>> 17: end-volume
>>>
>>> 18:
>>>
>>> 19: volume vmstore-client-2
>>>
>>> 20: type protocol/client
>>>
>>> 21: option ping-timeout 42
>>>
>>> 22: option remote-host sjcvhost02
>>>
>>> 23: option remote-subvolume
>>> /export/vmstore/brick01
>>>
>>> 24: option transport-type socket
>>>
>>> 25: option send-gids true
>>>
>>> 26: end-volume
>>>
>>> 27:
>>>
>>> 28: volume vmstore-replicate-0
>>>
>>> 29: type cluster/replicate
>>>
>>> 30: option arbiter-count 1
>>>
>>> 31: subvolumes vmstore-client-0 vmstore-client-1
>>> vmstore-client-2
>>>
>>> 32: end-volume
>>>
>>> 33:
>>>
>>> 34: volume vmstore-dht
>>>
>>> 35: type cluster/distribute
>>>
>>> 36: subvolumes vmstore-replicate-0
>>>
>>> 37: end-volume
>>>
>>> 38:
>>>
>>> 39: volume vmstore-write-behind
>>>
>>> 40: type performance/write-behind
>>>
>>> 41: subvolumes vmstore-dht
>>>
>>> 42: end-volume
>>>
>>> 43:
>>>
>>> 44: volume vmstore-read-ahead
>>>
>>> 45: type performance/read-ahead
>>>
>>> 46: subvolumes vmstore-write-behind
>>>
>>> 47: end-volume
>>>
>>> 48:
>>>
>>> 49: volume vmstore-readdir-ahead
>>>
>>> 50: type performance/readdir-ahead
>>>
>>> 51: subvolumes vmstore-read-ahead
>>>
>>> 52: end-volume
>>>
>>> 53:
>>>
>>> 54: volume vmstore-io-cache
>>>
>>> 55: type performance/io-cache
>>>
>>> 56: subvolumes vmstore-readdir-ahead
>>>
>>> 57: end-volume
>>>
>>> 58:
>>>
>>> 59: volume vmstore-quick-read
>>>
>>> 60: type performance/quick-read
>>>
>>> 61: subvolumes vmstore-io-cache
>>>
>>> 62: end-volume
>>>
>>> 63:
>>>
>>> 64: volume vmstore-open-behind
>>>
>>> 65: type performance/open-behind
>>>
>>> 66: subvolumes vmstore-quick-read
>>>
>>> 67: end-volume
>>>
>>> 68:
>>>
>>> 69: volume vmstore-md-cache
>>>
>>> 70: type performance/md-cache
>>>
>>> 71: subvolumes vmstore-open-behind
>>>
>>> 72: end-volume
>>>
>>> 73:
>>>
>>> 74: volume vmstore
>>>
>>> 75: type debug/io-stats
>>>
>>> 76: option latency-measurement off
>>>
>>> 77: option count-fop-hits off
>>>
>>> 78: subvolumes vmstore-md-cache
>>>
>>> 79: end-volume
>>>
>>> 80:
>>>
>>> 81: volume meta-autoload
>>>
>>> 82: type meta
>>>
>>> 83: subvolumes vmstore
>>>
>>> 84: end-volume
>>>
>>> 85:
>>>
>>>
+------------------------------------------------------------------------------+
>>>
>>> [2015-09-22 05:31:09.563262] I
>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>> 0-vmstore-client-0: changing port to 49153 (from 0)
>>>
>>> [2015-09-22 05:31:09.563431] I
>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>> 0-vmstore-client-1: changing port to 49153 (from 0)
>>>
>>> [2015-09-22 05:31:09.563877] I
>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>> 0-vmstore-client-2: changing port to 49153 (from 0)
>>>
>>> [2015-09-22 05:31:09.572443] I [MSGID: 114057]
>>> [client-handshake.c:1437:select_server_supported_programs]
>>> 0-vmstore-client-1: Using Program GlusterFS 3.3,
>>> Num (1298437), Version (330)
>>>
>>> [2015-09-22 05:31:09.572599] I [MSGID: 114057]
>>> [client-handshake.c:1437:select_server_supported_programs]
>>> 0-vmstore-client-0: Using Program GlusterFS 3.3,
>>> Num (1298437), Version (330)
>>>
>>> [2015-09-22 05:31:09.572742] I [MSGID: 114057]
>>> [client-handshake.c:1437:select_server_supported_programs]
>>> 0-vmstore-client-2: Using Program GlusterFS 3.3,
>>> Num (1298437), Version (330)
>>>
>>> [2015-09-22 05:31:09.573165] I [MSGID: 114046]
>>> [client-handshake.c:1213:client_setvolume_cbk]
>>> 0-vmstore-client-1: Connected to vmstore-client-1,
>>> attached to remote volume '/export/vmstore/brick01'.
>>>
>>> [2015-09-22 05:31:09.573186] I [MSGID: 114047]
>>> [client-handshake.c:1224:client_setvolume_cbk]
>>> 0-vmstore-client-1: Server and Client lk-version
>>> numbers are not same, reopening the fds
>>>
>>> [2015-09-22 05:31:09.573395] I [MSGID: 108005]
>>> [afr-common.c:3998:afr_notify]
>>> 0-vmstore-replicate-0: Subvolume 'vmstore-client-1'
>>> came back up; going online.
>>>
>>> [2015-09-22 05:31:09.573427] I [MSGID: 114046]
>>> [client-handshake.c:1213:client_setvolume_cbk]
>>> 0-vmstore-client-0: Connected to vmstore-client-0,
>>> attached to remote volume '/export/vmstore/brick01'.
>>>
>>> [2015-09-22 05:31:09.573435] I [MSGID: 114047]
>>> [client-handshake.c:1224:client_setvolume_cbk]
>>> 0-vmstore-client-0: Server and Client lk-version
>>> numbers are not same, reopening the fds
>>>
>>> [2015-09-22 05:31:09.573754] I [MSGID: 114046]
>>> [client-handshake.c:1213:client_setvolume_cbk]
>>> 0-vmstore-client-2: Connected to vmstore-client-2,
>>> attached to remote volume '/export/vmstore/brick01'.
>>>
>>> [2015-09-22 05:31:09.573783] I [MSGID: 114047]
>>> [client-handshake.c:1224:client_setvolume_cbk]
>>> 0-vmstore-client-2: Server and Client lk-version
>>> numbers are not same, reopening the fds
>>>
>>> [2015-09-22 05:31:09.577192] I
>>> [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse:
>>> switched to graph 0
>>>
>>> [2015-09-22 05:31:09.577302] I [MSGID: 114035]
>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>> 0-vmstore-client-1: Server lk version = 1
>>>
>>> [2015-09-22 05:31:09.577325] I [MSGID: 114035]
>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>> 0-vmstore-client-0: Server lk version = 1
>>>
>>> [2015-09-22 05:31:09.577339] I [MSGID: 114035]
>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>> 0-vmstore-client-2: Server lk version = 1
>>>
>>> [2015-09-22 05:31:09.578125] I
>>> [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse:
>>> FUSE inited with protocol versions: glusterfs 7.22
>>> kernel 7.22
>>>
>>> [2015-09-22 05:31:09.578636] I [MSGID: 101190]
>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>> 0-epoll: Started thread with index 2
>>>
>>> [2015-09-22 05:31:10.073698] I
>>> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse:
>>> unmounting
>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>>>
>>> [2015-09-22 05:31:10.073977] W
>>> [glusterfsd.c:1219:cleanup_and_exit]
>>> (-->/lib64/libpthread.so.0(+0x7df5)
>>> [0x7f6b9ba88df5]
>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>>> [0x7f6b9d0f2785]
>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
>>> [0x7f6b9d0f2609] ) 0-: received signum (15),
>>> shutting down
>>>
>>> [2015-09-22 05:31:10.073993] I
>>> [fuse-bridge.c:5595:fini] 0-fuse: Unmounting
>>>
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>>>
>>> [2015-09-22 05:31:20.184700] I [MSGID: 100030]
>>> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs:
>>> Started running /usr/sbin/glusterfs version 3.7.4
>>> (args: /usr/sbin/glusterfs
>>> --volfile-server=sjcvhost02
>>> --volfile-server=sjcstorage01
>>> --volfile-server=sjcstorage02 --volfile-id=/vmstore
>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>>>
>>> [2015-09-22 05:31:20.194928] I [MSGID: 101190]
>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>> 0-epoll: Started thread with index 1
>>>
>>> [2015-09-22 05:31:20.200701] I [MSGID: 114020]
>>> [client.c:2118:notify] 0-vmstore-client-0: parent
>>> translators are ready, attempting connect on transport
>>>
>>> [2015-09-22 05:31:20.203110] I [MSGID: 114020]
>>> [client.c:2118:notify] 0-vmstore-client-1: parent
>>> translators are ready, attempting connect on transport
>>>
>>> [2015-09-22 05:31:20.205708] I [MSGID: 114020]
>>> [client.c:2118:notify] 0-vmstore-client-2: parent
>>> translators are ready, attempting connect on transport
>>>
>>> Final graph:
>>>
>>>
>>>
>>> Hope this helps.
>>>
>>>
>>> thanks again
>>>
>>>
>>> Brett Stevens
>>>
>>>
>>>
>>> On Tue, Sep 22, 2015 at 10:14 PM, Sahina Bose
>>> <sabose(a)redhat.com> wrote:
>>>
>>>
>>>
>>> On 09/22/2015 02:17 PM, Brett Stevens wrote:
>>>> Hi. First time on the lists. I've searched for
>>>> this with no luck, so sorry if this has been
>>>> covered before.
>>>>
>>>> I'm working with the latest 3.6 beta with the
>>>> following infrastructure:
>>>>
>>>> 1 management host (to be used for a number of
>>>> tasks, so we chose not to use self-hosted; we
>>>> are a school and need to keep an eye on
>>>> hardware costs)
>>>> 2 compute nodes
>>>> 2 gluster nodes
>>>>
>>>> So far I've built one gluster volume using the
>>>> gluster CLI, giving me two data nodes and one
>>>> arbiter node (the management host).
>>>>
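>>>> For reference, an arbiter volume with these
>>>> bricks (the paths appear in the engine
>>>> warnings below) would be created with
>>>> something along these lines:
>>>>
>>>> gluster volume create vmstore replica 3 arbiter 1 \
>>>>   sjcstorage01:/export/vmstore/brick01 \
>>>>   sjcstorage02:/export/vmstore/brick01 \
>>>>   sjcvhost02:/export/vmstore/brick01
>>>>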
>>>> So far, every time I create a volume, it shows
>>>> up straight away in the oVirt GUI. However, no
>>>> matter what I try, I cannot create or import it
>>>> as a data domain.
>>>>
>>>> The current error in the oVirt GUI is "Error
>>>> while executing action
>>>> AddGlusterFsStorageDomain: Error creating a
>>>> storage domain's metadata"
>>>
>>> Please provide vdsm and gluster logs
>>>
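>>> (Typically /var/log/vdsm/vdsm.log on the host
>>> that attempted the create, the fuse mount log
>>> under /var/log/glusterfs/ on that host, and
>>> the brick logs under /var/log/glusterfs/bricks/
>>> on the storage nodes.)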
>>>>
>>>> The logs continuously roll the following
>>>> errors:
>>>>
>>>> Scheduler_Worker-53) [] START,
>>>> GlusterVolumesListVDSCommand(HostName =
>>>> sjcstorage02,
>>>>
GlusterVolumesListVDSParameters:{runAsync='true',
>>>>
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
>>>> log id: 24198fbf
>>>>
>>>> 2015-09-22 03:57:29,903 WARN
>>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-53) [] Could
>>>> not associate brick
>>>> 'sjcstorage01:/export/vmstore/brick01' of
>>>> volume
'878a316d-2394-4aae-bdf8-e10eea38225e'
>>>> with correct network as no gluster network
>>>> found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>
>>> What is the hostname provided in oVirt engine
>>> for sjcstorage01? Does this host have multiple
>>> NICs?
>>>
>>> Could you provide the output of gluster volume
>>> info? Please note that these errors are not
>>> related to the error in creating the storage
>>> domain; however, they could prevent you from
>>> monitoring the state of the gluster volume
>>> from oVirt.
>>>
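>>> For example, from any of the storage nodes
>>> (assuming the volume is named vmstore, as in
>>> the logs below):
>>>
>>> gluster volume info vmstore
>>> gluster peer status
>>>
>>> The peer UUIDs reported by the second command
>>> can be checked against the 'server uuid ... not
>>> found in cluster' warnings below.
>>>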
>>>> 2015-09-22 03:57:29,905 WARN
>>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-53) [] Could
>>>> not associate brick
>>>> 'sjcstorage02:/export/vmstore/brick01' of
>>>> volume
'878a316d-2394-4aae-bdf8-e10eea38225e'
>>>> with correct network as no gluster network
>>>> found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 03:57:29,905 WARN
>>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-53) [] Could
>>>> not add brick
>>>> 'sjcvhost02:/export/vmstore/brick01' to
volume
>>>> '878a316d-2394-4aae-bdf8-e10eea38225e' -
>>>> server uuid
>>>> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not
>>>> found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 03:57:29,905 INFO
>>>>
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (DefaultQuartzScheduler_Worker-53) [] FINISH,
>>>> GlusterVolumesListVDSCommand, return:
>>>>
{878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
>>>> log id: 24198fbf
>>>>
>>>>
>>>> I'm new to ovirt and gluster, so any help
>>>> would be great
>>>>
>>>>
>>>> thanks
>>>>
>>>>
>>>> Brett Stevens
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>>
http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>>
>
>
--------------070407060509090702050906
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit
<html>
<head>
<meta content="text/html; charset=utf-8"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<br>
Hi Brett,<br>
Can you truncate the gluster brick and mount logs on all three
nodes, try creating the storage domain again and then share these
logs along with the VDSM logs? <br>
<br>
i.e. on all 3 nodes, <br>
1. echo >
/var/log/glusterfs/<span>rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log</span><br>
2. echo > export-vmstore-brick01.log<br>
3. Create the storage domain (at which point VDSM supposedly fails
with the truncate error)<br>
4. Share the logs.<br>
<br>
Also, what timezone are you in? That would be needed to co-relate
the timestamps in the vdsm log (local time) and gluster log (UTC)<br>
<br>
Thanks!<br>
Ravi<br>
<blockquote cite="mid:560A221C.5030204@redhat.com"
type="cite">
<div class="moz-forward-container"> <br>
-------- Forwarded Message --------
<table class="moz-email-headers-table" border="0"
cellpadding="0" cellspacing="0">
<tbody>
<tr>
<th nowrap="nowrap" valign="BASELINE"
align="RIGHT">Subject:
</th>
<td>Re: [ovirt-users] adding gluster domains</td>
</tr>
<tr>
<th nowrap="nowrap" valign="BASELINE"
align="RIGHT">Date:
</th>
<td>Tue, 29 Sep 2015 08:38:49 +1000</td>
</tr>
<tr>
<th nowrap="nowrap" valign="BASELINE"
align="RIGHT">From:
</th>
<td>Brett Stevens <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:gorttman@i3sec.com.au"><gorttman@i3sec.com.au></a></td>
</tr>
<tr>
<th nowrap="nowrap" valign="BASELINE"
align="RIGHT">Reply-To:
</th>
<td><a moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:brett@i3sec.com.au">brett@i3sec.com.au</a></td>
</tr>
<tr>
<th nowrap="nowrap" valign="BASELINE"
align="RIGHT">To: </th>
<td>Sahina Bose <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:sabose@redhat.com"><sabose@redhat.com></a></td>
</tr>
</tbody>
</table>
<br>
<br>
<div dir="ltr">Sorry about the delay, I've run the truncate.
I'm
not sure what results you were expecting, but it executed
fine, no delays no errors no problems.
<div><br>
</div>
<div>thanks</div>
<div>Brett Stevens</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Sep 24, 2015 at 7:29 PM,
Brett Stevens <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:gorttman@i3sec.com.au"
target="_blank">gorttman(a)i3sec.com.au</a>&gt;</span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Thanks I'll do that tomorrow morning.
<div><br>
</div>
<div>Just out of interest, I keep getting warn errors in
the engine.log allong the lines of node not present
(sjcvhost02 which is the arbiter) and no gluster
network present even after I have added the gluster
network option in the network management gui.</div>
<div><br>
</div>
<div>thanks</div>
<div><br>
</div>
<div>Brett Stevens</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Sep 24, 2015 at 7:26
PM, Sahina Bose <span dir="ltr"><<a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:sabose@redhat.com"><a
class="moz-txt-link-abbreviated"
href="mailto:sabose@redhat.com">sabose@redhat.com</a></a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
Sorry, I
intended to forward it to a gluster devel.<br>
<br>
Btw, there were no errors in the mount log - so
unable to root cause why truncate of file failed
with IO error. Was the log from vhost03 -
/var/log/glusterfs/<span>rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
?<br>
<br>
We will look into the logs you attached to see
if there are any errors reported at the bricks.
(But there should have been some error in mount
log!)<br>
<br>
Could you also try "truncate -s 10M test" from
the mount point ( manually mount gluster using -
#mount -t glusterfs
</span><span><span>sjcstorage01:/vmstore
<mountpoint>) and report
results.</span><br>
</span><br>
<div>On 09/24/2015 02:32 PM, Brett Stevens wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi Sahina.
<div><br>
</div>
<div>Something has gone wrong with your last
email. I have received a message from you,
but did not get any text to go with it.
could you resend please?</div>
<div><br>
</div>
<div>thanks</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Sep 24, 2015
at 6:48 PM, Sahina Bose <span
dir="ltr"><<a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:sabose@redhat.com"><a
class="moz-txt-link-abbreviated"
href="mailto:sabose@redhat.com">sabose@redhat.com</a></a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">
<div text="#000000"
bgcolor="#FFFFFF"> <br>
<br>
<div>On 09/24/2015 04:21 AM, Brett
Stevens wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi Sahina.
<div><br>
</div>
<div>vhost02 is the engine node
vhost03 is the hypervisor
storage01 and 02 the gluster
nodes. I've put arbiter on vhost02</div>
<div><br>
</div>
<div>all tasks are separated (except
engine and arbiter) </div>
<div><br>
</div>
<div>thanks</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Wed, Sep
23, 2015 at 9:48 PM, Sahina Bose <span
dir="ltr"><<a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:sabose@redhat.com"><a
class="moz-txt-link-abbreviated"
href="mailto:sabose@redhat.com">sabose@redhat.com</a></a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div text="#000000"
bgcolor="#FFFFFF"> +
ovirt-users<br>
<br>
Some clarity on your setup - <br>
<span>sjcvhost03 - is this
your arbiter node and ovirt
management node? And are you
running a compute + storage
on the same nodes - i.e,
</span><span>sjcstorage01,
</span><span>sjcstorage02,
</span><span>sjcvhost03
(arbiter).<br>
<br>
</span><br>
<span>
CreateStorageDomainVDSCommand(HostName
= sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}),
log id: b9fe587<br>
<br>
- fails with </span><span>Error
creating a storage domain's
metadata: ("create meta file
'outbox' failed: [Errno 5]
Input/output error",<br>
<br>
Are the vdsm logs you
provided from
</span><span>sjcvhost03?
There are no errors to be
seen in the gluster log you
provided. Could you provide
mount log from
</span><span><span>sjcvhost03</span>
(at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log
most likely)<br>
If possible,
/var/log/glusterfs/* from
the 3 storage nodes.<br>
<br>
thanks<br>
sahina<br>
<br>
</span>
<div>On 09/23/2015 05:02 AM,
Brett Stevens wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi Sahina,
<div><br>
</div>
<div>as requested here is
some logs taken during a
domain create.</div>
<div><br>
</div>
<div>
<p><span>2015-09-22
18:46:44,320 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88)
[] START,
GlusterVolumesListVDSCommand(HostName
= sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
log id:
2205ff1</span></p>
<p><span>2015-09-22
18:46:44,413 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88)
[] Could not
associate brick
'sjcstorage01:/export/vmstore/brick01'
of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9'
with correct network
as no gluster
network found in
cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22
18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88)
[] Could not
associate brick
'sjcstorage02:/export/vmstore/brick01'
of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9'
with correct network
as no gluster
network found in
cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22
18:46:44,417 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88)
[] Could not add
brick
'sjcvhost02:/export/vmstore/brick01'
to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9'
- server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba'
not found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22
18:46:44,418 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88)
[] FINISH,
GlusterVolumesListVDSCommand,
return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
log id:
2205ff1</span></p>
<p><span>2015-09-22
18:46:45,215 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24)
[5099cda3] Lock
Acquired to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p><span>2015-09-22
18:46:45,230 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24)
[5099cda3] Running
command:
AddStorageServerConnectionCommand
internal: false.
Entities affected :
ID:
aaa00000-0000-0000-0000-123456789aaa
Type: SystemAction
group
CREATE_STORAGE_DOMAIN
with role type
ADMIN</span></p>
<p><span>2015-09-22
18:46:45,233 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24)
[5099cda3] START,
ConnectStorageServerVDSCommand(HostName
= sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='null',
connection='sjcstorage01:/vmstore',
iqn='null',
vfsType='glusterfs',
mountOptions='null',
nfsVersion='null',
nfsRetrans='null',
nfsTimeo='null',
iface='null',
netIfaceName='null'}]'}),
log id:
6a112292</span></p>
<p><span>2015-09-22
18:46:48,065 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24)
[5099cda3] FINISH,
ConnectStorageServerVDSCommand,
return:
{00000000-0000-0000-0000-000000000000=0},
log id:
6a112292</span></p>
<p><span>2015-09-22
18:46:48,073 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24)
[5099cda3] Lock
freed to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p><span>2015-09-22
18:46:48,188 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23)
[6410419] Running
command:
AddGlusterFsStorageDomainCommand
internal: false.
Entities affected :
ID:
aaa00000-0000-0000-0000-123456789aaa
Type: SystemAction
group
CREATE_STORAGE_DOMAIN
with role type
ADMIN</span></p>
<p><span>2015-09-22
18:46:48,206 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23)
[6410419] START,
ConnectStorageServerVDSCommand(HostName
= sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore',
iqn='null',
vfsType='glusterfs',
mountOptions='null',
nfsVersion='null',
nfsRetrans='null',
nfsTimeo='null',
iface='null',
netIfaceName='null'}]'}),
log id:
38a2b0d</span></p>
<p><span>2015-09-22
18:46:48,219 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23)
[6410419] FINISH,
ConnectStorageServerVDSCommand,
return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0},
log id:
38a2b0d</span></p>
<p><span>2015-09-22
18:46:48,221 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23)
[6410419] START,
CreateStorageDomainVDSCommand(HostName
= sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}),
log id:
b9fe587</span></p>
<p><span>2015-09-22
18:46:48,744 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23)
[6410419]
Correlation ID:
null, Call Stack:
null, Custom Event
ID: -1, Message:
VDSM sjcvhost03
command failed:
Error creating a
storage domain's
metadata: ("create
meta file 'outbox'
failed: [Errno 5]
Input/output
error",)</span></p>
<p><span>2015-09-22
18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23)
[6410419] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
return value
'StatusOnlyReturnForXmlRpc
[status=StatusForXmlRpc
[code=362,
message=Error
creating a storage
domain's metadata:
("create meta file
'outbox' failed:
[Errno 5]
Input/output
error",)]]'</span></p>
<p><span>2015-09-22
18:46:48,744 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23)
[6410419] HostName =
sjcvhost03</span></p>
<p><span>2015-09-22
18:46:48,745 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23)
[6410419] Command
'CreateStorageDomainVDSCommand(HostName
= sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'})'
execution failed:
VDSGenericException:
VDSErrorException:
Failed in vdscommand
to
CreateStorageDomainVDS,
error = Error
creating a storage
domain's metadata:
("create meta file
'outbox' failed:
[Errno 5]
Input/output
error",)</span></p>
<p><span>2015-09-22
18:46:48,745 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23)
[6410419] FINISH,
CreateStorageDomainVDSCommand,
log id:
b9fe587</span></p>
<p><span>2015-09-22
18:46:48,745 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23)
[6410419] Command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
failed:
EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException:
VDSErrorException:
Failed in vdscommand
to
CreateStorageDomainVDS,
error = Error
creating a storage
domain's metadata:
("create meta file
'outbox' failed:
[Errno 5]
Input/output
error",) (Failed
with error
StorageDomainMetadataCreationError
and code 362)</span></p>
<p><span>2015-09-22
18:46:48,755 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23)
[6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]:
Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
snapshot:
597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p><span>2015-09-22
18:46:48,758 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23)
[6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]:
Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
snapshot:
597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p><span>2015-09-22
18:46:48,769 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23)
[6410419]
Transaction
rolled-back for
command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.</span></p>
<p><span>2015-09-22
18:46:48,784 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23)
[6410419]
Correlation ID:
6410419, Job ID:
78692780-a06f-49a5-b6b1-e6c24a820d62,
Call Stack: null,
Custom Event ID: -1,
Message: Failed to
add Storage Domain
sjcvmstore. (User:
admin@internal)</span></p>
<p><span>2015-09-22
18:46:48,996 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32)
[1635a244] Lock
Acquired to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p><span>2015-09-22
18:46:49,018 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32)
[1635a244] Running
command:
RemoveStorageServerConnectionCommand
internal: false.
Entities affected :
ID:
aaa00000-0000-0000-0000-123456789aaa
Type: SystemAction
group
CREATE_STORAGE_DOMAIN
with role type
ADMIN</span></p>
<p><span>2015-09-22
18:46:49,024 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32)
[1635a244] Removing
connection
'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e'
from database </span></p>
<p><span>2015-09-22
18:46:49,026 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32)
[1635a244] START,
DisconnectStorageServerVDSCommand(HostName
= sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore',
iqn='null',
vfsType='glusterfs',
mountOptions='null',
nfsVersion='null',
nfsRetrans='null',
nfsTimeo='null',
iface='null',
netIfaceName='null'}]'}),
log id:
39d3b568</span></p>
<p><span>2015-09-22
18:46:49,248 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32)
[1635a244] FINISH,
DisconnectStorageServerVDSCommand,
return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0},
log id:
39d3b568</span></p>
<p><span>2015-09-22
18:46:49,252 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32)
[1635a244] Lock
freed to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p><span>2015-09-22
18:46:49,431 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3)
[] START,
GlusterVolumesListVDSCommand(HostName
= sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
log id:
17014ae8</span></p>
<p><span>2015-09-22
18:46:49,511 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3)
[] Could not
associate brick
'sjcstorage01:/export/vmstore/brick01'
of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9'
with correct network
as no gluster
network found in
cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22
18:46:49,515 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3)
[] Could not
associate brick
'sjcstorage02:/export/vmstore/brick01'
of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9'
with correct network
as no gluster
network found in
cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22
18:46:49,516 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3)
[] Could not add
brick
'sjcvhost02:/export/vmstore/brick01'
to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9'
- server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba'
not found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22
18:46:49,516 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3)
[] FINISH,
GlusterVolumesListVDSCommand,
return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
log id:
17014ae8</span></p>
<p><span><br>
</span></p>
<p><span><br>
</span></p>
<p><span>ovirt engine
thinks that
sjcstorage01 is
sjcstorage01, its
all testbed at the
moment and is all
short names, defined
in /etc/hosts (all
copied to each
server for
consistancy)</span></p>
<p><span><br>
</span></p>
<p><span>volume info for
vmstore is</span></p>
<p><span><br>
</span></p>
<p><span>Status of
volume:
vmstore</span></p>
<p><span>Gluster process
TCP Port
RDMA Port Online
Pid</span></p>
<p><span>------------------------------------------------------------------------------</span></p>
<p><span>Brick
sjcstorage01:/export/vmstore/brick01
49157 0
Y 7444 </span></p>
<p><span>Brick
sjcstorage02:/export/vmstore/brick01
49157 0
Y 4063 </span></p>
<p><span>Brick
sjcvhost02:/export/vmstore/brick01
49156 0
Y
3243 </span></p>
<p><span>NFS Server on
localhost
2049
0 Y
3268 </span></p>
<p><span>Self-heal
Daemon on localhost
N/A
N/A Y
3284 </span></p>
<p><span>NFS Server on
sjcstorage01
2049
0 Y
7463 </span></p>
<p><span>Self-heal
Daemon on
sjcstorage01
N/A N/A
Y
7472 </span></p>
<p><span>NFS Server on
sjcstorage02
2049
0 Y
4082 </span></p>
<p><span>Self-heal
Daemon on
sjcstorage02
N/A N/A
Y
4090 </span></p>
<p><span> </span></p>
<p><span>Task Status of
Volume vmstore</span></p>
<p><span>------------------------------------------------------------------------------</span></p>
<p> </p>
<p><span>There are no
active volume
tasks</span></p>
<p><span><br>
</span></p>
<p><span><br>
</span></p>
<p><span>vdsm logs from
time the domain is
added</span></p>
<p><span><br>
</span></p>
<p>hread-789::DEBUG::2015-09-22
19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p>Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving
from state init ->
state preparing</p>
<p>Thread-790::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:07,797::logUtils::48::dispatcher::(wrapper)
Run and protect:
repoStats(options=None)</p>
<p>Thread-790::<a
moz-do-not-send="true"
class="moz-txt-link-freetext" href="INFO::2015-09-22"><a
class="moz-txt-link-freetext"
href="INFO::2015-09-22">INFO::2015-09-22</a></a>
19:12:07,797::logUtils::51::dispatcher::(wrapper)
Run and protect:
repoStats, Return
response: {}</p>
<p>Thread-790::DEBUG::2015-09-22
19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished:
{}</p>
<p>Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving
from state preparing
-> state finished</p>
<p>Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll
requests {} resources
{}</p>
<p>Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll
requests {}</p>
<p>Thread-790::DEBUG::2015-09-22
19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref
0 aborting False</p>
<p>Thread-790::DEBUG::2015-09-22
19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
Reactor thread::INFO::2015-09-22 19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52510
Reactor thread::DEBUG::2015-09-22 19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52510
Reactor thread::DEBUG::2015-09-22 19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52510)
BindingXMLRPC::INFO::2015-09-22 19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52510
Thread-791::INFO::2015-09-22 19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52510 started
Thread-791::INFO::2015-09-22 19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52510 stopped
Thread-792::DEBUG::2015-09-22 19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-793::DEBUG::2015-09-22 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state init -> state preparing
Thread-793::INFO::2015-09-22 19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-793::INFO::2015-09-22 19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-793::DEBUG::2015-09-22 19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}
Thread-793::DEBUG::2015-09-22 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state preparing -> state finished
Thread-793::DEBUG::2015-09-22 19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-793::DEBUG::2015-09-22 19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-793::DEBUG::2015-09-22 19:12:22,833::task::993::Storage.TaskManager.Task::(_decref) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting False
Thread-793::DEBUG::2015-09-22 19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52511
Reactor thread::DEBUG::2015-09-22 19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52511
Reactor thread::DEBUG::2015-09-22 19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52511)
BindingXMLRPC::INFO::2015-09-22 19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52511
Thread-794::INFO::2015-09-22 19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52511 started
Thread-794::INFO::2015-09-22 19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52511 stopped
Thread-795::DEBUG::2015-09-22 19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7}
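(For what it's worth: domainType 7 here should be VDSM's GlusterFS domain type, and 'connection' is just volfile-server:volume. A quick, purely optional sanity check that the volume is visible from the hypervisor before VDSM mounts it; the --remote-host form is an assumption on my part, needed only if the host is not itself a gluster peer:

  sudo gluster --remote-host=sjcstorage01 volume info vmstore

)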
Thread-795::DEBUG::2015-09-22 19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state init -> state preparing
Thread-795::INFO::2015-09-22 19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None)
Thread-795::DEBUG::2015-09-22 19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode: None
Thread-795::DEBUG::2015-09-22 19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02 sjcstorage01:/vmstore /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
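(The line above is the exact mount command VDSM runs, so the FUSE mount can be reproduced by hand to take VDSM out of the picture. A sketch, using a scratch mountpoint of my own choosing so it does not collide with VDSM's:

  sudo mkdir -p /mnt/vmstore-test
  sudo mount -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02 sjcstorage01:/vmstore /mnt/vmstore-test

If this mounts cleanly, the problem is more likely in the I/O path than in the mount itself.)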
Thread-795::DEBUG::2015-09-22 19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-796::DEBUG::2015-09-22 19:12:35,707::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-797::DEBUG::2015-09-22 19:12:35,712::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-795::DEBUG::2015-09-22 19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-795::DEBUG::2015-09-22 19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain, 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain, ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
Thread-795::INFO::2015-09-22 19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-795::DEBUG::2015-09-22 19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-795::DEBUG::2015-09-22 19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state preparing -> state finished
Thread-795::DEBUG::2015-09-22 19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-795::DEBUG::2015-09-22 19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-795::DEBUG::2015-09-22 19:12:35,722::task::993::Storage.TaskManager.Task::(_decref) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting False
Thread-795::DEBUG::2015-09-22 19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]
Thread-795::DEBUG::2015-09-22 19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-798::DEBUG::2015-09-22 19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7}
Thread-798::DEBUG::2015-09-22 19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state init -> state preparing
Thread-798::INFO::2015-09-22 19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None)
Thread-798::DEBUG::2015-09-22 19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-798::DEBUG::2015-09-22 19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-798::DEBUG::2015-09-22 19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain, 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain, ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
Thread-798::INFO::2015-09-22 19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-798::DEBUG::2015-09-22 19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-798::DEBUG::2015-09-22 19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state preparing -> state finished
Thread-798::DEBUG::2015-09-22 19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-798::DEBUG::2015-09-22 19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-798::DEBUG::2015-09-22 19:12:35,783::task::993::Storage.TaskManager.Task::(_decref) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting False
Thread-798::DEBUG::2015-09-22 19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
Thread-798::DEBUG::2015-09-22 19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-801::DEBUG::2015-09-22 19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StorageDomain.create' in bridge with {u'name': u'sjcvmstore01', u'domainType': 7, u'domainClass': 1, u'typeArgs': u'sjcstorage01:/vmstore', u'version': u'3', u'storagedomainID': u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
Thread-801::DEBUG::2015-09-22 19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state init -> state preparing
Thread-801::INFO::2015-09-22 19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=7, sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', domainName=u'sjcvmstore01', typeSpecificArg=u'sjcstorage01:/vmstore', domClass=1, domVersion=u'3', options=None)
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-801::DEBUG::2015-09-22 19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-801::DEBUG::2015-09-22 19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,821::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-801::DEBUG::2015-09-22 19:12:35,821::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-801::DEBUG::2015-09-22 19:12:35,821::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-801::DEBUG::2015-09-22 19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan
Thread-802::DEBUG::2015-09-22 19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-801::DEBUG::2015-09-22 19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished
Thread-801::DEBUG::2015-09-22 19:12:35,912::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-801::DEBUG::2015-09-22 19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-801::DEBUG::2015-09-22 19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,946::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-801::DEBUG::2015-09-22 19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-801::ERROR::2015-09-22 19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
Thread-801::ERROR::2015-09-22 19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
Thread-801::DEBUG::2015-09-22 19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n Cannot process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
Thread-801::WARNING::2015-09-22 19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', ' Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found', ' Cannot process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
Thread-801::DEBUG::2015-09-22 19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-801::ERROR::2015-09-22 19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain) domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 172, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
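(These ERROR lines look alarming, but as far as I can tell they are just VDSM probing its cache for an existing domain with the new UUID before creating one; the LVM lookup and the StorageDomainDoesNotExist miss are both expected for a brand-new gluster domain. Re-running the probe by hand should fail the same way:

  sudo lvm vgs --noheadings c02fda97-62e3-40d3-9a6e-ac5d100f8ad3

The real failure comes later, at metadata creation.)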
Thread-801::INFO::2015-09-22 19:12:35,998::nfsSD::69::Storage.StorageDomain::(create) sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 domainName=sjcvmstore01 remotePath=sjcstorage01:/vmstore domClass=1
Thread-801::DEBUG::2015-09-22 19:12:36,015::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-801::ERROR::2015-09-22 19:12:36,067::task::866::Storage.TaskManager.Task::(_setError) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2697, in createStorageDomain
    domVersion)
  File "/usr/share/vdsm/storage/nfsSD.py", line 84, in create
    remotePath, storageType, version)
  File "/usr/share/vdsm/storage/fileSD.py", line 264, in _prepareMetadata
    "create meta file '%s' failed: %s" % (metaFile, str(e)))
StorageDomainMetadataCreationError: Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)
Thread-801::DEBUG::2015-09-22 19:12:36,067::task::885::Storage.TaskManager.Task::(_run) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run: d2d29352-8677-45cb-a4ab-06aa32cf1acb (7, u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01', u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping task
Thread-801::DEBUG::2015-09-22 19:12:36,067::task::1246::Storage.TaskManager.Task::(stop) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in state preparing (force False)
Thread-801::DEBUG::2015-09-22 19:12:36,067::task::993::Storage.TaskManager.Task::(_decref) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting True
Thread-801::INFO::2015-09-22 19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task is aborted: "Error creating a storage domain's metadata" - code 362
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare: aborted: Error creating a storage domain's metadata
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::993::Storage.TaskManager.Task::(_decref) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting True
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort: force False
Thread-801::DEBUG::2015-09-22 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state preparing -> state aborting
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting: recover policy none
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state aborting -> state failed
Thread-801::DEBUG::2015-09-22 19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-801::DEBUG::2015-09-22 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-801::ERROR::2015-09-22 19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Error creating a storage domain\'s metadata: ("create meta file \'outbox\' failed: [Errno 5] Input/output error",)', 'code': 362}}
Thread-801::DEBUG::2015-09-22 19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-807::DEBUG::2015-09-22 19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams': [{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7}
Thread-807::DEBUG::2015-09-22 19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState) Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state init -> state preparing
Thread-807::INFO::2015-09-22 19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None)
Thread-807::DEBUG::2015-09-22 19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
Thread-807::DEBUG::2015-09-22 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-807::DEBUG::2015-09-22 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-807::DEBUG::2015-09-22 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-807::DEBUG::2015-09-22 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-807::DEBUG::2015-09-22 19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-807::DEBUG::2015-09-22 19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-807::DEBUG::2015-09-22 19:12:36,258::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-807::DEBUG::2015-09-22 19:12:36,258::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-807::DEBUG::2015-09-22 19:12:36,258::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-807::DEBUG::2015-09-22 19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan
Thread-807::DEBUG::2015-09-22 19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished
Thread-807::DEBUG::2015-09-22 19:12:36,350::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-807::DEBUG::2015-09-22 19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-807::DEBUG::2015-09-22 19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-807::DEBUG::2015-09-22 19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-807::DEBUG::2015-09-22 19:12:36,383::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-807::DEBUG::2015-09-22 19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,386::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-807::INFO::2015-09-22 19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and protect: disconnectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-807::DEBUG::2015-09-22 19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare) Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-807::DEBUG::2015-09-22 19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState) Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state preparing -> state finished
Thread-807::DEBUG::2015-09-22 19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-807::DEBUG::2015-09-22 19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-807::DEBUG::2015-09-22 19:12:36,387::task::993::Storage.TaskManager.Task::(_decref) Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting False
Thread-807::DEBUG::2015-09-22 19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.disconnectStorageServer' in bridge with [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
Thread-807::DEBUG::2015-09-22 19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-808::DEBUG::2015-09-22 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState) Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state init -> state preparing
Thread-808::INFO::2015-09-22 19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-808::INFO::2015-09-22 19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-808::DEBUG::2015-09-22 19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare) Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}
Thread-808::DEBUG::2015-09-22 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState) Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state preparing -> state finished
Thread-808::DEBUG::2015-09-22 19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-808::DEBUG::2015-09-22 19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-808::DEBUG::2015-09-22 19:12:37,868::task::993::Storage.TaskManager.Task::(_decref) Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting False
Thread-808::DEBUG::2015-09-22 19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52512
Reactor thread::DEBUG::2015-09-22 19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52512
Reactor thread::DEBUG::2015-09-22 19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52512)
BindingXMLRPC::INFO::2015-09-22 19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52512
Thread-809::INFO::2015-09-22 19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52512 started
Thread-809::INFO::2015-09-22 19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52512 stopped
Thread-810::DEBUG::2015-09-22 19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-811::DEBUG::2015-09-22 19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState) Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state init -> state preparing
Thread-811::INFO::2015-09-22 19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-811::INFO::2015-09-22 19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-811::DEBUG::2015-09-22 19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare) Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}
Thread-811::DEBUG::2015-09-22 19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState) Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state preparing -> state finished
Thread-811::DEBUG::2015-09-22 19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-811::DEBUG::2015-09-22 19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-811::DEBUG::2015-09-22 19:12:52,903::task::993::Storage.TaskManager.Task::(_decref) Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting False
Thread-811::DEBUG::2015-09-22 19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52513
Reactor thread::DEBUG::2015-09-22 19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52513
Reactor thread::DEBUG::2015-09-22 19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52513)
BindingXMLRPC::INFO::2015-09-22 19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52513
Thread-812::INFO::2015-09-22 19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52513 started
Thread-812::INFO::2015-09-22 19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52513 stopped
Thread-813::DEBUG::2015-09-22 19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-814::DEBUG::2015-09-22 19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState) Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state init -> state preparing
Thread-814::INFO::2015-09-22 19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-814::INFO::2015-09-22 19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-814::DEBUG::2015-09-22 19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare) Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}
Thread-814::DEBUG::2015-09-22 19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState) Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state preparing -> state finished
Thread-814::DEBUG::2015-09-22 19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-814::DEBUG::2015-09-22 19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-814::DEBUG::2015-09-22 19:13:07,935::task::993::Storage.TaskManager.Task::(_decref) Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting False
Thread-814::DEBUG::2015-09-22 19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52515
Reactor thread::DEBUG::2015-09-22 19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52515
Reactor thread::DEBUG::2015-09-22 19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52515)
BindingXMLRPC::INFO::2015-09-22 19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52515
Thread-815::INFO::2015-09-22 19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52515 started
Thread-815::INFO::2015-09-22 19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52515 stopped
Thread-816::DEBUG::2015-09-22 19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
gluster logs:
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host sjcstorage01
  5:     option remote-subvolume /export/vmstore/brick01
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:
 10: volume vmstore-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host sjcstorage02
 14:     option remote-subvolume /export/vmstore/brick01
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:
 19: volume vmstore-client-2
 20:     type protocol/client
 21:     option ping-timeout 42
 22:     option remote-host sjcvhost02
 23:     option remote-subvolume /export/vmstore/brick01
 24:     option transport-type socket
 25:     option send-gids true
 26: end-volume
 27:
 28: volume vmstore-replicate-0
 29:     type cluster/replicate
 30:     option arbiter-count 1
 31:     subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
 32: end-volume
 33:
 34: volume vmstore-dht
 35:     type cluster/distribute
 36:     subvolumes vmstore-replicate-0
 37: end-volume
 38:
 39: volume vmstore-write-behind
 40:     type performance/write-behind
 41:     subvolumes vmstore-dht
 42: end-volume
 43:
 44: volume vmstore-read-ahead
 45:     type performance/read-ahead
 46:     subvolumes vmstore-write-behind
 47: end-volume
 48:
 49: volume vmstore-readdir-ahead
 50:     type performance/readdir-ahead
 51:     subvolumes vmstore-read-ahead
 52: end-volume
 53:
 54: volume vmstore-io-cache
 55:     type performance/io-cache
 56:     subvolumes vmstore-readdir-ahead
 57: end-volume
 58:
 59: volume vmstore-quick-read
 60:     type performance/quick-read
 61:     subvolumes vmstore-io-cache
 62: end-volume
 63:
 64: volume vmstore-open-behind
 65:     type performance/open-behind
 66:     subvolumes vmstore-quick-read
 67: end-volume
 68:
 69: volume vmstore-md-cache
 70:     type performance/md-cache
 71:     subvolumes vmstore-open-behind
 72: end-volume
 73:
 74: volume vmstore
 75:     type debug/io-stats
 76:     option latency-measurement off
 77:     option count-fop-hits off
 78:     subvolumes vmstore-md-cache
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes vmstore
 84: end-volume
 85:
+------------------------------------------------------------------------------+
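(The graph confirms what the client sees: one replica set with arbiter-count 1, bricks on sjcstorage01 and sjcstorage02 plus the arbiter on sjcvhost02, all at /export/vmstore/brick01. The server-side view from any of the gluster nodes should match it:

  sudo gluster volume info vmstore
  sudo gluster volume status vmstore

)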
[2015-09-22 05:29:07.586205] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:29:07.586325] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:29:07.586480] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:29:07.595052] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:29:07.595397] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:29:07.595576] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:29:07.595721] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0: Connected to vmstore-client-0, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:29:07.595738] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.596044] I [MSGID: 108005] [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came back up; going online.
[2015-09-22 05:29:07.596170] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1: Connected to vmstore-client-1, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:29:07.596189] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.596495] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2: Connected to vmstore-client-2, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:29:07.596506] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.608758] I [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-09-22 05:29:07.608910] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0: Server lk version = 1
[2015-09-22 05:29:07.608936] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1: Server lk version = 1
[2015-09-22 05:29:07.608950] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2: Server lk version = 1
[2015-09-22 05:29:07.609695] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-09-22 05:29:07.609868] I [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2015-09-22 05:29:07.616577] I [MSGID: 109063] [dht-layout.c:702:dht_layout_normalize] 0-vmstore-dht: Found anomalies in / (gfid = 00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0
[2015-09-22 05:29:07.620230] I [MSGID: 109036] [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht: Setting layout of / with [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
[2015-09-22 05:29:08.122415] W [fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse: 26: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data available)
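(__DIRECT_IO_TEST__ is the probe file oVirt writes to check direct-I/O support on the mount; the REMOVEXATTR => -1 (No data available) warning just means an xattr it tried to remove was not set, which I believe is harmless on its own. If useful, the xattrs actually present on the probe file can be listed from the mount point -- the path is an assumption taken from the unmount lines further down:

  sudo getfattr -d -m . /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore/__DIRECT_IO_TEST__

)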
[2015-09-22 05:29:08.137359] I [MSGID: 109036] [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht: Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2 with [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
[2015-09-22 05:29:08.145835] I [MSGID: 109036] [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht: Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
[2015-09-22 05:30:57.897819] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs --volfile-server=sjcvhost02 --volfile-server=sjcstorage01 --volfile-server=sjcstorage02 --volfile-id=/vmstore /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:30:57.909889] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-22 05:30:57.923087] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0: parent translators are ready, attempting connect on transport
[2015-09-22 05:30:57.925701] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1: parent translators are ready, attempting connect on transport
[2015-09-22 05:30:57.927984] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2: parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host sjcstorage01
  5:     option remote-subvolume /export/vmstore/brick01
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:
 10: volume vmstore-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host sjcstorage02
 14:     option remote-subvolume /export/vmstore/brick01
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:
 19: volume vmstore-client-2
 20:     type protocol/client
 21:     option ping-timeout 42
 22:     option remote-host sjcvhost02
 23:     option remote-subvolume /export/vmstore/brick01
 24:     option transport-type socket
 25:     option send-gids true
 26: end-volume
 27:
 28: volume vmstore-replicate-0
 29:     type cluster/replicate
 30:     option arbiter-count 1
 31:     subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
 32: end-volume
 33:
 34: volume vmstore-dht
 35:     type cluster/distribute
 36:     subvolumes vmstore-replicate-0
 37: end-volume
 38:
 39: volume vmstore-write-behind
 40:     type performance/write-behind
 41:     subvolumes vmstore-dht
 42: end-volume
 43:
 44: volume vmstore-read-ahead
 45:     type performance/read-ahead
 46:     subvolumes vmstore-write-behind
 47: end-volume
 48:
 49: volume vmstore-readdir-ahead
 50:     type performance/readdir-ahead
 51:     subvolumes vmstore-read-ahead
 52: end-volume
 53:
 54: volume vmstore-io-cache
 55:     type performance/io-cache
 56:     subvolumes vmstore-readdir-ahead
 57: end-volume
 58:
 59: volume vmstore-quick-read
 60:     type performance/quick-read
 61:     subvolumes vmstore-io-cache
 62: end-volume
 63:
 64: volume vmstore-open-behind
 65:     type performance/open-behind
 66:     subvolumes vmstore-quick-read
 67: end-volume
 68:
 69: volume vmstore-md-cache
 70:     type performance/md-cache
 71:     subvolumes vmstore-open-behind
 72: end-volume
 73:
 74: volume vmstore
 75:     type debug/io-stats
 76:     option latency-measurement off
 77:     option count-fop-hits off
 78:     subvolumes vmstore-md-cache
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes vmstore
 84: end-volume
 85:
+------------------------------------------------------------------------------+
<p><span>[2015-09-22
05:30:57.934021] I
[rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-0:
changing port to
49153 (from 0)</span></p>
<p><span>[2015-09-22
05:30:57.934145] I
[rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-1:
changing port to
49153 (from 0)</span></p>
<p><span>[2015-09-22
05:30:57.934491] I
[rpc-clnt.c:1851:rpc_clnt_reconfig]
0-vmstore-client-2:
changing port to
49153 (from 0)</span></p>
<p><span>[2015-09-22
05:30:57.942198] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0:
Using Program
GlusterFS 3.3, Num
(1298437), Version
(330)</span></p>
<p><span>[2015-09-22
05:30:57.942545] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1:
Using Program
GlusterFS 3.3, Num
(1298437), Version
(330)</span></p>
<p><span>[2015-09-22
05:30:57.942659] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2:
Using Program
GlusterFS 3.3, Num
(1298437), Version
(330)</span></p>
<p><span>[2015-09-22
05:30:57.942797] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0:
Connected to
vmstore-client-0,
attached to remote
volume
'/export/vmstore/brick01'.</span></p>
<p><span>[2015-09-22
05:30:57.942808] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0:
Server and Client
lk-version numbers
are not same,
reopening the
fds</span></p>
<p><span>[2015-09-22
05:30:57.943036] I
[MSGID: 108005]
[afr-common.c:3998:afr_notify]
0-vmstore-replicate-0:
Subvolume
'vmstore-client-0'
came back up; going
online.</span></p>
<p><span>[2015-09-22
05:30:57.943078] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1:
Connected to
vmstore-client-1,
attached to remote
volume
'/export/vmstore/brick01'.</span></p>
<p><span>[2015-09-22
05:30:57.943086] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1:
Server and Client
lk-version numbers
are not same,
reopening the
fds</span></p>
<p><span>[2015-09-22
05:30:57.943292] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2:
Connected to
vmstore-client-2,
attached to remote
volume
'/export/vmstore/brick01'.</span></p>
<p><span>[2015-09-22
05:30:57.943302] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2:
Server and Client
lk-version numbers
are not same,
reopening the
fds</span></p>
<p><span>[2015-09-22
05:30:57.953887] I
[fuse-bridge.c:5053:fuse_graph_setup]
0-fuse: switched to
graph 0</span></p>
<p><span>[2015-09-22
05:30:57.954071] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0:
Server lk version =
1</span></p>
<p><span>[2015-09-22
05:30:57.954105] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1:
Server lk version =
1</span></p>
<p><span>[2015-09-22
05:30:57.954124] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2:
Server lk version =
1</span></p>
<p><span>[2015-09-22
05:30:57.955282] I
[fuse-bridge.c:3979:fuse_init]
0-glusterfs-fuse:
FUSE inited with
protocol versions:
glusterfs 7.22
kernel 7.22</span></p>
<p><span>[2015-09-22
05:30:57.955738] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker]
0-epoll: Started
thread with index
2</span></p>
<p><span>[2015-09-22
05:30:57.970232] I
[fuse-bridge.c:4900:fuse_thread_proc]
0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p><span>[2015-09-22
05:30:57.970834] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5)
[0x7f187139fdf5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f1872a09785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f1872a09609] )
0-: received signum
(15), shutting
down</span></p>
<p><span>[2015-09-22
05:30:57.970848] I
[fuse-bridge.c:5595:fini]
0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p><span>[2015-09-22
05:30:58.420973] I
[fuse-bridge.c:4900:fuse_thread_proc]
0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p><span>[2015-09-22
05:30:58.421355] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5)
[0x7f8267cd4df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f826933e785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f826933e609] )
0-: received signum
(15), shutting
down</span></p>
<p><span>[2015-09-22
05:30:58.421369] I
[fuse-bridge.c:5595:fini]
0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
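The cycle above - the clients connect, the process switches to graph 0, and the mount is torn down on SIGTERM roughly a second later - repeats on each attempt below. Judging by the args recorded in the next log entry, the mount being performed is equivalent to something like the following (a sketch only; backup-volfile-servers is the mount.glusterfs counterpart of the repeated --volfile-server flags, and the mount point is taken from the log):

    mount -t glusterfs -o backup-volfile-servers=sjcstorage01:sjcstorage02 \
        sjcvhost02:/vmstore /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore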
[2015-09-22 05:31:09.534410] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs --volfile-server=sjcvhost02 --volfile-server=sjcstorage01 --volfile-server=sjcstorage02 --volfile-id=/vmstore /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:31:09.545686] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-22 05:31:09.553019] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:09.555552] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:09.557989] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2: parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host sjcstorage01
  5:     option remote-subvolume /export/vmstore/brick01
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:
 10: volume vmstore-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host sjcstorage02
 14:     option remote-subvolume /export/vmstore/brick01
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:
 19: volume vmstore-client-2
 20:     type protocol/client
 21:     option ping-timeout 42
 22:     option remote-host sjcvhost02
 23:     option remote-subvolume /export/vmstore/brick01
 24:     option transport-type socket
 25:     option send-gids true
 26: end-volume
 27:
 28: volume vmstore-replicate-0
 29:     type cluster/replicate
 30:     option arbiter-count 1
 31:     subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
 32: end-volume
 33:
 34: volume vmstore-dht
 35:     type cluster/distribute
 36:     subvolumes vmstore-replicate-0
 37: end-volume
 38:
 39: volume vmstore-write-behind
 40:     type performance/write-behind
 41:     subvolumes vmstore-dht
 42: end-volume
 43:
 44: volume vmstore-read-ahead
 45:     type performance/read-ahead
 46:     subvolumes vmstore-write-behind
 47: end-volume
 48:
 49: volume vmstore-readdir-ahead
 50:     type performance/readdir-ahead
 51:     subvolumes vmstore-read-ahead
 52: end-volume
 53:
 54: volume vmstore-io-cache
 55:     type performance/io-cache
 56:     subvolumes vmstore-readdir-ahead
 57: end-volume
 58:
 59: volume vmstore-quick-read
 60:     type performance/quick-read
 61:     subvolumes vmstore-io-cache
 62: end-volume
 63:
 64: volume vmstore-open-behind
 65:     type performance/open-behind
 66:     subvolumes vmstore-quick-read
 67: end-volume
 68:
 69: volume vmstore-md-cache
 70:     type performance/md-cache
 71:     subvolumes vmstore-open-behind
 72: end-volume
 73:
 74: volume vmstore
 75:     type debug/io-stats
 76:     option latency-measurement off
 77:     option count-fop-hits off
 78:     subvolumes vmstore-md-cache
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes vmstore
 84: end-volume
 85:
+------------------------------------------------------------------------------+
[2015-09-22 05:31:09.563262] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:31:09.563431] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:31:09.563877] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:31:09.572443] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:31:09.572599] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:31:09.572742] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:31:09.573165] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1: Connected to vmstore-client-1, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:31:09.573186] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.573395] I [MSGID: 108005] [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume 'vmstore-client-1' came back up; going online.
[2015-09-22 05:31:09.573427] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0: Connected to vmstore-client-0, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:31:09.573435] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.573754] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2: Connected to vmstore-client-2, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:31:09.573783] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.577192] I [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-09-22 05:31:09.577302] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1: Server lk version = 1
[2015-09-22 05:31:09.577325] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0: Server lk version = 1
[2015-09-22 05:31:09.577339] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2: Server lk version = 1
[2015-09-22 05:31:09.578125] I [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2015-09-22 05:31:09.578636] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-09-22 05:31:10.073698] I [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:31:10.073977] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f6b9d0f2785] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f6b9d0f2609] ) 0-: received signum (15), shutting down
[2015-09-22 05:31:10.073993] I [fuse-bridge.c:5595:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:31:20.184700] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs --volfile-server=sjcvhost02 --volfile-server=sjcstorage01 --volfile-server=sjcstorage02 --volfile-id=/vmstore /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:31:20.194928] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-22 05:31:20.200701] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:20.203110] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:20.205708] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2: parent translators are ready, attempting connect on transport

Final graph:

Hope this helps.

thanks again

Brett Stevens
On Tue, Sep 22, 2015 at 10:14 PM, Sahina Bose <sabose@redhat.com> wrote:

> On 09/22/2015 02:17 PM, Brett Stevens wrote:
>> Hi. First time on the lists. I've searched for this but no luck, so
>> sorry if this has been covered before.
>>
>> I'm working with the latest 3.6 beta with the following infrastructure:
>>
>> 1 management host (to be used for a number of tasks, so we chose not
>> to use self-hosted; we are a school and will need to keep an eye on
>> hardware costs)
>> 2 compute nodes
>> 2 gluster nodes
>>
>> So far I have built one gluster volume using the gluster CLI, giving
>> me 2 nodes and one arbiter node (the management host).
>>
>> So far, every time I create a volume it shows up straight away in the
>> oVirt GUI. However, no matter what I try, I cannot create or import it
>> as a data domain.
>>
>> The current error in the oVirt GUI is "Error while executing action
>> AddGlusterFsStorageDomain: Error creating a storage domain's metadata".
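For reference, a two-brick-plus-arbiter volume like the one described above would typically be created along these lines. This is a sketch only - the brick paths match those appearing in the logs elsewhere in this thread, but the exact command used is not shown:

    gluster volume create vmstore replica 3 arbiter 1 \
        sjcstorage01:/export/vmstore/brick01 \
        sjcstorage02:/export/vmstore/brick01 \
        sjcvhost02:/export/vmstore/brick01
    gluster volume start vmstore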
>
> Please provide vdsm and gluster logs.
>> logs, continuously rolling the following errors around:
>>
>> Scheduler_Worker-53) [] START, GlusterVolumesListVDSCommand(HostName = sjcstorage02, GlusterVolumesListVDSParameters:{runAsync='true', hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf
>>
>> 2015-09-22 03:57:29,903 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-53) [] Could not associate brick 'sjcstorage01:/export/vmstore/brick01' of volume '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> What is the hostname provided in oVirt engine for sjcstorage01? Does
> this host have multiple NICs?
>
> Could you provide the output of gluster volume info?
>
> Please note that these errors are not related to the error in creating
> the storage domain. However, these errors could prevent you from
> monitoring the state of the gluster volume from oVirt.
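To gather that output, run the standard gluster CLI on any of the gluster nodes, e.g.:

    gluster volume info vmstore
    gluster peer status

gluster peer status also lists the hostnames each peer is known by, which bears on the multiple-NICs question above.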
>> 2015-09-22 03:57:29,905 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-53) [] Could not associate brick 'sjcstorage02:/export/vmstore/brick01' of volume '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-53) [] Could not add brick 'sjcvhost02:/export/vmstore/brick01' to volume '878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-53) [] FINISH, GlusterVolumesListVDSCommand, return: {878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1}, log id: 24198fbf
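The "server uuid ... not found" warning above can be cross-checked against the UUIDs gluster itself reports. A sketch, run on the gluster nodes: gluster peer status lists the UUIDs of the other peers, while the local node's own UUID sits in glusterd's info file:

    gluster peer status
    cat /var/lib/glusterd/glusterd.info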
>>
>> I'm new to ovirt and gluster, so any help would be great.
>>
>> thanks
>>
>> Brett Stevens
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/user...
--------------070407060509090702050906--