Cannot mount gluster storage data
by Jean-Michel FRANCOIS
Hi oVirt users,
I'm running oVirt hosted 3.4 with gluster data storage.
When I add a new host (CentOS 6.6), the data storage (a glusterfs domain)
cannot be mounted.
I have the following errors in the gluster client log file:
[2015-09-24 12:27:22.636221] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2015-09-24 12:27:22.636588] W [socket.c:588:__socket_rwv] 0-glusterfs:
readv on 172.16.0.5:24007 failed (No data available)
[2015-09-24 12:27:22.637307] E [rpc-clnt.c:362:saved_frames_unwind] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1eb)[0x7f427fb3063b]
(--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f427f8fc1d7]
(--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f427f8fc2ee]
(-->
/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f427f8fc3bb]
(--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1c2)[0x7f427f8fc9f2]
))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake)
op(GETSPEC(2)) called at 2015-09-24 12:27:22.636344 (xid=0x1)
[2015-09-24 12:27:22.637333] E [glusterfsd-mgmt.c:1604:mgmt_getspec_cbk]
0-mgmt: failed to fetch volume file (key:/data)
[2015-09-24 12:27:22.637360] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x20e) [0x7f427f8fc1fe]
-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3f2) [0x40d5d2]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (0), shutting down
[2015-09-24 12:27:22.637375] I [fuse-bridge.c:5595:fini] 0-fuse:
Unmounting '/rhev/data-center/mnt/glusterSD/172.16.0.5:_data'.
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (15), shutting down
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received
signum (15), shutting down
And nothing on the server side.
I suppose it is a version issue, since on the server side I have:
glusterfs-api-3.6.3-1.el6.x86_64
glusterfs-fuse-3.6.3-1.el6.x86_64
glusterfs-libs-3.6.3-1.el6.x86_64
glusterfs-3.6.3-1.el6.x86_64
glusterfs-cli-3.6.3-1.el6.x86_64
glusterfs-rdma-3.6.3-1.el6.x86_64
glusterfs-server-3.6.3-1.el6.x86_64
and on the new host:
glusterfs-3.7.4-2.el6.x86_64
glusterfs-api-3.7.4-2.el6.x86_64
glusterfs-libs-3.7.4-2.el6.x86_64
glusterfs-fuse-3.7.4-2.el6.x86_64
glusterfs-cli-3.7.4-2.el6.x86_64
glusterfs-server-3.7.4-2.el6.x86_64
glusterfs-client-xlators-3.7.4-2.el6.x86_64
glusterfs-rdma-3.7.4-2.el6.x86_64
But since it is a production system, I'm not confident about performing a
gluster server upgrade; a rough way to check the mismatch from the client
side is sketched below.
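For reference, a minimal way to compare the two sides and to reproduce the
failure by hand (the mount point below is illustrative, not from my setup):

# On the new host and on one gluster server: list the installed gluster packages
rpm -qa 'glusterfs*'

# On a gluster server: check the op-version glusterd is running with
# (glusterd.info normally carries an operating-version line)
grep operating-version /var/lib/glusterd/glusterd.info

# On the new host: try a manual FUSE mount of the data volume outside of oVirt;
# if the client/server version mismatch is the cause, this should fail with the
# same GETSPEC / "failed to fetch volume file" error seen in the log above
mkdir -p /mnt/gluster-test
mount -t glusterfs 172.16.0.5:/data /mnt/gluster-test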
Mounting a gluster volume as NFS is possible (the engine data storage
has been mounted successfully).
I'm asking here because glusterfs comes from the oVirt 3.4 RPM repository.
If anyone has a hint for this problem,
thanks
Jean-Michel
Re: [ovirt-users] Fwd: Re: adding gluster domains
by Ravishankar N
Hi Brett,
Can you truncate the gluster brick and mount logs on all three nodes,
try creating the storage domain again and then share these logs along
with the VDSM logs?
i.e. on all 3 nodes,
1. echo >
/var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
2. echo > export-vmstore-brick01.log (the brick log; see the consolidated commands below)
3. Create the storage domain (at which point VDSM supposedly fails with
the truncate error)
4. Share the logs.
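(For steps 1 and 2, a consolidated form on each node would be something like
the following; the brick log usually lives under /var/log/glusterfs/bricks/,
adjust the path if your layout differs.)

echo > /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
echo > /var/log/glusterfs/bricks/export-vmstore-brick01.log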
Also, what timezone are you in? That would be needed to correlate the
timestamps in the vdsm log (local time) and the gluster log (UTC).
Thanks!
Ravi
>
> -------- Forwarded Message --------
> Subject: Re: [ovirt-users] adding gluster domains
> Date: Tue, 29 Sep 2015 08:38:49 +1000
> From: Brett Stevens <gorttman(a)i3sec.com.au>
> Reply-To: brett(a)i3sec.com.au
> To: Sahina Bose <sabose(a)redhat.com>
>
>
>
> Sorry about the delay; I've run the truncate. I'm not sure what
> results you were expecting, but it executed fine: no delays, no errors,
> no problems.
>
> thanks
> Brett Stevens
>
> On Thu, Sep 24, 2015 at 7:29 PM, Brett Stevens <gorttman(a)i3sec.com.au> wrote:
>
> Thanks I'll do that tomorrow morning.
>
> Just out of interest, I keep getting WARN errors in the engine.log
> along the lines of node not present (sjcvhost02, which is the
> arbiter) and no gluster network present, even after I have added
> the gluster network option in the network management GUI.
>
> thanks
>
> Brett Stevens
>
>
> On Thu, Sep 24, 2015 at 7:26 PM, Sahina Bose <sabose(a)redhat.com>
> wrote:
>
> Sorry, I intended to forward it to a gluster devel.
>
> Btw, there were no errors in the mount log - so we are unable to root-cause
> why the truncate of the file failed with an IO error. Was the log
> from vhost03 -
> /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
> ?
>
> We will look into the logs you attached to see if there are
> any errors reported at the bricks. (But there should have been
> some error in the mount log!)
>
> Could you also try "truncate -s 10M test" from the mount point
> (manually mount gluster using: mount -t glusterfs
> sjcstorage01:/vmstore <mountpoint>) and report the results.
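> A minimal sequence for that test could be (the mount point is just an
> example):
>
> mkdir -p /mnt/vmstore-test
> mount -t glusterfs sjcstorage01:/vmstore /mnt/vmstore-test
> truncate -s 10M /mnt/vmstore-test/test; echo "truncate exit status: $?"
> umount /mnt/vmstore-test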
>
> On 09/24/2015 02:32 PM, Brett Stevens wrote:
>> Hi Sahina.
>>
>> Something has gone wrong with your last email. I have
>> received a message from you, but did not get any text to go
>> with it. Could you resend it, please?
>>
>> thanks
>>
>>
>> On Thu, Sep 24, 2015 at 6:48 PM, Sahina Bose
>> <sabose(a)redhat.com> wrote:
>>
>>
>>
>> On 09/24/2015 04:21 AM, Brett Stevens wrote:
>>> Hi Sahina.
>>>
>>> vhost02 is the engine node, vhost03 is the hypervisor, and
>>> storage01 and 02 are the gluster nodes. I've put the arbiter on
>>> vhost02.
>>>
>>> all tasks are separated (except engine and arbiter)
>>>
>>> thanks
>>>
>>>
>>> On Wed, Sep 23, 2015 at 9:48 PM, Sahina Bose
>>> <sabose(a)redhat.com> wrote:
>>>
>>> + ovirt-users
>>>
>>> Some clarity on your setup -
>>> sjcvhost03 - is this your arbiter node and oVirt
>>> management node? And are you running compute +
>>> storage on the same nodes, i.e., sjcstorage01,
>>> sjcstorage02, sjcvhost03 (arbiter)?
>>>
>>>
>>> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
>>> CreateStorageDomainVDSCommandParameters:{runAsync='true',
>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>> storageDomain='StorageDomainStatic:{name='sjcvmstore',
>>> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
>>> args='sjcstorage01:/vmstore'}), log id: b9fe587
>>>
>>> - fails with Error creating a storage domain's
>>> metadata: ("create meta file 'outbox' failed: [Errno
>>> 5] Input/output error",
>>>
>>> Are the vdsm logs you provided from sjcvhost03?
>>> There are no errors to be seen in the gluster log
>>> you provided. Could you provide the mount log from
>>> sjcvhost03 (most likely at
>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log)?
>>> If possible, also /var/log/glusterfs/* from the 3 storage
>>> nodes.
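>>> A quick way to collect those on each node (the archive name is just an
>>> example):
>>>
>>> tar czf /tmp/glusterfs-logs-$(hostname).tar.gz /var/log/glusterfs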
>>>
>>> thanks
>>> sahina
>>>
>>> On 09/23/2015 05:02 AM, Brett Stevens wrote:
>>>> Hi Sahina,
>>>>
>>>> As requested, here are some logs taken during a
>>>> domain create.
>>>>
>>>> 2015-09-22 18:46:44,320 INFO
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (DefaultQuartzScheduler_Worker-88) [] START,
>>>> GlusterVolumesListVDSCommand(HostName =
>>>> sjcstorage01,
>>>> GlusterVolumesListVDSParameters:{runAsync='true',
>>>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
>>>> log id: 2205ff1
>>>>
>>>> 2015-09-22 18:46:44,413 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-88) [] Could not
>>>> associate brick
>>>> 'sjcstorage01:/export/vmstore/brick01' of volume
>>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>>> network as no gluster network found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:44,417 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-88) [] Could not
>>>> associate brick
>>>> 'sjcstorage02:/export/vmstore/brick01' of volume
>>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>>> network as no gluster network found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:44,417 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-88) [] Could not add
>>>> brick 'sjcvhost02:/export/vmstore/brick01' to
>>>> volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' -
>>>> server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba'
>>>> not found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:44,418 INFO
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (DefaultQuartzScheduler_Worker-88) [] FINISH,
>>>> GlusterVolumesListVDSCommand, return:
>>>> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
>>>> log id: 2205ff1
>>>>
>>>> 2015-09-22 18:46:45,215 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>>>> (default task-24) [5099cda3] Lock Acquired to
>>>> object
>>>> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>>> sharedLocks='null'}'
>>>>
>>>> 2015-09-22 18:46:45,230 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>>>> (default task-24) [5099cda3] Running command:
>>>> AddStorageServerConnectionCommand internal: false.
>>>> Entities affected : ID:
>>>> aaa00000-0000-0000-0000-123456789aaa Type:
>>>> SystemAction group CREATE_STORAGE_DOMAIN with role
>>>> type ADMIN
>>>>
>>>> 2015-09-22 18:46:45,233 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>> (default task-24) [5099cda3] START,
>>>> ConnectStorageServerVDSCommand(HostName =
>>>> sjcvhost03,
>>>> StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storagePoolId='00000000-0000-0000-0000-000000000000',
>>>> storageType='GLUSTERFS',
>>>> connectionList='[StorageServerConnections:{id='null',
>>>> connection='sjcstorage01:/vmstore', iqn='null',
>>>> vfsType='glusterfs', mountOptions='null',
>>>> nfsVersion='null', nfsRetrans='null',
>>>> nfsTimeo='null', iface='null',
>>>> netIfaceName='null'}]'}), log id: 6a112292
>>>>
>>>> 2015-09-22 18:46:48,065 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>> (default task-24) [5099cda3] FINISH,
>>>> ConnectStorageServerVDSCommand, return:
>>>> {00000000-0000-0000-0000-000000000000=0}, log id:
>>>> 6a112292
>>>>
>>>> 2015-09-22 18:46:48,073 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>>>> (default task-24) [5099cda3] Lock freed to object
>>>> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>>> sharedLocks='null'}'
>>>>
>>>> 2015-09-22 18:46:48,188 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Running command:
>>>> AddGlusterFsStorageDomainCommand internal: false.
>>>> Entities affected : ID:
>>>> aaa00000-0000-0000-0000-123456789aaa Type:
>>>> SystemAction group CREATE_STORAGE_DOMAIN with role
>>>> type ADMIN
>>>>
>>>> 2015-09-22 18:46:48,206 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>> (default task-23) [6410419] START,
>>>> ConnectStorageServerVDSCommand(HostName =
>>>> sjcvhost03,
>>>> StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storagePoolId='00000000-0000-0000-0000-000000000000',
>>>> storageType='GLUSTERFS',
>>>> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
>>>> connection='sjcstorage01:/vmstore', iqn='null',
>>>> vfsType='glusterfs', mountOptions='null',
>>>> nfsVersion='null', nfsRetrans='null',
>>>> nfsTimeo='null', iface='null',
>>>> netIfaceName='null'}]'}), log id: 38a2b0d
>>>>
>>>> 2015-09-22 18:46:48,219 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>> (default task-23) [6410419] FINISH,
>>>> ConnectStorageServerVDSCommand, return:
>>>> {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id:
>>>> 38a2b0d
>>>>
>>>> 2015-09-22 18:46:48,221 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] START,
>>>> CreateStorageDomainVDSCommand(HostName =
>>>> sjcvhost03,
>>>> CreateStorageDomainVDSCommandParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storageDomain='StorageDomainStatic:{name='sjcvmstore',
>>>> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
>>>> args='sjcstorage01:/vmstore'}), log id: b9fe587
>>>>
>>>> 2015-09-22 18:46:48,744 ERROR
>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>> (default task-23) [6410419] Correlation ID: null,
>>>> Call Stack: null, Custom Event ID: -1, Message:
>>>> VDSM sjcvhost03 command failed: Error creating a
>>>> storage domain's metadata: ("create meta file
>>>> 'outbox' failed: [Errno 5] Input/output error",)
>>>>
>>>> 2015-09-22 18:46:48,744 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] Command
>>>> 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
>>>> return value 'StatusOnlyReturnForXmlRpc
>>>> [status=StatusForXmlRpc [code=362, message=Error
>>>> creating a storage domain's metadata: ("create meta
>>>> file 'outbox' failed: [Errno 5] Input/output
>>>> error",)]]'
>>>>
>>>> 2015-09-22 18:46:48,744 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] HostName = sjcvhost03
>>>>
>>>> 2015-09-22 18:46:48,745 ERROR
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] Command
>>>> 'CreateStorageDomainVDSCommand(HostName =
>>>> sjcvhost03,
>>>> CreateStorageDomainVDSCommandParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storageDomain='StorageDomainStatic:{name='sjcvmstore',
>>>> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
>>>> args='sjcstorage01:/vmstore'})' execution failed:
>>>> VDSGenericException: VDSErrorException: Failed in
>>>> vdscommand to CreateStorageDomainVDS, error = Error
>>>> creating a storage domain's metadata: ("create meta
>>>> file 'outbox' failed: [Errno 5] Input/output error",)
>>>>
>>>> 2015-09-22 18:46:48,745 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>>>> (default task-23) [6410419] FINISH,
>>>> CreateStorageDomainVDSCommand, log id: b9fe587
>>>>
>>>> 2015-09-22 18:46:48,745 ERROR
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Command
>>>> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
>>>> failed: EngineException:
>>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
>>>> VDSGenericException: VDSErrorException: Failed in
>>>> vdscommand to CreateStorageDomainVDS, error = Error
>>>> creating a storage domain's metadata: ("create meta
>>>> file 'outbox' failed: [Errno 5] Input/output
>>>> error",) (Failed with error
>>>> StorageDomainMetadataCreationError and code 362)
>>>>
>>>> 2015-09-22 18:46:48,755 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Command
>>>> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]:
>>>> Compensating NEW_ENTITY_ID of
>>>> org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
>>>> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>>>>
>>>> 2015-09-22 18:46:48,758 INFO
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Command
>>>> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]:
>>>> Compensating NEW_ENTITY_ID of
>>>> org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
>>>> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>>>>
>>>> 2015-09-22 18:46:48,769 ERROR
>>>> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
>>>> (default task-23) [6410419] Transaction rolled-back
>>>> for command
>>>> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.
>>>>
>>>> 2015-09-22 18:46:48,784 ERROR
>>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>>> (default task-23) [6410419] Correlation ID:
>>>> 6410419, Job ID:
>>>> 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack:
>>>> null, Custom Event ID: -1, Message: Failed to add
>>>> Storage Domain sjcvmstore. (User: admin@internal)
>>>>
>>>> 2015-09-22 18:46:48,996 INFO
>>>> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>>> (default task-32) [1635a244] Lock Acquired to
>>>> object
>>>> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
>>>> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>>> sharedLocks='null'}'
>>>>
>>>> 2015-09-22 18:46:49,018 INFO
>>>> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>>> (default task-32) [1635a244] Running command:
>>>> RemoveStorageServerConnectionCommand internal:
>>>> false. Entities affected : ID:
>>>> aaa00000-0000-0000-0000-123456789aaa Type:
>>>> SystemAction group CREATE_STORAGE_DOMAIN with role
>>>> type ADMIN
>>>>
>>>> 2015-09-22 18:46:49,024 INFO
>>>> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>>> (default task-32) [1635a244] Removing connection
>>>> 'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database
>>>>
>>>> 2015-09-22 18:46:49,026 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
>>>> (default task-32) [1635a244] START,
>>>> DisconnectStorageServerVDSCommand(HostName =
>>>> sjcvhost03,
>>>> StorageServerConnectionManagementVDSParameters:{runAsync='true',
>>>> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
>>>> storagePoolId='00000000-0000-0000-0000-000000000000',
>>>> storageType='GLUSTERFS',
>>>> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
>>>> connection='sjcstorage01:/vmstore', iqn='null',
>>>> vfsType='glusterfs', mountOptions='null',
>>>> nfsVersion='null', nfsRetrans='null',
>>>> nfsTimeo='null', iface='null',
>>>> netIfaceName='null'}]'}), log id: 39d3b568
>>>>
>>>> 2015-09-22 18:46:49,248 INFO
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
>>>> (default task-32) [1635a244] FINISH,
>>>> DisconnectStorageServerVDSCommand, return:
>>>> {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id:
>>>> 39d3b568
>>>>
>>>> 2015-09-22 18:46:49,252 INFO
>>>> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
>>>> (default task-32) [1635a244] Lock freed to object
>>>> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
>>>> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
>>>> ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
>>>> sharedLocks='null'}'
>>>>
>>>> 2015-09-22 18:46:49,431 INFO
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (DefaultQuartzScheduler_Worker-3) [] START,
>>>> GlusterVolumesListVDSCommand(HostName =
>>>> sjcstorage01,
>>>> GlusterVolumesListVDSParameters:{runAsync='true',
>>>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
>>>> log id: 17014ae8
>>>>
>>>> 2015-09-22 18:46:49,511 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-3) [] Could not
>>>> associate brick
>>>> 'sjcstorage01:/export/vmstore/brick01' of volume
>>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>>> network as no gluster network found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:49,515 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-3) [] Could not
>>>> associate brick
>>>> 'sjcstorage02:/export/vmstore/brick01' of volume
>>>> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
>>>> network as no gluster network found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:49,516 WARN
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>> (DefaultQuartzScheduler_Worker-3) [] Could not add
>>>> brick 'sjcvhost02:/export/vmstore/brick01' to
>>>> volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' -
>>>> server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba'
>>>> not found in cluster
>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>
>>>> 2015-09-22 18:46:49,516 INFO
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (DefaultQuartzScheduler_Worker-3) [] FINISH,
>>>> GlusterVolumesListVDSCommand, return:
>>>> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
>>>> log id: 17014ae8
>>>>
>>>>
>>>>
>>>> ovirt engine thinks that sjcstorage01 is
>>>> sjcstorage01; it's all a testbed at the moment and
>>>> uses all short names, defined in /etc/hosts (all copied
>>>> to each server for consistency).
>>>>
>>>>
>>>> volume info for vmstore is
>>>>
>>>>
>>>> Status of volume: vmstore
>>>>
>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick sjcstorage01:/export/vmstore/brick01  49157     0          Y       7444
>>>> Brick sjcstorage02:/export/vmstore/brick01  49157     0          Y       4063
>>>> Brick sjcvhost02:/export/vmstore/brick01    49156     0          Y       3243
>>>> NFS Server on localhost                     2049      0          Y       3268
>>>> Self-heal Daemon on localhost               N/A       N/A        Y       3284
>>>> NFS Server on sjcstorage01                  2049      0          Y       7463
>>>> Self-heal Daemon on sjcstorage01            N/A       N/A        Y       7472
>>>> NFS Server on sjcstorage02                  2049      0          Y       4082
>>>> Self-heal Daemon on sjcstorage02            N/A       N/A        Y       4090
>>>>
>>>> Task Status of Volume vmstore
>>>> ------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>>
>>>>
>>>>
>>>> vdsm logs from the time the domain was added:
>>>>
>>>>
>>>> Thread-789::DEBUG::2015-09-22
>>>> 19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-790::INFO::2015-09-22
>>>> 19:12:07,797::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-790::INFO::2015-09-22
>>>> 19:12:07,797::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished:
>>>> {}
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0
>>>> aborting False
>>>>
>>>> Thread-790::DEBUG::2015-09-22
>>>> 19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52510
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52510
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52510)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52510
>>>>
>>>> Thread-791::INFO::2015-09-22
>>>> 19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52510 started
>>>>
>>>> Thread-791::INFO::2015-09-22
>>>> 19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52510 stopped
>>>>
>>>> Thread-792::DEBUG::2015-09-22
>>>> 19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-793::INFO::2015-09-22
>>>> 19:12:22,832::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-793::INFO::2015-09-22
>>>> 19:12:22,832::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished:
>>>> {}
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0
>>>> aborting False
>>>>
>>>> Thread-793::DEBUG::2015-09-22
>>>> 19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52511
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52511
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52511)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52511
>>>>
>>>> Thread-794::INFO::2015-09-22
>>>> 19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52511 started
>>>>
>>>> Thread-794::INFO::2015-09-22
>>>> 19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52511 stopped
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Calling 'StoragePool.connectStorageServer' in
>>>> bridge with {u'connectionParams': [{u'id':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], u'storagepoolID':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'domainType': 7}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-795::INFO::2015-09-22
>>>> 19:12:35,521::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: connectStorageServer(domType=7,
>>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>>> conList=[{u'id':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], options=None)
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir)
>>>> Creating directory:
>>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
>>>> mode: None
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd)
>>>> /usr/bin/sudo -n /usr/bin/systemd-run --scope
>>>> --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs
>>>> -o backup-volfile-servers=sjcstorage02:sjcvhost02
>>>> sjcstorage01:/vmstore
>>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
>>>> (cwd None)
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
>>>> glusterDomPath: glusterSD/*
>>>>
>>>> Thread-796::DEBUG::2015-09-22
>>>> 19:12:35,707::__init__::298::IOProcessClient::(_run) Starting
>>>> IOProcess...
>>>>
>>>> Thread-797::DEBUG::2015-09-22
>>>> 19:12:35,712::__init__::298::IOProcessClient::(_run) Starting
>>>> IOProcess...
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains)
>>>> Found SD uuids: ()
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer)
>>>> knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
>>>> storage.glusterSD.findDomain,
>>>> 597d5b5b-7c09-4de9-8840-6993bd9b61a6:
>>>> storage.glusterSD.findDomain,
>>>> ef17fec4-fecf-4d7e-b815-d1db4ef65225:
>>>> storage.glusterSD.findDomain}
>>>>
>>>> Thread-795::INFO::2015-09-22
>>>> 19:12:35,721::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: connectStorageServer, Return
>>>> response: {'statuslist': [{'status': 0, 'id':
>>>> u'00000000-0000-0000-0000-000000000000'}]}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished:
>>>> {'statuslist': [{'status': 0, 'id':
>>>> u'00000000-0000-0000-0000-000000000000'}]}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0
>>>> aborting False
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Return 'StoragePool.connectStorageServer' in bridge
>>>> with [{'status': 0, 'id':
>>>> u'00000000-0000-0000-0000-000000000000'}]
>>>>
>>>> Thread-795::DEBUG::2015-09-22
>>>> 19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Calling 'StoragePool.connectStorageServer' in
>>>> bridge with {u'connectionParams': [{u'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], u'storagepoolID':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'domainType': 7}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-798::INFO::2015-09-22
>>>> 19:12:35,776::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: connectStorageServer(domType=7,
>>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>>> conList=[{u'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], options=None)
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
>>>> glusterDomPath: glusterSD/*
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains)
>>>> Found SD uuids: ()
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer)
>>>> knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
>>>> storage.glusterSD.findDomain,
>>>> 597d5b5b-7c09-4de9-8840-6993bd9b61a6:
>>>> storage.glusterSD.findDomain,
>>>> ef17fec4-fecf-4d7e-b815-d1db4ef65225:
>>>> storage.glusterSD.findDomain}
>>>>
>>>> Thread-798::INFO::2015-09-22
>>>> 19:12:35,782::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: connectStorageServer, Return
>>>> response: {'statuslist': [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished:
>>>> {'statuslist': [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0
>>>> aborting False
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Return 'StoragePool.connectStorageServer' in bridge
>>>> with [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>>>>
>>>> Thread-798::DEBUG::2015-09-22
>>>> 19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Calling 'StorageDomain.create' in bridge with
>>>> {u'name': u'sjcvmstore01', u'domainType': 7,
>>>> u'domainClass': 1, u'typeArgs':
>>>> u'sjcstorage01:/vmstore', u'version': u'3',
>>>> u'storagedomainID':
>>>> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-801::INFO::2015-09-22
>>>> 19:12:35,788::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: createStorageDomain(storageType=7,
>>>> sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
>>>> domainName=u'sjcvmstore01',
>>>> typeSpecificArg=u'sjcstorage01:/vmstore',
>>>> domClass=1, domVersion=u'3', options=None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method
>>>> (storage.sdc.refreshStorage)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method (storage.iscsi.rescan)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::iscsi::431::Storage.ISCSI::(rescan)
>>>> Performing SCSI scan, this will take up to 30 seconds
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
>>>> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd
>>>> None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,821::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,821::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method (storage.hba.rescan)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,821::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,821::hba::56::Storage.HBA::(rescan)
>>>> Starting scan
>>>>
>>>> Thread-802::DEBUG::2015-09-22
>>>> 19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,912::hba::62::Storage.HBA::(rescan) Scan
>>>> finished
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,912::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
>>>> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan)
>>>> SUCCESS: <err> = ''; <rc> = 0
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,936::utils::661::root::(execCmd)
>>>> /sbin/udevadm settle --timeout=5 (cwd None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,946::utils::679::root::(execCmd) SUCCESS:
>>>> <err> = ''; <rc> = 0
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,948::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
>>>> looking for unfetched domain
>>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
>>>> looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs)
>>>> Operation 'lvm reload operation' got the operation
>>>> mutex
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd)
>>>> /usr/bin/sudo -n /usr/sbin/lvm vgs --config '
>>>> devices { preferred_names = ["^/dev/mapper/"]
>>>> ignore_suspended_devices=1 write_cache_state=0
>>>> disable_after_error_count=3
>>>> obtain_device_list_from_udev=0 filter = [
>>>> '\''r|.*|'\'' ] } global { locking_type=1
>>>> prioritise_write_locks=1 wait_for_locks=1
>>>> use_lvmetad=0 } backup { retain_min = 50
>>>> retain_days = 0 } ' --noheadings --units b
>>>> --nosuffix --separator '|' --ignoreskippedcluster
>>>> -o
>>>> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
>>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd)
>>>> FAILED: <err> = ' WARNING: lvmetad is running but
>>>> disabled. Restart lvmetad before enabling it!\n
>>>> Volume group
>>>> "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n
>>>> Cannot process volume group
>>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
>>>>
>>>> Thread-801::WARNING::2015-09-22
>>>> 19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs)
>>>> lvm vgs failed: 5 [] [' WARNING: lvmetad is
>>>> running but disabled. Restart lvmetad before
>>>> enabling it!', ' Volume group
>>>> "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found',
>>>> ' Cannot process volume group
>>>> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs)
>>>> Operation 'lvm reload operation' released the
>>>> operation mutex
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
>>>> domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
>>>>
>>>> Traceback (most recent call last):
>>>>
>>>> File "/usr/share/vdsm/storage/sdc.py", line 142,
>>>> in _findDomain
>>>>
>>>> dom = findMethod(sdUUID)
>>>>
>>>> File "/usr/share/vdsm/storage/sdc.py", line 172,
>>>> in _findUnfetchedDomain
>>>>
>>>> raise se.StorageDomainDoesNotExist(sdUUID)
>>>>
>>>> StorageDomainDoesNotExist: Storage domain does not
>>>> exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
>>>>
>>>> Thread-801::INFO::2015-09-22
>>>> 19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
>>>> sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>>>> domainName=sjcvmstore01
>>>> remotePath=sjcstorage01:/vmstore domClass=1
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,015::__init__::298::IOProcessClient::(_run) Starting
>>>> IOProcess...
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected
>>>> error
>>>>
>>>> Traceback (most recent call last):
>>>>
>>>> File "/usr/share/vdsm/storage/task.py", line 873,
>>>> in _run
>>>>
>>>> return fn(*args, **kargs)
>>>>
>>>> File "/usr/share/vdsm/logUtils.py", line 49, in
>>>> wrapper
>>>>
>>>> res = f(*args, **kwargs)
>>>>
>>>> File "/usr/share/vdsm/storage/hsm.py", line 2697,
>>>> in createStorageDomain
>>>>
>>>> domVersion)
>>>>
>>>> File "/usr/share/vdsm/storage/nfsSD.py", line 84,
>>>> in create
>>>>
>>>> remotePath, storageType, version)
>>>>
>>>> File "/usr/share/vdsm/storage/fileSD.py", line
>>>> 264, in _prepareMetadata
>>>>
>>>> "create meta file '%s' failed: %s" % (metaFile,
>>>> str(e)))
>>>>
>>>> StorageDomainMetadataCreationError: Error creating
>>>> a storage domain's metadata: ("create meta file
>>>> 'outbox' failed: [Errno 5] Input/output error",)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
>>>> d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
>>>> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
>>>> u'sjcvmstore01', u'sjcstorage01:/vmstore', 1, u'3')
>>>> {} failed - stopping task
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping
>>>> in state preparing (force False)
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1
>>>> aborting True
>>>>
>>>> Thread-801::INFO::2015-09-22
>>>> 19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting:
>>>> Task is aborted: "Error creating a storage domain's
>>>> metadata" - code 362
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare:
>>>> aborted: Error creating a storage domain's metadata
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0
>>>> aborting True
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort:
>>>> force False
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
>>>> from state preparing -> state aborting
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting:
>>>> recover policy none
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving
>>>> from state aborting -> state failed
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-801::ERROR::2015-09-22
>>>> 19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper)
>>>> {'status': {'message': 'Error creating a storage
>>>> domain\'s metadata: ("create meta file \'outbox\'
>>>> failed: [Errno 5] Input/output error",)', 'code': 362}}
>>>>
>>>> Thread-801::DEBUG::2015-09-22
>>>> 19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Calling 'StoragePool.disconnectStorageServer' in
>>>> bridge with {u'connectionParams': [{u'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], u'storagepoolID':
>>>> u'00000000-0000-0000-0000-000000000000',
>>>> u'domainType': 7}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-807::INFO::2015-09-22
>>>> 19:12:36,182::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: disconnectStorageServer(domType=7,
>>>> spUUID=u'00000000-0000-0000-0000-000000000000',
>>>> conList=[{u'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
>>>> u'connection': u'sjcstorage01:/vmstore', u'iqn':
>>>> u'', u'user': u'', u'tpgt': u'1', u'vfs_type':
>>>> u'glusterfs', u'password': '********', u'port':
>>>> u''}], options=None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd)
>>>> /usr/bin/sudo -n /usr/bin/umount -f -l
>>>> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore
>>>> (cwd None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method
>>>> (storage.sdc.refreshStorage)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method (storage.iscsi.rescan)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,223::iscsi::431::Storage.ISCSI::(rescan)
>>>> Performing SCSI scan, this will take up to 30 seconds
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
>>>> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd
>>>> None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,258::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,258::misc::733::Storage.SamplingMethod::(__call__)
>>>> Trying to enter sampling method (storage.hba.rescan)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,258::misc::736::Storage.SamplingMethod::(__call__)
>>>> Got in to sampling method
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,258::hba::56::Storage.HBA::(rescan)
>>>> Starting scan
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,350::hba::62::Storage.HBA::(rescan) Scan
>>>> finished
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,350::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
>>>> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan)
>>>> SUCCESS: <err> = ''; <rc> = 0
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,374::utils::661::root::(execCmd)
>>>> /sbin/udevadm settle --timeout=5 (cwd None)
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,383::utils::679::root::(execCmd) SUCCESS:
>>>> <err> = ''; <rc> = 0
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
>>>> Operation 'lvm invalidate operation' got the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
>>>> Operation 'lvm invalidate operation' released the
>>>> operation mutex
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,386::misc::743::Storage.SamplingMethod::(__call__)
>>>> Returning last result
>>>>
>>>> Thread-807::INFO::2015-09-22
>>>> 19:12:36,386::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: disconnectStorageServer, Return
>>>> response: {'statuslist': [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished:
>>>> {'statuslist': [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0
>>>> aborting False
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
>>>> Return 'StoragePool.disconnectStorageServer' in
>>>> bridge with [{'status': 0, 'id':
>>>> u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>>>>
>>>> Thread-807::DEBUG::2015-09-22
>>>> 19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-808::INFO::2015-09-22
>>>> 19:12:37,868::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-808::INFO::2015-09-22
>>>> 19:12:37,868::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished:
>>>> {}
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0
>>>> aborting False
>>>>
>>>> Thread-808::DEBUG::2015-09-22
>>>> 19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52512
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52512
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52512)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52512
>>>>
>>>> Thread-809::INFO::2015-09-22
>>>> 19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52512 started
>>>>
>>>> Thread-809::INFO::2015-09-22
>>>> 19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52512 stopped
>>>>
>>>> Thread-810::DEBUG::2015-09-22
>>>> 19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-811::INFO::2015-09-22
>>>> 19:12:52,902::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-811::INFO::2015-09-22
>>>> 19:12:52,902::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished:
>>>> {}
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0
>>>> aborting False
>>>>
>>>> Thread-811::DEBUG::2015-09-22
>>>> 19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52513
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52513
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52513)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52513
>>>>
>>>> Thread-812::INFO::2015-09-22
>>>> 19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52513 started
>>>>
>>>> Thread-812::INFO::2015-09-22
>>>> 19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52513 stopped
>>>>
>>>> Thread-813::DEBUG::2015-09-22
>>>> 19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving
>>>> from state init -> state preparing
>>>>
>>>> Thread-814::INFO::2015-09-22
>>>> 19:13:07,935::logUtils::48::dispatcher::(wrapper)
>>>> Run and protect: repoStats(options=None)
>>>>
>>>> Thread-814::INFO::2015-09-22
>>>> 19:13:07,935::logUtils::51::dispatcher::(wrapper)
>>>> Run and protect: repoStats, Return response: {}
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
>>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished:
>>>> {}
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
>>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving
>>>> from state preparing -> state finished
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
>>>> Owner.releaseAll requests {} resources {}
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
>>>> Owner.cancelAll requests {}
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
>>>> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0
>>>> aborting False
>>>>
>>>> Thread-814::DEBUG::2015-09-22
>>>> 19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
>>>> Accepting connection from 127.0.0.1:52515
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
>>>> Using required_size=11
>>>>
>>>> Reactor thread::INFO::2015-09-22
>>>> 19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
>>>> Detected protocol xml from 127.0.0.1:52515
>>>>
>>>> Reactor thread::DEBUG::2015-09-22
>>>> 19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket)
>>>> xml over http detected from ('127.0.0.1', 52515)
>>>>
>>>> BindingXMLRPC::INFO::2015-09-22
>>>> 19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>>> Starting request handler for 127.0.0.1:52515
>>>>
>>>> Thread-815::INFO::2015-09-22
>>>> 19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52515 started
>>>>
>>>> Thread-815::INFO::2015-09-22
>>>> 19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
>>>> Request handler for 127.0.0.1:52515 stopped
>>>>
>>>> Thread-816::DEBUG::2015-09-22
>>>> 19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send)
>>>> Sending response
>>>>
>>>>
>>>>
>>>> gluster logs
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> 1: volume vmstore-client-0
>>>>
>>>> 2: type protocol/client
>>>>
>>>> 3: option ping-timeout 42
>>>>
>>>> 4: option remote-host sjcstorage01
>>>>
>>>> 5: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 6: option transport-type socket
>>>>
>>>> 7: option send-gids true
>>>>
>>>> 8: end-volume
>>>>
>>>> 9:
>>>>
>>>> 10: volume vmstore-client-1
>>>>
>>>> 11: type protocol/client
>>>>
>>>> 12: option ping-timeout 42
>>>>
>>>> 13: option remote-host sjcstorage02
>>>>
>>>> 14: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 15: option transport-type socket
>>>>
>>>> 16: option send-gids true
>>>>
>>>> 17: end-volume
>>>>
>>>> 18:
>>>>
>>>> 19: volume vmstore-client-2
>>>>
>>>> 20: type protocol/client
>>>>
>>>> 21: option ping-timeout 42
>>>>
>>>> 22: option remote-host sjcvhost02
>>>>
>>>> 23: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 24: option transport-type socket
>>>>
>>>> 25: option send-gids true
>>>>
>>>> 26: end-volume
>>>>
>>>> 27:
>>>>
>>>> 28: volume vmstore-replicate-0
>>>>
>>>> 29: type cluster/replicate
>>>>
>>>> 30: option arbiter-count 1
>>>>
>>>> 31: subvolumes vmstore-client-0 vmstore-client-1
>>>> vmstore-client-2
>>>>
>>>> 32: end-volume
>>>>
>>>> 33:
>>>>
>>>> 34: volume vmstore-dht
>>>>
>>>> 35: type cluster/distribute
>>>>
>>>> 36: subvolumes vmstore-replicate-0
>>>>
>>>> 37: end-volume
>>>>
>>>> 38:
>>>>
>>>> 39: volume vmstore-write-behind
>>>>
>>>> 40: type performance/write-behind
>>>>
>>>> 41: subvolumes vmstore-dht
>>>>
>>>> 42: end-volume
>>>>
>>>> 43:
>>>>
>>>> 44: volume vmstore-read-ahead
>>>>
>>>> 45: type performance/read-ahead
>>>>
>>>> 46: subvolumes vmstore-write-behind
>>>>
>>>> 47: end-volume
>>>>
>>>> 48:
>>>>
>>>> 49: volume vmstore-readdir-ahead
>>>>
>>>> 50: type performance/readdir-ahead
>>>>
>>>> 51: subvolumes vmstore-read-ahead
>>>>
>>>> 52: end-volume
>>>>
>>>> 53:
>>>>
>>>> 54: volume vmstore-io-cache
>>>>
>>>> 55: type performance/io-cache
>>>>
>>>> 56: subvolumes vmstore-readdir-ahead
>>>>
>>>> 57: end-volume
>>>>
>>>> 58:
>>>>
>>>> 59: volume vmstore-quick-read
>>>>
>>>> 60: type performance/quick-read
>>>>
>>>> 61: subvolumes vmstore-io-cache
>>>>
>>>> 62: end-volume
>>>>
>>>> 63:
>>>>
>>>> 64: volume vmstore-open-behind
>>>>
>>>> 65: type performance/open-behind
>>>>
>>>> 66: subvolumes vmstore-quick-read
>>>>
>>>> 67: end-volume
>>>>
>>>> 68:
>>>>
>>>> 69: volume vmstore-md-cache
>>>>
>>>> 70: type performance/md-cache
>>>>
>>>> 71: subvolumes vmstore-open-behind
>>>>
>>>> 72: end-volume
>>>>
>>>> 73:
>>>>
>>>> 74: volume vmstore
>>>>
>>>> 75: type debug/io-stats
>>>>
>>>> 76: option latency-measurement off
>>>>
>>>> 77: option count-fop-hits off
>>>>
>>>> 78: subvolumes vmstore-md-cache
>>>>
>>>> 79: end-volume
>>>>
>>>> 80:
>>>>
>>>> 81: volume meta-autoload
>>>>
>>>> 82: type meta
>>>>
>>>> 83: subvolumes vmstore
>>>>
>>>> 84: end-volume
>>>>
>>>> 85:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> [2015-09-22 05:29:07.586205] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-0: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:29:07.586325] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-1: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:29:07.586480] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-2: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:29:07.595052] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-0: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:29:07.595397] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-1: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:29:07.595576] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-2: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:29:07.595721] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Connected to vmstore-client-0,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:29:07.595738] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:29:07.596044] I [MSGID: 108005]
>>>> [afr-common.c:3998:afr_notify]
>>>> 0-vmstore-replicate-0: Subvolume 'vmstore-client-0'
>>>> came back up; going online.
>>>>
>>>> [2015-09-22 05:29:07.596170] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Connected to vmstore-client-1,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:29:07.596189] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:29:07.596495] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Connected to vmstore-client-2,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:29:07.596506] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:29:07.608758] I
>>>> [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse:
>>>> switched to graph 0
>>>>
>>>> [2015-09-22 05:29:07.608910] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-0: Server lk version = 1
>>>>
>>>> [2015-09-22 05:29:07.608936] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-1: Server lk version = 1
>>>>
>>>> [2015-09-22 05:29:07.608950] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-2: Server lk version = 1
>>>>
>>>> [2015-09-22 05:29:07.609695] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 2
>>>>
>>>> [2015-09-22 05:29:07.609868] I
>>>> [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse:
>>>> FUSE inited with protocol versions: glusterfs 7.22
>>>> kernel 7.22
>>>>
>>>> [2015-09-22 05:29:07.616577] I [MSGID: 109063]
>>>> [dht-layout.c:702:dht_layout_normalize]
>>>> 0-vmstore-dht: Found anomalies in / (gfid =
>>>> 00000000-0000-0000-0000-000000000001). Holes=1
>>>> overlaps=0
>>>>
>>>> [2015-09-22 05:29:07.620230] I [MSGID: 109036]
>>>> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
>>>> 0-vmstore-dht: Setting layout of / with
>>>> [Subvol_name: vmstore-replicate-0, Err: -1 , Start:
>>>> 0 , Stop: 4294967295 , Hash: 1 ],
>>>>
>>>> [2015-09-22 05:29:08.122415] W
>>>> [fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse:
>>>> 26: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No
>>>> data available)
>>>>
>>>> [2015-09-22 05:29:08.137359] I [MSGID:
>>>> 109036]
>>>> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
>>>> 0-vmstore-dht: Setting layout of
>>>> /061b73d5-ae59-462e-b674-ea9c60d436c2 with
>>>> [Subvol_name: vmstore-replicate-0, Err: -1 , Start:
>>>> 0 , Stop: 4294967295 , Hash: 1 ],
>>>>
>>>> [2015-09-22 05:29:08.145835] I [MSGID: 109036]
>>>> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
>>>> 0-vmstore-dht: Setting layout of
>>>> /061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
>>>> [Subvol_name: vmstore-replicate-0, Err: -1 , Start:
>>>> 0 , Stop: 4294967295 , Hash: 1 ],
>>>>
>>>> [2015-09-22 05:30:57.897819] I [MSGID: 100030]
>>>> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs:
>>>> Started running /usr/sbin/glusterfs version 3.7.4
>>>> (args: /usr/sbin/glusterfs
>>>> --volfile-server=sjcvhost02
>>>> --volfile-server=sjcstorage01
>>>> --volfile-server=sjcstorage02 --volfile-id=/vmstore
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>>>>
>>>> [2015-09-22 05:30:57.909889] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 1
>>>>
>>>> [2015-09-22 05:30:57.923087] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-0: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:30:57.925701] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-1: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:30:57.927984] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-2: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> Final graph:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> 1: volume vmstore-client-0
>>>>
>>>> 2: type protocol/client
>>>>
>>>> 3: option ping-timeout 42
>>>>
>>>> 4: option remote-host sjcstorage01
>>>>
>>>> 5: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 6: option transport-type socket
>>>>
>>>> 7: option send-gids true
>>>>
>>>> 8: end-volume
>>>>
>>>> 9:
>>>>
>>>> 10: volume vmstore-client-1
>>>>
>>>> 11: type protocol/client
>>>>
>>>> 12: option ping-timeout 42
>>>>
>>>> 13: option remote-host sjcstorage02
>>>>
>>>> 14: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 15: option transport-type socket
>>>>
>>>> 16: option send-gids true
>>>>
>>>> 17: end-volume
>>>>
>>>> 18:
>>>>
>>>> 19: volume vmstore-client-2
>>>>
>>>> 20: type protocol/client
>>>>
>>>> 21: option ping-timeout 42
>>>>
>>>> 22: option remote-host sjcvhost02
>>>>
>>>> 23: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 24: option transport-type socket
>>>>
>>>> 25: option send-gids true
>>>>
>>>> 26: end-volume
>>>>
>>>> 27:
>>>>
>>>> 28: volume vmstore-replicate-0
>>>>
>>>> 29: type cluster/replicate
>>>>
>>>> 30: option arbiter-count 1
>>>>
>>>> 31: subvolumes vmstore-client-0 vmstore-client-1
>>>> vmstore-client-2
>>>>
>>>> 32: end-volume
>>>>
>>>> 33:
>>>>
>>>> 34: volume vmstore-dht
>>>>
>>>> 35: type cluster/distribute
>>>>
>>>> 36: subvolumes vmstore-replicate-0
>>>>
>>>> 37: end-volume
>>>>
>>>> 38:
>>>>
>>>> 39: volume vmstore-write-behind
>>>>
>>>> 40: type performance/write-behind
>>>>
>>>> 41: subvolumes vmstore-dht
>>>>
>>>> 42: end-volume
>>>>
>>>> 43:
>>>>
>>>> 44: volume vmstore-read-ahead
>>>>
>>>> 45: type performance/read-ahead
>>>>
>>>> 46: subvolumes vmstore-write-behind
>>>>
>>>> 47: end-volume
>>>>
>>>> 48:
>>>>
>>>> 49: volume vmstore-readdir-ahead
>>>>
>>>> 50: type performance/readdir-ahead
>>>>
>>>> 51: subvolumes vmstore-read-ahead
>>>>
>>>> 52: end-volume
>>>>
>>>> 53:
>>>>
>>>> 54: volume vmstore-io-cache
>>>>
>>>> 55: type performance/io-cache
>>>>
>>>> 56: subvolumes vmstore-readdir-ahead
>>>>
>>>> 57: end-volume
>>>>
>>>> 58:
>>>>
>>>> 59: volume vmstore-quick-read
>>>>
>>>> 60: type performance/quick-read
>>>>
>>>> 61: subvolumes vmstore-io-cache
>>>>
>>>> 62: end-volume
>>>>
>>>> 63:
>>>>
>>>> 64: volume vmstore-open-behind
>>>>
>>>> 65: type performance/open-behind
>>>>
>>>> 66: subvolumes vmstore-quick-read
>>>>
>>>> 67: end-volume
>>>>
>>>> 68:
>>>>
>>>> 69: volume vmstore-md-cache
>>>>
>>>> 70: type performance/md-cache
>>>>
>>>> 71: subvolumes vmstore-open-behind
>>>>
>>>> 72: end-volume
>>>>
>>>> 73:
>>>>
>>>> 74: volume vmstore
>>>>
>>>> 75: type debug/io-stats
>>>>
>>>> 76: option latency-measurement off
>>>>
>>>> 77: option count-fop-hits off
>>>>
>>>> 78: subvolumes vmstore-md-cache
>>>>
>>>> 79: end-volume
>>>>
>>>> 80:
>>>>
>>>> 81: volume meta-autoload
>>>>
>>>> 82: type meta
>>>>
>>>> 83: subvolumes vmstore
>>>>
>>>> 84: end-volume
>>>>
>>>> 85:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> [2015-09-22 05:30:57.934021] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-0: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:30:57.934145] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-1: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:30:57.934491] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-2: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:30:57.942198] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-0: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:30:57.942545] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-1: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:30:57.942659] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-2: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:30:57.942797] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Connected to vmstore-client-0,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:30:57.942808] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:30:57.943036] I [MSGID: 108005]
>>>> [afr-common.c:3998:afr_notify]
>>>> 0-vmstore-replicate-0: Subvolume 'vmstore-client-0'
>>>> came back up; going online.
>>>>
>>>> [2015-09-22 05:30:57.943078] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Connected to vmstore-client-1,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:30:57.943086] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:30:57.943292] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Connected to vmstore-client-2,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:30:57.943302] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:30:57.953887] I
>>>> [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse:
>>>> switched to graph 0
>>>>
>>>> [2015-09-22 05:30:57.954071] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-0: Server lk version = 1
>>>>
>>>> [2015-09-22 05:30:57.954105] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-1: Server lk version = 1
>>>>
>>>> [2015-09-22 05:30:57.954124] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-2: Server lk version = 1
>>>>
>>>> [2015-09-22 05:30:57.955282] I
>>>> [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse:
>>>> FUSE inited with protocol versions: glusterfs 7.22
>>>> kernel 7.22
>>>>
>>>> [2015-09-22 05:30:57.955738] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 2
>>>>
>>>> [2015-09-22 05:30:57.970232] I
>>>> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse:
>>>> unmounting
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>>>>
>>>> [2015-09-22 05:30:57.970834] W
>>>> [glusterfsd.c:1219:cleanup_and_exit]
>>>> (-->/lib64/libpthread.so.0(+0x7df5)
>>>> [0x7f187139fdf5]
>>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>>>> [0x7f1872a09785]
>>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
>>>> [0x7f1872a09609] ) 0-: received signum (15),
>>>> shutting down
>>>>
>>>> [2015-09-22 05:30:57.970848] I
>>>> [fuse-bridge.c:5595:fini] 0-fuse: Unmounting
>>>> '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>>>>
>>>> [2015-09-22 05:30:58.420973] I
>>>> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse:
>>>> unmounting
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>>>>
>>>> [2015-09-22 05:30:58.421355] W
>>>> [glusterfsd.c:1219:cleanup_and_exit]
>>>> (-->/lib64/libpthread.so.0(+0x7df5)
>>>> [0x7f8267cd4df5]
>>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>>>> [0x7f826933e785]
>>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
>>>> [0x7f826933e609] ) 0-: received signum (15),
>>>> shutting down
>>>>
>>>> [2015-09-22 05:30:58.421369] I
>>>> [fuse-bridge.c:5595:fini] 0-fuse: Unmounting
>>>> '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>>>>
>>>> [2015-09-22 05:31:09.534410] I [MSGID: 100030]
>>>> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs:
>>>> Started running /usr/sbin/glusterfs version 3.7.4
>>>> (args: /usr/sbin/glusterfs
>>>> --volfile-server=sjcvhost02
>>>> --volfile-server=sjcstorage01
>>>> --volfile-server=sjcstorage02 --volfile-id=/vmstore
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>>>>
>>>> [2015-09-22 05:31:09.545686] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 1
>>>>
>>>> [2015-09-22 05:31:09.553019] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-0: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:31:09.555552] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-1: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:31:09.557989] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-2: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> Final graph:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> 1: volume vmstore-client-0
>>>>
>>>> 2: type protocol/client
>>>>
>>>> 3: option ping-timeout 42
>>>>
>>>> 4: option remote-host sjcstorage01
>>>>
>>>> 5: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 6: option transport-type socket
>>>>
>>>> 7: option send-gids true
>>>>
>>>> 8: end-volume
>>>>
>>>> 9:
>>>>
>>>> 10: volume vmstore-client-1
>>>>
>>>> 11: type protocol/client
>>>>
>>>> 12: option ping-timeout 42
>>>>
>>>> 13: option remote-host sjcstorage02
>>>>
>>>> 14: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 15: option transport-type socket
>>>>
>>>> 16: option send-gids true
>>>>
>>>> 17: end-volume
>>>>
>>>> 18:
>>>>
>>>> 19: volume vmstore-client-2
>>>>
>>>> 20: type protocol/client
>>>>
>>>> 21: option ping-timeout 42
>>>>
>>>> 22: option remote-host sjcvhost02
>>>>
>>>> 23: option remote-subvolume
>>>> /export/vmstore/brick01
>>>>
>>>> 24: option transport-type socket
>>>>
>>>> 25: option send-gids true
>>>>
>>>> 26: end-volume
>>>>
>>>> 27:
>>>>
>>>> 28: volume vmstore-replicate-0
>>>>
>>>> 29: type cluster/replicate
>>>>
>>>> 30: option arbiter-count 1
>>>>
>>>> 31: subvolumes vmstore-client-0 vmstore-client-1
>>>> vmstore-client-2
>>>>
>>>> 32: end-volume
>>>>
>>>> 33:
>>>>
>>>> 34: volume vmstore-dht
>>>>
>>>> 35: type cluster/distribute
>>>>
>>>> 36: subvolumes vmstore-replicate-0
>>>>
>>>> 37: end-volume
>>>>
>>>> 38:
>>>>
>>>> 39: volume vmstore-write-behind
>>>>
>>>> 40: type performance/write-behind
>>>>
>>>> 41: subvolumes vmstore-dht
>>>>
>>>> 42: end-volume
>>>>
>>>> 43:
>>>>
>>>> 44: volume vmstore-read-ahead
>>>>
>>>> 45: type performance/read-ahead
>>>>
>>>> 46: subvolumes vmstore-write-behind
>>>>
>>>> 47: end-volume
>>>>
>>>> 48:
>>>>
>>>> 49: volume vmstore-readdir-ahead
>>>>
>>>> 50: type performance/readdir-ahead
>>>>
>>>> 51: subvolumes vmstore-read-ahead
>>>>
>>>> 52: end-volume
>>>>
>>>> 53:
>>>>
>>>> 54: volume vmstore-io-cache
>>>>
>>>> 55: type performance/io-cache
>>>>
>>>> 56: subvolumes vmstore-readdir-ahead
>>>>
>>>> 57: end-volume
>>>>
>>>> 58:
>>>>
>>>> 59: volume vmstore-quick-read
>>>>
>>>> 60: type performance/quick-read
>>>>
>>>> 61: subvolumes vmstore-io-cache
>>>>
>>>> 62: end-volume
>>>>
>>>> 63:
>>>>
>>>> 64: volume vmstore-open-behind
>>>>
>>>> 65: type performance/open-behind
>>>>
>>>> 66: subvolumes vmstore-quick-read
>>>>
>>>> 67: end-volume
>>>>
>>>> 68:
>>>>
>>>> 69: volume vmstore-md-cache
>>>>
>>>> 70: type performance/md-cache
>>>>
>>>> 71: subvolumes vmstore-open-behind
>>>>
>>>> 72: end-volume
>>>>
>>>> 73:
>>>>
>>>> 74: volume vmstore
>>>>
>>>> 75: type debug/io-stats
>>>>
>>>> 76: option latency-measurement off
>>>>
>>>> 77: option count-fop-hits off
>>>>
>>>> 78: subvolumes vmstore-md-cache
>>>>
>>>> 79: end-volume
>>>>
>>>> 80:
>>>>
>>>> 81: volume meta-autoload
>>>>
>>>> 82: type meta
>>>>
>>>> 83: subvolumes vmstore
>>>>
>>>> 84: end-volume
>>>>
>>>> 85:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>>
>>>> [2015-09-22 05:31:09.563262] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-0: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:31:09.563431] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-1: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:31:09.563877] I
>>>> [rpc-clnt.c:1851:rpc_clnt_reconfig]
>>>> 0-vmstore-client-2: changing port to 49153 (from 0)
>>>>
>>>> [2015-09-22 05:31:09.572443] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-1: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:31:09.572599] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-0: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:31:09.572742] I [MSGID: 114057]
>>>> [client-handshake.c:1437:select_server_supported_programs]
>>>> 0-vmstore-client-2: Using Program GlusterFS 3.3,
>>>> Num (1298437), Version (330)
>>>>
>>>> [2015-09-22 05:31:09.573165] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Connected to vmstore-client-1,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:31:09.573186] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-1: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:31:09.573395] I [MSGID: 108005]
>>>> [afr-common.c:3998:afr_notify]
>>>> 0-vmstore-replicate-0: Subvolume 'vmstore-client-1'
>>>> came back up; going online.
>>>>
>>>> [2015-09-22 05:31:09.573427] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Connected to vmstore-client-0,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:31:09.573435] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-0: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:31:09.573754] I [MSGID: 114046]
>>>> [client-handshake.c:1213:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Connected to vmstore-client-2,
>>>> attached to remote volume '/export/vmstore/brick01'.
>>>>
>>>> [2015-09-22 05:31:09.573783] I [MSGID: 114047]
>>>> [client-handshake.c:1224:client_setvolume_cbk]
>>>> 0-vmstore-client-2: Server and Client lk-version
>>>> numbers are not same, reopening the fds
>>>>
>>>> [2015-09-22 05:31:09.577192] I
>>>> [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse:
>>>> switched to graph 0
>>>>
>>>> [2015-09-22 05:31:09.577302] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-1: Server lk version = 1
>>>>
>>>> [2015-09-22 05:31:09.577325] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-0: Server lk version = 1
>>>>
>>>> [2015-09-22 05:31:09.577339] I [MSGID: 114035]
>>>> [client-handshake.c:193:client_set_lk_version_cbk]
>>>> 0-vmstore-client-2: Server lk version = 1
>>>>
>>>> [2015-09-22 05:31:09.578125] I
>>>> [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse:
>>>> FUSE inited with protocol versions: glusterfs 7.22
>>>> kernel 7.22
>>>>
>>>> [2015-09-22 05:31:09.578636] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 2
>>>>
>>>> [2015-09-22 05:31:10.073698] I
>>>> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse:
>>>> unmounting
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>>>>
>>>> [2015-09-22 05:31:10.073977] W
>>>> [glusterfsd.c:1219:cleanup_and_exit]
>>>> (-->/lib64/libpthread.so.0(+0x7df5)
>>>> [0x7f6b9ba88df5]
>>>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>>>> [0x7f6b9d0f2785]
>>>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
>>>> [0x7f6b9d0f2609] ) 0-: received signum (15),
>>>> shutting down
>>>>
>>>> [2015-09-22 05:31:10.073993] I
>>>> [fuse-bridge.c:5595:fini] 0-fuse: Unmounting
>>>> '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>>>>
>>>> [2015-09-22 05:31:20.184700] I [MSGID: 100030]
>>>> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs:
>>>> Started running /usr/sbin/glusterfs version 3.7.4
>>>> (args: /usr/sbin/glusterfs
>>>> --volfile-server=sjcvhost02
>>>> --volfile-server=sjcstorage01
>>>> --volfile-server=sjcstorage02 --volfile-id=/vmstore
>>>> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>>>>
>>>> [2015-09-22 05:31:20.194928] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker]
>>>> 0-epoll: Started thread with index 1
>>>>
>>>> [2015-09-22 05:31:20.200701] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-0: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:31:20.203110] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-1: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> [2015-09-22 05:31:20.205708] I [MSGID: 114020]
>>>> [client.c:2118:notify] 0-vmstore-client-2: parent
>>>> translators are ready, attempting connect on transport
>>>>
>>>> Final graph:
>>>>
>>>>
>>>>
>>>> Hope this helps.
>>>>
>>>>
>>>> thanks again
>>>>
>>>>
>>>> Brett Stevens
>>>>
>>>>
>>>>
>>>> On Tue, Sep 22, 2015 at 10:14 PM, Sahina Bose
>>>> <sabose@redhat.com> wrote:
>>>>
>>>>
>>>>
>>>> On 09/22/2015 02:17 PM, Brett Stevens wrote:
>>>>> Hi. First time on the lists. I've searched for
>>>>> this but no luck so sorry if this has been
>>>>> covered before.
>>>>>
>>>>> I'm working with the latest 3.6 beta with the
>>>>> following infrastructure.
>>>>>
>>>>> 1 management host (to be used for a number of
>>>>> tasks so chose not to use self hosted, we are
>>>>> a school and will need to keep an eye on
>>>>> hardware costs)
>>>>> 2 compute nodes
>>>>> 2 gluster nodes
>>>>>
>>>>> so far built one gluster volume using the
>>>>> gluster cli to give me 2 nodes and one arbiter
>>>>> node (management host)
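
(For reference, a two-data-brick plus arbiter volume of the kind described here is normally created with a single gluster CLI call; the host and brick names below are the ones that appear later in this thread and are purely illustrative:

    gluster volume create vmstore replica 3 arbiter 1 \
        sjcstorage01:/export/vmstore/brick01 \
        sjcstorage02:/export/vmstore/brick01 \
        sjcvhost02:/export/vmstore/brick01
)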
>>>>>
>>>>> so far, every time I create a volume, it shows
>>>>> up straight away on the ovirt gui. however no
>>>>> matter what I try, I cannot create or import
>>>>> it as a data domain.
>>>>>
>>>>> the current error in the ovirt gui is "Error
>>>>> while executing action
>>>>> AddGlusterFsStorageDomain: Error creating a
>>>>> storage domain's metadata"
>>>>
>>>> Please provide vdsm and gluster logs
>>>>
>>>>>
>>>>> logs, continuously rolling the following
>>>>> errors around
>>>>>
>>>>> Scheduler_Worker-53) [] START,
>>>>> GlusterVolumesListVDSCommand(HostName =
>>>>> sjcstorage02,
>>>>> GlusterVolumesListVDSParameters:{runAsync='true',
>>>>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
>>>>> log id: 24198fbf
>>>>>
>>>>> 2015-09-22 03:57:29,903 WARN
>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>>> (DefaultQuartzScheduler_Worker-53) [] Could
>>>>> not associate brick
>>>>> 'sjcstorage01:/export/vmstore/brick01' of
>>>>> volume '878a316d-2394-4aae-bdf8-e10eea38225e'
>>>>> with correct network as no gluster network
>>>>> found in cluster
>>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>>
>>>>
>>>> What is the hostname provided in ovirt engine
>>>> for sjcstorage01 ? Does this host have multiple
>>>> nics?
>>>>
>>>> Could you provide output of gluster volume info?
>>>> Please note, that these errors are not related
>>>> to error in creating storage domain. However,
>>>> these errors could prevent you from monitoring
>>>> the state of gluster volume from oVirt
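
(The details asked for above can be gathered on any of the gluster nodes with standard commands; shown here only as a pointer, output will of course differ per setup:

    gluster volume info vmstore
    gluster peer status
    ip addr show    # to check whether the host has more than one nic/address
)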
>>>>
>>>>> 2015-09-22 03:57:29,905 WARN
>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>>> (DefaultQuartzScheduler_Worker-53) [] Could
>>>>> not associate brick
>>>>> 'sjcstorage02:/export/vmstore/brick01' of
>>>>> volume '878a316d-2394-4aae-bdf8-e10eea38225e'
>>>>> with correct network as no gluster network
>>>>> found in cluster
>>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>>
>>>>> 2015-09-22 03:57:29,905 WARN
>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>>>>> (DefaultQuartzScheduler_Worker-53) [] Could
>>>>> not add brick
>>>>> 'sjcvhost02:/export/vmstore/brick01' to volume
>>>>> '878a316d-2394-4aae-bdf8-e10eea38225e' -
>>>>> server uuid
>>>>> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not
>>>>> found in cluster
>>>>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>>>>
>>>>> 2015-09-22 03:57:29,905 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>>> (DefaultQuartzScheduler_Worker-53) [] FINISH,
>>>>> GlusterVolumesListVDSCommand, return:
>>>>> {878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
>>>>> log id: 24198fbf
>>>>>
>>>>>
>>>>> I'm new to ovirt and gluster, so any help
>>>>> would be great
>>>>>
>>>>>
>>>>> thanks
>>>>>
>>>>>
>>>>> Brett Stevens
>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users@ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>>
>>
>>
>
>
>
>
>

Hi Brett,
Can you truncate the gluster brick and mount logs on all three nodes, try creating the storage domain again and then share these logs along with the VDSM logs?

i.e. on all 3 nodes,
1. echo > /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
2. echo > export-vmstore-brick01.log
3. Create the storage domain (at which point VDSM supposedly fails with the truncate error)
4. Share the logs.

Also, what timezone are you in? That would be needed to correlate the timestamps in the vdsm log (local time) and gluster log (UTC).

Thanks!
Ravi
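
(As a concrete sketch of the steps above - the brick-log path under /var/log/glusterfs/bricks/, the vdsm.log location and the use of tar for collection are assumptions, not something stated in the thread:

    # run on each of the three nodes
    > /var/log/glusterfs/rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
    > /var/log/glusterfs/bricks/export-vmstore-brick01.log   # adjust to wherever the brick log actually lives
    # retry the storage domain creation from the engine, then bundle the logs
    tar czf /tmp/gluster-vdsm-logs-$(hostname).tar.gz /var/log/glusterfs /var/log/vdsm/vdsm.log
)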

-------- Forwarded Message --------
Subject: Re: [ovirt-users] adding gluster domains
Date: Tue, 29 Sep 2015 08:38:49 +1000
From: Brett Stevens <gorttman@i3sec.com.au>
Reply-To: brett@i3sec.com.au
To: Sahina Bose <sabose@redhat.com>
<div dir="ltr">Sorry about the delay, I've run the truncate. I'm
not sure what results you were expecting, but it executed
fine, no delays no errors no problems.
<div><br>
</div>
<div>thanks</div>
<div>Brett Stevens</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Sep 24, 2015 at 7:29 PM,
Brett Stevens <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:gorttman@i3sec.com.au" target="_blank">gorttman(a)i3sec.com.au</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Thanks I'll do that tomorrow morning.
<div><br>
</div>
<div>Just out of interest, I keep getting warn errors in
the engine.log allong the lines of node not present
(sjcvhost02 which is the arbiter) and no gluster
network present even after I have added the gluster
network option in the network management gui.</div>
<div><br>
</div>
<div>thanks</div>
<div><br>
</div>
<div>Brett Stevens</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Thu, Sep 24, 2015 at 7:26
PM, Sahina Bose <span dir="ltr"><<a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:sabose@redhat.com"><a class="moz-txt-link-abbreviated" href="mailto:sabose@redhat.com">sabose(a)redhat.com</a></a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> Sorry, I
intended to forward it to a gluster devel.<br>
<br>
Btw, there were no errors in the mount log - so
unable to root cause why truncate of file failed
with IO error. Was the log from vhost03 -
/var/log/glusterfs/<span>rhev-data-center-mnt-glusterSD-sjcstorage01:_vmstore.log
?<br>
<br>
We will look into the logs you attached to see
if there are any errors reported at the bricks.
(But there should have been some error in mount
log!)<br>
<br>
Could you also try "truncate -s 10M test" from
the mount point ( manually mount gluster using -
#mount -t glusterfs </span><span><span>sjcstorage01:/vmstore
<mountpoint>) and report results.</span><br>
</span><br>
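
(Spelled out, that manual check is only a few commands; /mnt/gtest is an arbitrary example mount point:

    mkdir -p /mnt/gtest
    mount -t glusterfs sjcstorage01:/vmstore /mnt/gtest
    truncate -s 10M /mnt/gtest/test && echo OK   # an "Input/output error" here reproduces the failure VDSM reports
    umount /mnt/gtest
)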

On 09/24/2015 02:32 PM, Brett Stevens wrote:

Hi Sahina.

Something has gone wrong with your last email. I have received a message from you, but did not get any text to go with it. Could you resend please?

thanks

On Thu, Sep 24, 2015 at 6:48 PM, Sahina Bose <sabose@redhat.com> wrote:
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"> <br>
<br>
<div>On 09/24/2015 04:21 AM, Brett
Stevens wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi Sahina.
<div><br>
</div>
<div>vhost02 is the engine node
vhost03 is the hypervisor
storage01 and 02 the gluster
nodes. I've put arbiter on vhost02</div>
<div><br>
</div>
<div>all tasks are separated (except
engine and arbiter) </div>
<div><br>
</div>
<div>thanks</div>
<div><br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Wed, Sep
23, 2015 at 9:48 PM, Sahina Bose <span
dir="ltr"><<a
moz-do-not-send="true"
class="moz-txt-link-abbreviated"
href="mailto:sabose@redhat.com"><a class="moz-txt-link-abbreviated" href="mailto:sabose@redhat.com">sabose(a)redhat.com</a></a>></span> wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex">
<div text="#000000"
bgcolor="#FFFFFF"> +
ovirt-users<br>
<br>
Some clarity on your setup - <br>
<span>sjcvhost03 - is this
your arbiter node and ovirt
management node? And are you
running a compute + storage
on the same nodes - i.e, </span><span>sjcstorage01,
</span><span>sjcstorage02, </span><span>sjcvhost03
(arbiter).<br>
<br>
</span><br>
<span>
CreateStorageDomainVDSCommand(HostName
= sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}),
log id: b9fe587<br>
<br>
- fails with </span><span>Error
creating a storage domain's
metadata: ("create meta file
'outbox' failed: [Errno 5]
Input/output error",<br>
<br>
Are the vdsm logs you
provided from </span><span>sjcvhost03?
There are no errors to be
seen in the gluster log you
provided. Could you provide
mount log from </span><span><span>sjcvhost03</span>
(at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log
most likely)<br>
If possible,
/var/log/glusterfs/* from
the 3 storage nodes.<br>
<br>
thanks<br>
sahina<br>
<br>
</span>

On 09/23/2015 05:02 AM, Brett Stevens wrote:

Hi Sahina,

as requested here is some logs taken during a domain create.

2015-09-22 18:46:44,320 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-88) [] START, GlusterVolumesListVDSCommand(HostName = sjcstorage01, GlusterVolumesListVDSParameters:{runAsync='true', hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 2205ff1

2015-09-22 18:46:44,413 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-88) [] Could not associate brick 'sjcstorage01:/export/vmstore/brick01' of volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'

2015-09-22 18:46:44,417 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-88) [] Could not associate brick 'sjcstorage02:/export/vmstore/brick01' of volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'

2015-09-22 18:46:44,417 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-88) [] Could not add brick 'sjcvhost02:/export/vmstore/brick01' to volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'

2015-09-22 18:46:44,418 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-88) [] FINISH, GlusterVolumesListVDSCommand, return: {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36}, log id: 2205ff1

2015-09-22 18:46:45,215 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-24) [5099cda3] Lock Acquired to object 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'

2015-09-22 18:46:45,230 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-24) [5099cda3] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN

2015-09-22 18:46:45,233 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-24) [5099cda3] START, ConnectStorageServerVDSCommand(HostName = sjcvhost03, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='null', connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 6a112292

2015-09-22 18:46:48,065 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-24) [5099cda3] FINISH, ConnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=0}, log id: 6a112292

2015-09-22 18:46:48,073 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-24) [5099cda3] Lock freed to object 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'

2015-09-22 18:46:48,188 INFO [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Running command: AddGlusterFsStorageDomainCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN

2015-09-22 18:46:48,206 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-23) [6410419] START, ConnectStorageServerVDSCommand(HostName = sjcvhost03, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e', connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 38a2b0d

2015-09-22 18:46:48,219 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-23) [6410419] FINISH, ConnectStorageServerVDSCommand, return: {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 38a2b0d

2015-09-22 18:46:48,221 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] START, CreateStorageDomainVDSCommand(HostName = sjcvhost03, CreateStorageDomainVDSCommandParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storageDomain='StorageDomainStatic:{name='sjcvmstore', id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}', args='sjcstorage01:/vmstore'}), log id: b9fe587

2015-09-22 18:46:48,744 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [6410419] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM sjcvhost03 command failed: Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)

2015-09-22 18:46:48,744 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand' return value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc [code=362, message=Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)]]'

2015-09-22 18:46:48,744 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] HostName = sjcvhost03

2015-09-22 18:46:48,745 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] Command 'CreateStorageDomainVDSCommand(HostName = sjcvhost03, CreateStorageDomainVDSCommandParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storageDomain='StorageDomainStatic:{name='sjcvmstore', id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}', args='sjcstorage01:/vmstore'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS, error = Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)

2015-09-22 18:46:48,745 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-23) [6410419] FINISH, CreateStorageDomainVDSCommand, log id: b9fe587

2015-09-22 18:46:48,745 ERROR [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Command 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS, error = Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",) (Failed with error StorageDomainMetadataCreationError and code 362)

2015-09-22 18:46:48,755 INFO [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Command [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.

2015-09-22 18:46:48,758 INFO [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Command [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.

2015-09-22 18:46:48,769 ERROR [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-23) [6410419] Transaction rolled-back for command 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.

2015-09-22 18:46:48,784 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [6410419] Correlation ID: 6410419, Job ID: 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack: null, Custom Event ID: -1, Message: Failed to add Storage Domain sjcvmstore. (User: admin@internal)

2015-09-22 18:46:48,996 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-32) [1635a244] Lock Acquired to object 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>, sjcstorage01:/vmstore=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'

2015-09-22 18:46:49,018 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-32) [1635a244] Running command: RemoveStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN

2015-09-22 18:46:49,024 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-32) [1635a244] Removing connection 'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database

2015-09-22 18:46:49,026 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default task-32) [1635a244] START, DisconnectStorageServerVDSCommand(HostName = sjcvhost03, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e', connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 39d3b568

2015-09-22 18:46:49,248 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default task-32) [1635a244] FINISH, DisconnectStorageServerVDSCommand, return: {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 39d3b568

2015-09-22 18:46:49,252 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-32) [1635a244] Lock freed to object 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>, sjcstorage01:/vmstore=<STORAGE_CONNECTION, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'

2015-09-22 18:46:49,431 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-3) [] START, GlusterVolumesListVDSCommand(HostName = sjcstorage01, GlusterVolumesListVDSParameters:{runAsync='true', hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 17014ae8

2015-09-22 18:46:49,511 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-3) [] Could not associate brick 'sjcstorage01:/export/vmstore/brick01' of volume '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
<p><span>2015-09-22
18:46:49,515 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3)
[] Could not
associate brick
'sjcstorage02:/export/vmstore/brick01'
of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9'
with correct network
as no gluster
network found in
cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22
18:46:49,516 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3)
[] Could not add
brick
'sjcvhost02:/export/vmstore/brick01'
to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9'
- server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba'
not found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22
18:46:49,516 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3)
[] FINISH,
GlusterVolumesListVDSCommand,
return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
log id: 17014ae8</span></p>
ovirt engine thinks that sjcstorage01 is sjcstorage01; it's all a testbed at the moment and uses short names only, defined in /etc/hosts (the same file is copied to each server for consistency).
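A quick way to confirm that every node really resolves those short names the same way is a small check script run on each host. This is only a sketch, not part of the original setup; the hostname list is taken from this thread, adjust as needed:

#!/usr/bin/env python
# Hypothetical helper: print how this host resolves each peer name, so the
# output can be compared across sjcstorage01/02 and sjcvhost02/03.
import socket

hosts = ["sjcstorage01", "sjcstorage02", "sjcvhost02", "sjcvhost03"]
for name in hosts:
    try:
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(name, None)})
        print("%s -> %s" % (name, ", ".join(addrs)))
    except socket.gaierror as err:
        print("%s -> resolution failed: %s" % (name, err))

If any host prints a different address for the same name, that mismatch would be worth fixing before anything else.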
gluster volume status for vmstore is:
Status of volume: vmstore
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick sjcstorage01:/export/vmstore/brick01   49157     0          Y       7444
Brick sjcstorage02:/export/vmstore/brick01   49157     0          Y       4063
Brick sjcvhost02:/export/vmstore/brick01     49156     0          Y       3243
NFS Server on localhost                      2049      0          Y       3268
Self-heal Daemon on localhost                N/A       N/A        Y       3284
NFS Server on sjcstorage01                   2049      0          Y       7463
Self-heal Daemon on sjcstorage01             N/A       N/A        Y       7472
NFS Server on sjcstorage02                   2049      0          Y       4082
Self-heal Daemon on sjcstorage02             N/A       N/A        Y       4090

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks

vdsm logs from the time the domain is added:
Thread-789::DEBUG::2015-09-22 19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-790::DEBUG::2015-09-22 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState) Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state init -> state preparing
Thread-790::INFO::2015-09-22 19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-790::INFO::2015-09-22 19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-790::DEBUG::2015-09-22 19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare) Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}
Thread-790::DEBUG::2015-09-22 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState) Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state preparing -> state finished
Thread-790::DEBUG::2015-09-22 19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-790::DEBUG::2015-09-22 19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-790::DEBUG::2015-09-22 19:12:07,797::task::993::Storage.TaskManager.Task::(_decref) Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting False
Thread-790::DEBUG::2015-09-22 19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52510
Reactor thread::DEBUG::2015-09-22 19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52510
Reactor thread::DEBUG::2015-09-22 19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52510)
BindingXMLRPC::INFO::2015-09-22 19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52510
Thread-791::INFO::2015-09-22 19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52510 started
Thread-791::INFO::2015-09-22 19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52510 stopped
Thread-792::DEBUG::2015-09-22 19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-793::DEBUG::2015-09-22 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state init -> state preparing
Thread-793::INFO::2015-09-22 19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-793::INFO::2015-09-22 19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-793::DEBUG::2015-09-22 19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}
Thread-793::DEBUG::2015-09-22 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state preparing -> state finished
Thread-793::DEBUG::2015-09-22 19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-793::DEBUG::2015-09-22 19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-793::DEBUG::2015-09-22 19:12:22,833::task::993::Storage.TaskManager.Task::(_decref) Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting False
Thread-793::DEBUG::2015-09-22 19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52511
Reactor thread::DEBUG::2015-09-22 19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52511
Reactor thread::DEBUG::2015-09-22 19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52511)
BindingXMLRPC::INFO::2015-09-22 19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52511
Thread-794::INFO::2015-09-22 19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52511 started
Thread-794::INFO::2015-09-22 19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52511 stopped
Thread-795::DEBUG::2015-09-22 19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7}
Thread-795::DEBUG::2015-09-22 19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state init -> state preparing
Thread-795::INFO::2015-09-22 19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None)
Thread-795::DEBUG::2015-09-22 19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode: None
Thread-795::DEBUG::2015-09-22 19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02 sjcstorage01:/vmstore /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
Thread-795::DEBUG::2015-09-22 19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-796::DEBUG::2015-09-22 19:12:35,707::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-797::DEBUG::2015-09-22 19:12:35,712::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-795::DEBUG::2015-09-22 19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-795::DEBUG::2015-09-22 19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain, 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain, ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
Thread-795::INFO::2015-09-22 19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-795::DEBUG::2015-09-22 19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished: {'statuslist': [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-795::DEBUG::2015-09-22 19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state preparing -> state finished
Thread-795::DEBUG::2015-09-22 19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-795::DEBUG::2015-09-22 19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-795::DEBUG::2015-09-22 19:12:35,722::task::993::Storage.TaskManager.Task::(_decref) Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting False
Thread-795::DEBUG::2015-09-22 19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]
Thread-795::DEBUG::2015-09-22 19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-798::DEBUG::2015-09-22 19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.connectStorageServer' in bridge with {u'connectionParams': [{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7}
Thread-798::DEBUG::2015-09-22 19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state init -> state preparing
Thread-798::INFO::2015-09-22 19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None)
Thread-798::DEBUG::2015-09-22 19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-798::DEBUG::2015-09-22 19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-798::DEBUG::2015-09-22 19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain, 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain, ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
Thread-798::INFO::2015-09-22 19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-798::DEBUG::2015-09-22 19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-798::DEBUG::2015-09-22 19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state preparing -> state finished
Thread-798::DEBUG::2015-09-22 19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-798::DEBUG::2015-09-22 19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-798::DEBUG::2015-09-22 19:12:35,783::task::993::Storage.TaskManager.Task::(_decref) Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting False
Thread-798::DEBUG::2015-09-22 19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
Thread-798::DEBUG::2015-09-22 19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-801::DEBUG::2015-09-22 19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StorageDomain.create' in bridge with {u'name': u'sjcvmstore01', u'domainType': 7, u'domainClass': 1, u'typeArgs': u'sjcstorage01:/vmstore', u'version': u'3', u'storagedomainID': u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
Thread-801::DEBUG::2015-09-22 19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state init -> state preparing
Thread-801::INFO::2015-09-22 19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=7, sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', domainName=u'sjcvmstore01', typeSpecificArg=u'sjcstorage01:/vmstore', domClass=1, domVersion=u'3', options=None)
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-801::DEBUG::2015-09-22 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-801::DEBUG::2015-09-22 19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-801::DEBUG::2015-09-22 19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,821::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-801::DEBUG::2015-09-22 19:12:35,821::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-801::DEBUG::2015-09-22 19:12:35,821::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-801::DEBUG::2015-09-22 19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan
Thread-802::DEBUG::2015-09-22 19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-801::DEBUG::2015-09-22 19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished
Thread-801::DEBUG::2015-09-22 19:12:35,912::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-801::DEBUG::2015-09-22 19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-801::DEBUG::2015-09-22 19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,946::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-801::DEBUG::2015-09-22 19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,948::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-801::ERROR::2015-09-22 19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
Thread-801::ERROR::2015-09-22 19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
Thread-801::DEBUG::2015-09-22 19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-801::DEBUG::2015-09-22 19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
Thread-801::DEBUG::2015-09-22 19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n Cannot process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
Thread-801::WARNING::2015-09-22 19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', ' Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found', ' Cannot process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
Thread-801::DEBUG::2015-09-22 19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-801::ERROR::2015-09-22 19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain) domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 172, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
Thread-801::INFO::2015-09-22 19:12:35,998::nfsSD::69::Storage.StorageDomain::(create) sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 domainName=sjcvmstore01 remotePath=sjcstorage01:/vmstore domClass=1
Thread-801::DEBUG::2015-09-22 19:12:36,015::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Thread-801::ERROR::2015-09-22 19:12:36,067::task::866::Storage.TaskManager.Task::(_setError) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2697, in createStorageDomain
    domVersion)
  File "/usr/share/vdsm/storage/nfsSD.py", line 84, in create
    remotePath, storageType, version)
  File "/usr/share/vdsm/storage/fileSD.py", line 264, in _prepareMetadata
    "create meta file '%s' failed: %s" % (metaFile, str(e)))
StorageDomainMetadataCreationError: Error creating a storage domain's metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output error",)
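Since the failure is the write of the dom_md metadata files ('outbox' here), it may be worth reproducing it by hand, outside vdsm, on the host that fails. The following is only a sketch under the assumptions visible in the logs above (volume sjcstorage01:/vmstore, the same backup-volfile-servers option, run as root); as far as I can tell vdsm writes this metadata with direct I/O, which is why the second step uses dd with oflag=direct:

#!/usr/bin/env python
# Hypothetical manual test: mount the volume the way vdsm does, then try a
# buffered write and a direct-I/O write of a small probe file.
import os
import subprocess
import tempfile

volume = "sjcstorage01:/vmstore"                              # from the logs above
opts = "backup-volfile-servers=sjcstorage02:sjcvhost02"       # same option vdsm passes
mnt = tempfile.mkdtemp(prefix="gluster-probe-")
probe = os.path.join(mnt, "__metadata_probe__")

subprocess.check_call(["mount", "-t", "glusterfs", "-o", opts, volume, mnt])
try:
    # buffered write - usually fine even when direct I/O is broken
    with open(probe, "wb") as f:
        f.write(b"\0" * 4096)
        f.flush()
        os.fsync(f.fileno())
    # direct write - this is the path that returns EIO in the vdsm log
    subprocess.check_call(["dd", "if=/dev/zero", "of=" + probe,
                           "bs=4096", "count=1", "oflag=direct"])
    print("both writes succeeded; EIO not reproducible by hand")
finally:
    if os.path.exists(probe):
        os.unlink(probe)
    subprocess.check_call(["umount", mnt])
    os.rmdir(mnt)

If the dd step fails with an I/O error while the buffered write works, the problem is at the gluster layer rather than in ovirt itself.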
Thread-801::DEBUG::2015-09-22 19:12:36,067::task::885::Storage.TaskManager.Task::(_run) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run: d2d29352-8677-45cb-a4ab-06aa32cf1acb (7, u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01', u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping task
Thread-801::DEBUG::2015-09-22 19:12:36,067::task::1246::Storage.TaskManager.Task::(stop) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in state preparing (force False)
Thread-801::DEBUG::2015-09-22 19:12:36,067::task::993::Storage.TaskManager.Task::(_decref) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting True
Thread-801::INFO::2015-09-22 19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task is aborted: "Error creating a storage domain's metadata" - code 362
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare: aborted: Error creating a storage domain's metadata
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::993::Storage.TaskManager.Task::(_decref) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting True
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort: force False
Thread-801::DEBUG::2015-09-22 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state preparing -> state aborting
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting: recover policy none
Thread-801::DEBUG::2015-09-22 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState) Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state aborting -> state failed
Thread-801::DEBUG::2015-09-22 19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-801::DEBUG::2015-09-22 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-801::ERROR::2015-09-22 19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': 'Error creating a storage domain\'s metadata: ("create meta file \'outbox\' failed: [Errno 5] Input/output error",)', 'code': 362}}
Thread-801::DEBUG::2015-09-22 19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-807::DEBUG::2015-09-22 19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams': [{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 7}
Thread-807::DEBUG::2015-09-22 19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState) Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state init -> state preparing
Thread-807::INFO::2015-09-22 19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=7, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection': u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}], options=None)
Thread-807::DEBUG::2015-09-22 19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
Thread-807::DEBUG::2015-09-22 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-807::DEBUG::2015-09-22 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-807::DEBUG::2015-09-22 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-807::DEBUG::2015-09-22 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-807::DEBUG::2015-09-22 19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-807::DEBUG::2015-09-22 19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-807::DEBUG::2015-09-22 19:12:36,258::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-807::DEBUG::2015-09-22 19:12:36,258::misc::733::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-807::DEBUG::2015-09-22 19:12:36,258::misc::736::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-807::DEBUG::2015-09-22 19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan
Thread-807::DEBUG::2015-09-22 19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished
Thread-807::DEBUG::2015-09-22 19:12:36,350::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-807::DEBUG::2015-09-22 19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
Thread-807::DEBUG::2015-09-22 19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-807::DEBUG::2015-09-22 19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-807::DEBUG::2015-09-22 19:12:36,383::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-807::DEBUG::2015-09-22 19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-807::DEBUG::2015-09-22 19:12:36,386::misc::743::Storage.SamplingMethod::(__call__) Returning last result
Thread-807::INFO::2015-09-22 19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and protect: disconnectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-807::DEBUG::2015-09-22 19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare) Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished: {'statuslist': [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
Thread-807::DEBUG::2015-09-22 19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState) Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state preparing -> state finished
Thread-807::DEBUG::2015-09-22 19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-807::DEBUG::2015-09-22 19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-807::DEBUG::2015-09-22 19:12:36,387::task::993::Storage.TaskManager.Task::(_decref) Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting False
Thread-807::DEBUG::2015-09-22 19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.disconnectStorageServer' in bridge with [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
Thread-807::DEBUG::2015-09-22 19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-808::DEBUG::2015-09-22 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState) Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state init -> state preparing
Thread-808::INFO::2015-09-22 19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-808::INFO::2015-09-22 19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-808::DEBUG::2015-09-22 19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare) Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}
Thread-808::DEBUG::2015-09-22 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState) Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state preparing -> state finished
Thread-808::DEBUG::2015-09-22 19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-808::DEBUG::2015-09-22 19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-808::DEBUG::2015-09-22 19:12:37,868::task::993::Storage.TaskManager.Task::(_decref) Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting False
Thread-808::DEBUG::2015-09-22 19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52512
Reactor thread::DEBUG::2015-09-22 19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52512
Reactor thread::DEBUG::2015-09-22 19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52512)
BindingXMLRPC::INFO::2015-09-22 19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52512
Thread-809::INFO::2015-09-22 19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52512 started
Thread-809::INFO::2015-09-22 19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52512 stopped
Thread-810::DEBUG::2015-09-22 19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-811::DEBUG::2015-09-22 19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState) Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state init -> state preparing
Thread-811::INFO::2015-09-22 19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-811::INFO::2015-09-22 19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-811::DEBUG::2015-09-22 19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare) Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}
Thread-811::DEBUG::2015-09-22 19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState) Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state preparing -> state finished
Thread-811::DEBUG::2015-09-22 19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-811::DEBUG::2015-09-22 19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-811::DEBUG::2015-09-22 19:12:52,903::task::993::Storage.TaskManager.Task::(_decref) Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting False
Thread-811::DEBUG::2015-09-22 19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52513
Reactor thread::DEBUG::2015-09-22 19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52513
Reactor thread::DEBUG::2015-09-22 19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52513)
BindingXMLRPC::INFO::2015-09-22 19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52513
Thread-812::INFO::2015-09-22 19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52513 started
Thread-812::INFO::2015-09-22 19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52513 stopped
Thread-813::DEBUG::2015-09-22 19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Thread-814::DEBUG::2015-09-22 19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState) Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state init -> state preparing
Thread-814::INFO::2015-09-22 19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-814::INFO::2015-09-22 19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-814::DEBUG::2015-09-22 19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare) Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}
Thread-814::DEBUG::2015-09-22 19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState) Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state preparing -> state finished
Thread-814::DEBUG::2015-09-22 19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-814::DEBUG::2015-09-22 19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-814::DEBUG::2015-09-22 19:13:07,935::task::993::Storage.TaskManager.Task::(_decref) Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting False
Thread-814::DEBUG::2015-09-22 19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
Reactor thread::INFO::2015-09-22 19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepting connection from 127.0.0.1:52515
Reactor thread::DEBUG::2015-09-22 19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2015-09-22 19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read) Detected protocol xml from 127.0.0.1:52515
Reactor thread::DEBUG::2015-09-22 19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml over http detected from ('127.0.0.1', 52515)
BindingXMLRPC::INFO::2015-09-22 19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting request handler for 127.0.0.1:52515
Thread-815::INFO::2015-09-22 19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52515 started
Thread-815::INFO::2015-09-22 19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests) Request handler for 127.0.0.1:52515 stopped
Thread-816::DEBUG::2015-09-22 19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send) Sending response

gluster logs:
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host sjcstorage01
  5:     option remote-subvolume /export/vmstore/brick01
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:
 10: volume vmstore-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host sjcstorage02
 14:     option remote-subvolume /export/vmstore/brick01
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:
 19: volume vmstore-client-2
 20:     type protocol/client
 21:     option ping-timeout 42
 22:     option remote-host sjcvhost02
 23:     option remote-subvolume /export/vmstore/brick01
 24:     option transport-type socket
 25:     option send-gids true
 26: end-volume
 27:
 28: volume vmstore-replicate-0
 29:     type cluster/replicate
 30:     option arbiter-count 1
 31:     subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
 32: end-volume
 33:
 34: volume vmstore-dht
 35:     type cluster/distribute
 36:     subvolumes vmstore-replicate-0
 37: end-volume
 38:
 39: volume vmstore-write-behind
 40:     type performance/write-behind
 41:     subvolumes vmstore-dht
 42: end-volume
 43:
 44: volume vmstore-read-ahead
 45:     type performance/read-ahead
 46:     subvolumes vmstore-write-behind
 47: end-volume
 48:
 49: volume vmstore-readdir-ahead
 50:     type performance/readdir-ahead
 52: end-volume
 53:
 54: volume vmstore-io-cache
 55:     type performance/io-cache
 56:     subvolumes vmstore-readdir-ahead
 57: end-volume
 58:
 59: volume vmstore-quick-read
 60:     type performance/quick-read
 61:     subvolumes vmstore-io-cache
 62: end-volume
 63:
 64: volume vmstore-open-behind
 65:     type performance/open-behind
 66:     subvolumes vmstore-quick-read
 67: end-volume
 68:
 69: volume vmstore-md-cache
 70:     type performance/md-cache
 71:     subvolumes vmstore-open-behind
 72: end-volume
 73:
 74: volume vmstore
 75:     type debug/io-stats
 76:     option latency-measurement off
 77:     option count-fop-hits off
 78:     subvolumes vmstore-md-cache
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes vmstore
 84: end-volume
 85:
+------------------------------------------------------------------------------+
[2015-09-22 05:29:07.586205] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:29:07.586325] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:29:07.586480] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:29:07.595052] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:29:07.595397] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:29:07.595576] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:29:07.595721] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0: Connected to vmstore-client-0, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:29:07.595738] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:29:07.596044] I [MSGID: 108005] [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came back up; going online.
[2015-09-22 05:29:07.596170] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1: Connected to vmstore-client-1, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:29:07.596189] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1: Server and Client lk-version numbers are not same, reopening the fds
<p><span>[2015-09-22
05:29:07.596495] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2:
Connected to
vmstore-client-2,
attached to remote
volume
'/export/vmstore/brick01'.</span></p>
<p><span>[2015-09-22
05:29:07.596506] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2:
Server and Client
lk-version numbers
are not same,
reopening the fds</span></p>
<p><span>[2015-09-22
05:29:07.608758] I
[fuse-bridge.c:5053:fuse_graph_setup]
0-fuse: switched to
graph 0</span></p>
<p><span>[2015-09-22
05:29:07.608910] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0:
Server lk version =
1</span></p>
<p><span>[2015-09-22
05:29:07.608936] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1:
Server lk version =
1</span></p>
<p><span>[2015-09-22
05:29:07.608950] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2:
Server lk version =
1</span></p>
<p><span>[2015-09-22
05:29:07.609695] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker]
0-epoll: Started
thread with index 2</span></p>
<p><span>[2015-09-22
05:29:07.609868] I
[fuse-bridge.c:3979:fuse_init]
0-glusterfs-fuse:
FUSE inited with
protocol versions:
glusterfs 7.22
kernel 7.22</span></p>
<p><span>[2015-09-22
05:29:07.616577] I
[MSGID: 109063]
[dht-layout.c:702:dht_layout_normalize]
0-vmstore-dht: Found
anomalies in / (gfid
=
00000000-0000-0000-0000-000000000001).
Holes=1 overlaps=0</span></p>
<p><span>[2015-09-22
05:29:07.620230] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht:
Setting layout of /
with [Subvol_name:
vmstore-replicate-0,
Err: -1 , Start: 0 ,
Stop: 4294967295 ,
Hash: 1 ], </span></p>
<p><span>[2015-09-22
05:29:08.122415] W
[fuse-bridge.c:1230:fuse_err_cbk]
0-glusterfs-fuse:
26: REMOVEXATTR()
/__DIRECT_IO_TEST__
=> -1 (No data
available)</span></p>
<p><span>[2015-09-22
05:29:08.<a
moz-do-not-send="true"
href="tel:137359"
value="+61137359"
target="_blank">137359</a>]
I [MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht:
Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2
with [Subvol_name:
vmstore-replicate-0,
Err: -1 , Start: 0 ,
Stop: 4294967295 ,
Hash: 1 ], </span></p>
<p><span>[2015-09-22
05:29:08.145835] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht:
Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md
with [Subvol_name:
vmstore-replicate-0,
Err: -1 , Start: 0 ,
Stop: 4294967295 ,
Hash: 1 ], </span></p>
<p><span>[2015-09-22
05:30:57.897819] I
[MSGID: 100030]
[glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs:
Started running
/usr/sbin/glusterfs
version 3.7.4 (args:
/usr/sbin/glusterfs
--volfile-server=sjcvhost02
--volfile-server=sjcstorage01
--volfile-server=sjcstorage02
--volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p><span>[2015-09-22
05:30:57.909889] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker]
0-epoll: Started
thread with index 1</span></p>
<p><span>[2015-09-22
05:30:57.923087] I
[MSGID: 114020]
[client.c:2118:notify]
0-vmstore-client-0:
parent translators
are ready,
attempting connect
on transport</span></p>
<p><span>[2015-09-22
05:30:57.925701] I
[MSGID: 114020]
[client.c:2118:notify]
0-vmstore-client-1:
parent translators
are ready,
attempting connect
on transport</span></p>
<p><span>[2015-09-22
05:30:57.927984] I
[MSGID: 114020]
[client.c:2118:notify]
0-vmstore-client-2:
parent translators
are ready,
attempting connect
on transport</span></p>
<p><span>Final graph:</span></p>
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host sjcstorage01
  5:     option remote-subvolume /export/vmstore/brick01
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:
 10: volume vmstore-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host sjcstorage02
 14:     option remote-subvolume /export/vmstore/brick01
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:
 19: volume vmstore-client-2
 20:     type protocol/client
 21:     option ping-timeout 42
 22:     option remote-host sjcvhost02
 23:     option remote-subvolume /export/vmstore/brick01
 24:     option transport-type socket
 25:     option send-gids true
 26: end-volume
 27:
 28: volume vmstore-replicate-0
 29:     type cluster/replicate
 30:     option arbiter-count 1
 31:     subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
 32: end-volume
 33:
 34: volume vmstore-dht
 35:     type cluster/distribute
 36:     subvolumes vmstore-replicate-0
 37: end-volume
 38:
 39: volume vmstore-write-behind
 40:     type performance/write-behind
 41:     subvolumes vmstore-dht
 42: end-volume
 43:
 44: volume vmstore-read-ahead
 45:     type performance/read-ahead
 46:     subvolumes vmstore-write-behind
 47: end-volume
 48:
 49: volume vmstore-readdir-ahead
 50:     type performance/readdir-ahead
 51:     subvolumes vmstore-read-ahead
 52: end-volume
 53:
 54: volume vmstore-io-cache
 55:     type performance/io-cache
 56:     subvolumes vmstore-readdir-ahead
 57: end-volume
 58:
 59: volume vmstore-quick-read
 60:     type performance/quick-read
 61:     subvolumes vmstore-io-cache
 62: end-volume
 63:
 64: volume vmstore-open-behind
 65:     type performance/open-behind
 66:     subvolumes vmstore-quick-read
 67: end-volume
 68:
 69: volume vmstore-md-cache
 70:     type performance/md-cache
 71:     subvolumes vmstore-open-behind
 72: end-volume
 73:
 74: volume vmstore
 75:     type debug/io-stats
 76:     option latency-measurement off
 77:     option count-fop-hits off
 78:     subvolumes vmstore-md-cache
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes vmstore
 84: end-volume
 85:
+------------------------------------------------------------------------------+
[2015-09-22 05:30:57.934021] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:30:57.934145] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:30:57.934491] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:30:57.942198] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:30:57.942545] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:30:57.942659] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:30:57.942797] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0: Connected to vmstore-client-0, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:30:57.942808] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:30:57.943036] I [MSGID: 108005] [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came back up; going online.
[2015-09-22 05:30:57.943078] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1: Connected to vmstore-client-1, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:30:57.943086] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:30:57.943292] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2: Connected to vmstore-client-2, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:30:57.943302] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:30:57.953887] I [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-09-22 05:30:57.954071] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0: Server lk version = 1
[2015-09-22 05:30:57.954105] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1: Server lk version = 1
[2015-09-22 05:30:57.954124] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2: Server lk version = 1
[2015-09-22 05:30:57.955282] I [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2015-09-22 05:30:57.955738] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-09-22 05:30:57.970232] I [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:30:57.970834] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f1872a09785] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f1872a09609] ) 0-: received signum (15), shutting down
[2015-09-22 05:30:57.970848] I [fuse-bridge.c:5595:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:30:58.420973] I [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:30:58.421355] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f826933e785] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f826933e609] ) 0-: received signum (15), shutting down
[2015-09-22 05:30:58.421369] I [fuse-bridge.c:5595:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:31:09.534410] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs --volfile-server=sjcvhost02 --volfile-server=sjcstorage01 --volfile-server=sjcstorage02 --volfile-id=/vmstore /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:31:09.545686] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-22 05:31:09.553019] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:09.555552] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:09.557989] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2: parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
  1: volume vmstore-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host sjcstorage01
  5:     option remote-subvolume /export/vmstore/brick01
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:
 10: volume vmstore-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host sjcstorage02
 14:     option remote-subvolume /export/vmstore/brick01
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:
 19: volume vmstore-client-2
 20:     type protocol/client
 21:     option ping-timeout 42
 22:     option remote-host sjcvhost02
 23:     option remote-subvolume /export/vmstore/brick01
 24:     option transport-type socket
 25:     option send-gids true
 26: end-volume
 27:
 28: volume vmstore-replicate-0
 29:     type cluster/replicate
 30:     option arbiter-count 1
 31:     subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
 32: end-volume
 33:
 34: volume vmstore-dht
 35:     type cluster/distribute
 36:     subvolumes vmstore-replicate-0
 37: end-volume
 38:
 39: volume vmstore-write-behind
 40:     type performance/write-behind
 41:     subvolumes vmstore-dht
 42: end-volume
 43:
 44: volume vmstore-read-ahead
 45:     type performance/read-ahead
 46:     subvolumes vmstore-write-behind
 47: end-volume
 48:
 49: volume vmstore-readdir-ahead
 50:     type performance/readdir-ahead
 51:     subvolumes vmstore-read-ahead
 52: end-volume
 53:
 54: volume vmstore-io-cache
 55:     type performance/io-cache
 56:     subvolumes vmstore-readdir-ahead
 57: end-volume
 58:
 59: volume vmstore-quick-read
 60:     type performance/quick-read
 61:     subvolumes vmstore-io-cache
 62: end-volume
 63:
 64: volume vmstore-open-behind
 65:     type performance/open-behind
 66:     subvolumes vmstore-quick-read
 67: end-volume
 68:
 69: volume vmstore-md-cache
 70:     type performance/md-cache
 71:     subvolumes vmstore-open-behind
 72: end-volume
 73:
 74: volume vmstore
 75:     type debug/io-stats
 76:     option latency-measurement off
 77:     option count-fop-hits off
 78:     subvolumes vmstore-md-cache
 79: end-volume
 80:
 81: volume meta-autoload
 82:     type meta
 83:     subvolumes vmstore
 84: end-volume
 85:
+------------------------------------------------------------------------------+
[2015-09-22 05:31:09.563262] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0: changing port to 49153 (from 0)
[2015-09-22 05:31:09.563431] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1: changing port to 49153 (from 0)
[2015-09-22 05:31:09.563877] I [rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2: changing port to 49153 (from 0)
[2015-09-22 05:31:09.572443] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:31:09.572599] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:31:09.572742] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2015-09-22 05:31:09.573165] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1: Connected to vmstore-client-1, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:31:09.573186] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.573395] I [MSGID: 108005] [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume 'vmstore-client-1' came back up; going online.
[2015-09-22 05:31:09.573427] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0: Connected to vmstore-client-0, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:31:09.573435] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.573754] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2: Connected to vmstore-client-2, attached to remote volume '/export/vmstore/brick01'.
[2015-09-22 05:31:09.573783] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2015-09-22 05:31:09.577192] I [fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to graph 0
[2015-09-22 05:31:09.577302] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1: Server lk version = 1
[2015-09-22 05:31:09.577325] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0: Server lk version = 1
[2015-09-22 05:31:09.577339] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2: Server lk version = 1
[2015-09-22 05:31:09.578125] I [fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2015-09-22 05:31:09.578636] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2015-09-22 05:31:10.073698] I [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
[2015-09-22 05:31:10.073977] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f6b9d0f2785] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f6b9d0f2609] ) 0-: received signum (15), shutting down
[2015-09-22 05:31:10.073993] I [fuse-bridge.c:5595:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
[2015-09-22 05:31:20.184700] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs --volfile-server=sjcvhost02 --volfile-server=sjcstorage01 --volfile-server=sjcstorage02 --volfile-id=/vmstore /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
[2015-09-22 05:31:20.194928] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-22 05:31:20.200701] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:20.203110] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1: parent translators are ready, attempting connect on transport
[2015-09-22 05:31:20.205708] I [MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2: parent translators are ready, attempting connect on transport

Final graph:

Hope this helps.

thanks again

Brett Stevens
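
(The log above shows the FUSE client being started with --volfile-server/--volfile-id arguments and then unmounting almost immediately. One way to reproduce the mount by hand on the new host, outside of vdsm, is sketched below; the mount point is only an example and the backup-volfile-servers option is optional.)

    mkdir -p /mnt/vmstore-test
    mount -t glusterfs -o backup-volfile-servers=sjcstorage01:sjcstorage02 \
        sjcvhost02:/vmstore /mnt/vmstore-test
    # the matching client log is written under /var/log/glusterfs/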
On Tue, Sep 22, 2015 at 10:14 PM, Sahina Bose <sabose(a)redhat.com> wrote:

> On 09/22/2015 02:17 PM, Brett Stevens wrote:
>
<div dir="ltr">Hi.
First time on
the lists.
I've searched
for this but
no luck so
sorry if this
has been
covered
before.
<div><br>
</div>
<div>Im
working with
the latest 3.6
beta with the
following
infrastructure. </div>
<div><br>
</div>
<div>1
management
host (to be
used for a
number of
tasks so chose
not to use
self hosted,
we are a
school and
will need to
keep an eye on
hardware
costs)</div>
<div>2 compute
nodes</div>
<div>2 gluster
nodes</div>
<div><br>
</div>
<div>so far
built one
gluster volume
using the
gluster cli to
give me 2
nodes and one
arbiter node
(management
host)</div>
<div><br>
</div>
<div>so far,
every time I
create a
volume, it
shows up
strait away on
the ovirt gui.
however no
matter what I
try, I cannot
create or
import it as a
data domain. </div>
<div><br>
</div>
<div>the
current error
in the ovirt
gui is "Error
while
executing
action
AddGlusterFsStorageDomain:
Error creating
a storage
domain's
metadata"</div>
</div>
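
(For reference: a two-brick replicated volume with one arbiter brick, matching the volume graph shown earlier in this thread, is typically created with a gluster CLI command along these lines. The hostnames and brick path are taken from the logs above; the exact command is illustrative, not quoted from the thread.)

    gluster volume create vmstore replica 3 arbiter 1 \
        sjcstorage01:/export/vmstore/brick01 \
        sjcstorage02:/export/vmstore/brick01 \
        sjcvhost02:/export/vmstore/brick01
    gluster volume start vmstore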
</blockquote>
<br>
</span> Please
provide vdsm and
gluster logs<span><br>
<br>
>> The logs are continuously rolling the following errors around:
>>
>> Scheduler_Worker-53) [] START, GlusterVolumesListVDSCommand(HostName = sjcstorage02, GlusterVolumesListVDSParameters:{runAsync='true', hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf
>>
>> 2015-09-22 03:57:29,903 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-53) [] Could not associate brick 'sjcstorage01:/export/vmstore/brick01' of volume '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> What is the hostname provided in oVirt engine for sjcstorage01? Does this
> host have multiple NICs?
>
> Could you provide the output of gluster volume info?
>
> Please note that these errors are not related to the error in creating the
> storage domain. However, these errors could prevent you from monitoring the
> state of the gluster volume from oVirt.
>
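
(The requested information can usually be gathered with commands and log paths along these lines; the paths are the stock defaults and may differ on a given installation.)

    gluster volume info vmstore      # run on any gluster node
    gluster volume status vmstore    # brick and port status
    # vdsm log on the hypervisor host:     /var/log/vdsm/vdsm.log
    # gluster client (mount) logs:         /var/log/glusterfs/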
>> 2015-09-22 03:57:29,905 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-53) [] Could not associate brick 'sjcstorage02:/export/vmstore/brick01' of volume '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler_Worker-53) [] Could not add brick 'sjcvhost02:/export/vmstore/brick01' to volume '878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-53) [] FINISH, GlusterVolumesListVDSCommand, return: {878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1}, log id: 24198fbf
>>
>> I'm new to ovirt and gluster, so any help would be great.
>>
>> thanks
>>
>> Brett Stevens
--------------070407060509090702050906--
9 years, 1 month
NAT in oVirt
by lof yer
I am using oVirt 3.5 and have configured NAT via the extnet hook.
I'm running into the following situation:
vm: 192.168.122.160
host: 192.168.0.120
gateway: 192.168.0.1
The VM can reach the outside network but cannot reach the host.
I've diffed an original libvirt VM and an oVirt-based VM, but I cannot see any
obvious difference.
Please help me out with this...
Thank you very much.
This is the NAT configuration.
<network>
<name>default</name>
<uuid>ea0eb0cf-b507-451c-9f0d-919675ea7d8a</uuid>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:c1:18:e4'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
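(For completeness, this is how such a definition is usually loaded and activated with the plain libvirt tools; the file name is only an example.)
virsh net-define /tmp/default-nat.xml
virsh net-start default
virsh net-autostart default
virsh net-dumpxml default   # confirm what libvirt is actually running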
9 years, 1 month
security alerts from host node
by Robert Story
--Sig_/bD5l5X52ZlZIy9ylmDyZBL2
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
I'm getting hundreds of email messages from one of my hosts, several per
minute, with a subject of "*** SECURITY information for ov1.example ***
vdsm : problem with defaults entries ; TTY=unknown ; PWD=/ ;
Any ideas on how I can fix this?
Robert
-- 
Senior Software Engineer @ Parsons
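(These alerts are normally the mail sudo sends when it fails to apply a "Defaults" entry while the vdsm user runs a command. A reasonable first check, assuming a stock oVirt host layout, is to validate the sudoers configuration; the exact snippet name may differ between versions.)
visudo -c                           # syntax-check all sudoers files
cat /etc/sudoers.d/50_vdsm          # the snippet installed for vdsm
grep -n Defaults /etc/sudoers /etc/sudoers.d/*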
--Sig_/bD5l5X52ZlZIy9ylmDyZBL2--
9 years, 1 month
[Users] Cant assign Quotas to groups anymore?
by Maurice James
--_c1bf2036-7c19-43f7-bca5-b1f997d755b7_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
3.4.0-0.13.rc.el6
See the error below:
User admin failed to grant permission for Role QuotaConsumer on Quota MobilePolicy to User/Group Non interactive user.
--_c1bf2036-7c19-43f7-bca5-b1f997d755b7_--
9 years, 1 month
[ANN] oVirt 3.6.0 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability
of the First Release Candidate of oVirt 3.6 for testing, as of September
28th, 2015.
This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar),
Fedora 21 and Fedora 22.
Highly experimental support for Debian 8.1 Jessie has been added too.
This release of oVirt 3.6.0 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
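(Very roughly, and with the release notes remaining the authoritative reference, installation on an el6/el7 host usually starts from the ovirt-release package; the pre-release repository may need to be enabled explicitly.)
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
yum install ovirt-engine
engine-setup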
New oVirt Node ISO and oVirt Live ISO will be available soon as well[2].
Please note that mirrors[3] usually need about one day before they are
synchronized.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.6_Release_Notes
[2] http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
9 years, 1 month
Take a quick survey on oVirt VM Level Dashboards
by Serena Doyle
Our UX team is looking for #oVirt VM Level Dashboard feedback. Take the
survey today! http://ow.ly/SJ5PF
Thanks!
- Serena
--
- Serena Chechile Doyle
*UXD | Design Architect*
*Red Hat*
*Cell* 508-769-7715 | *IRC* - serena | *Skype* - serenamarie125 | *Twitter* -
@serenamarie125
9 years, 1 month
Hyper-V in a guest
by Alan Murrell
Hello,
I am running oVirt 3.5.3.1. I am trying to set up a virtual lab to
emulate a client's environment to do some testing. They are running a
Hyper-V environment.
I created a Microsoft Server 2008R2 guest. When I tried to install
the Hyper-V role, the system said that I needed to have hardware
virtualization support enabled in the BIOS.
I came across some posts about enabling "Hyper-V enlightenment" in KVM, but:
1.) I am not sure this is what I am actually looking for. It looks
like it is to make some Windows guests perform a bit better (as if
they were in a Hyper-V environment); and
2.) Assuming this is what I am looking for, I am not sure how to
enable it in oVirt (I saw that there were a couple of patches to
support it in oVirt 3.5). I am not seeing anything in the GUI. Is
this something that would need to be edited on the CLI? If so, how?
If it can be enabled for a guest in the GUI (via its "Edit"
settings?), where do I do so?
Thanks! :-)
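(In case it helps: the BIOS message inside the guest usually means nested virtualization is not enabled on the hypervisor, independently of the Hyper-V enlightenments. A sketch for an Intel host follows; the file name is only an example, and the guest also needs a CPU type that passes the virtualization flags through.)
cat /sys/module/kvm_intel/parameters/nested     # Y or 1 means enabled
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
# reload the module with no VMs running, or reboot the host
modprobe -r kvm_intel && modprobe kvm_intel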
9 years, 1 month
Importing disk images with import-to-ovirt.pl = Authentication Error
by Adrian Garay
--------------060608090400030400020601
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Transfer-Encoding: 7bit
We have an existing setup consisting of virt-manager/libvirt/KVM
hypervisors that we're planning to migrate to Ovirt. Given that all of
our guests are existing KVM/virtio images, it does not make sense for us
to virt-v2v them over because of the ensuing registry/kernel/etc.
changes that may be unnecessarily applied.
One solution to this is the import-to-ovirt.pl script created by
Redhat's maintainer of virt-v2v.
https://rwmj.wordpress.com/2015/09/18/importing-kvm-guests-to-ovirt-or-rhev/
Running this script on a host against a disk image imports the image
into an export storage domain and pairs it with a basic VM configuration, so
the guest can then be imported easily afterward; at least, that is what it
should do.
Our current test setup consists of hosted-engine Ovirt 3.5.4 on Centos
7.1. When attempting to import an image using this script on the host
we get the following errors:
libvirt needs authentication to connect to libvirt URI qemu:///system
libvirt: XML-RPC error : authentication failed: authentication failed
could not connect to libvirt (URI = qemu:///system): authentication
failed: authentication failed at ./import-to-ovirt.pl line 230.
I understand that diagnosing this script is well outside of the context
of this mailing list, but this is clearly just an authentication
problem. We've tried the root, ovirt and the admin@internal credentials
and none of them work. Is there a default login/password to access
libvirt on an Ovirt host?
Our system works as it should otherwise.
Thanks in advance to anyone that can shed light here.
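(A hedged pointer rather than a definitive answer: on a vdsm-configured host, qemu:///system is protected by SASL. Historically vdsm created a SASL user vdsm@ovirt with the password "shibboleth", but this may differ between versions; an extra SASL user can also be added explicitly.)
sasldblistusers2 -f /etc/libvirt/passwd.db    # list existing libvirt SASL users
saslpasswd2 -a libvirt someuser               # add or reset a user for the script to use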
--------------060608090400030400020601--
9 years, 1 month
ovirt 3.6.0 Sixth Beta Release
by Rudi Schmitz
I have a simple setup. The machine is a node. I am going to deploy from iSCSI
on the network, but I don't get that far. I installed the 3.6 sixth beta
release node ISO on a machine. The IP address is set up. Then I try to deploy
the hosted engine over SSH. I enter a local HTTP URL of the CentOS 7 ISO and
hit Deploy.
The TUI stops and I get:
An error appeared in the UI: AttributeError("'TransactionProgressDialog'
object has no attribute 'event'",)
Press ENTER to logout ...
or enter 's' to drop to shell
The HTTP ISO URL does exist and is reachable. Is hosted-engine deployment on a
node working for others?
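(One possible cross-check while the TUI path fails: the same deployment can be started from a root shell on the node, which at least leaves a full traceback in the setup logs.)
hosted-engine --deploy
# setup logs end up under /var/log/ovirt-hosted-engine-setup/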
9 years, 2 months