[ovirt-users] Cannot mount gluster storage data

Jean-Michel FRANCOIS jmfrancois at anaxys.com
Fri Sep 25 15:16:45 UTC 2015


Hi Ravi,

Thanks for looking at my problem.
The two hosts are CentOS 6.6; the first one was installed a year ago 
and the second one this week.

Jean-Michel
>
> On 09/25/2015 12:32 PM, Jean-Michel FRANCOIS wrote:
>> Hi Ovirt users,
>>
>> I'm running oVirt 3.4 hosted-engine with Gluster data storage.
>> When I add a new host (CentOS 6.6), the data storage domain (mounted as 
>> glusterfs) cannot be mounted.
>> I have the following errors in the Gluster client log file:
>> [2015-09-24 12:27:22.636221] I [MSGID: 101190] 
>> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started 
>> thread with index 1
>> [2015-09-24 12:27:22.636588] W [socket.c:588:__socket_rwv] 
>> 0-glusterfs: readv on 172.16.0.5:24007 failed (No data available)
>> [2015-09-24 12:27:22.637307] E [rpc-clnt.c:362:saved_frames_unwind] 
>> (--> 
>> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1eb)[0x7f427fb3063b] 
>> (--> 
>> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f427f8fc1d7] 
>> (--> 
>> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f427f8fc2ee] 
>> (--> 
>> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f427f8fc3bb] 
>> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1c2)[0x7f427f8fc9f2] 
>> ))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) 
>> op(GETSPEC(2)) called at 2015-09-24 12:27:22.636344 (xid=0x1)
>> [2015-09-24 12:27:22.637333] E 
>> [glusterfsd-mgmt.c:1604:mgmt_getspec_cbk] 0-mgmt: failed to fetch 
>> volume file (key:/data)
>> [2015-09-24 12:27:22.637360] W [glusterfsd.c:1219:cleanup_and_exit] 
>> (-->/usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x20e) 
>> [0x7f427f8fc1fe] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3f2) 
>> [0x40d5d2] -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 
>> 0-: received signum (0), shutting down
>> [2015-09-24 12:27:22.637375] I [fuse-bridge.c:5595:fini] 0-fuse: 
>> Unmounting '/rhev/data-center/mnt/glusterSD/172.16.0.5:_data'.
>> [2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit] 
>> (-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51] 
>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d] 
>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: 
>> received signum (15), shutting down
>> [2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit] 
>> (-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51] 
>> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d] 
>> -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: 
>> received signum (15), shutting down
>> And nothing on the server side.
>>
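(For anyone digging into a similar failure: a hedged way to get more detail
than the client log above is to retry the fuse mount by hand with debug
logging. The mount point and log path below are only examples.)

    # retry the fuse mount manually with verbose client-side logging
    mount -t glusterfs -o log-level=DEBUG,log-file=/tmp/data-mount.log \
          172.16.0.5:/data /mnt/data-test
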
>
> This does look like an op-version issue. Adding Atin for any possible 
> help.
> -Ravi
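
(A quick, hedged way to compare the operating versions across the nodes,
assuming the default glusterd working directory. The set command is left
commented out: the op-version can only be raised, and only once every server
in the pool runs a release that supports the target value, which is why the
value shown is purely illustrative.)

    # show the installed release and the persisted cluster op-version
    glusterfs --version
    grep operating-version /var/lib/glusterd/glusterd.info

    # raising it later, once all servers support the target value:
    # gluster volume set all cluster.op-version 30600
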
>
>> I suppose it is a version issue, since on the server side I have:
>> glusterfs-api-3.6.3-1.el6.x86_64
>> glusterfs-fuse-3.6.3-1.el6.x86_64
>> glusterfs-libs-3.6.3-1.el6.x86_64
>> glusterfs-3.6.3-1.el6.x86_64
>> glusterfs-cli-3.6.3-1.el6.x86_64
>> glusterfs-rdma-3.6.3-1.el6.x86_64
>> glusterfs-server-3.6.3-1.el6.x86_64
>>
>> and on the new host:
>> glusterfs-3.7.4-2.el6.x86_64
>> glusterfs-api-3.7.4-2.el6.x86_64
>> glusterfs-libs-3.7.4-2.el6.x86_64
>> glusterfs-fuse-3.7.4-2.el6.x86_64
>> glusterfs-cli-3.7.4-2.el6.x86_64
>> glusterfs-server-3.7.4-2.el6.x86_64
>> glusterfs-client-xlators-3.7.4-2.el6.x86_64
>> glusterfs-rdma-3.7.4-2.el6.x86_64
>>
>> But since it is a production system, I'm not confident about 
>> performing a Gluster server upgrade.
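
(If upgrading the servers is off the table, one hedged alternative is to keep
the new host on the same client release as the servers. The package names
match the lists above; whether 3.6.3-1.el6 is still available in the enabled
repos, and what else yum will want to change in the transaction, are
assumptions to check before confirming.)

    # roll the client stack back to the server release; yum shows the full
    # transaction and asks for confirmation first
    yum downgrade glusterfs-3.6.3-1.el6 glusterfs-libs-3.6.3-1.el6 \
        glusterfs-fuse-3.6.3-1.el6 glusterfs-api-3.6.3-1.el6 \
        glusterfs-cli-3.6.3-1.el6 glusterfs-rdma-3.6.3-1.el6
    # note: glusterfs-client-xlators only exists in 3.7.x, so yum may need
    # to remove it as part of the same transaction

    # optionally keep yum from pulling 3.7.x back in on the next update
    yum install yum-plugin-versionlock
    yum versionlock 'glusterfs*'
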
>> Mounting a Gluster volume over NFS is possible (the engine data storage 
>> domain has been mounted successfully).
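
(Spelling out the NFS path that did work as a mount line, purely for
reference; the mount point is an example. Gluster's built-in NFS server only
speaks NFSv3, hence vers=3.)

    # mount the same volume over Gluster's built-in NFS server (NFSv3)
    mount -t nfs -o vers=3,nolock 172.16.0.5:/data /mnt/data-nfs
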
>>
>> I'm asking here because glusterfs comes from the oVirt 3.4 RPM 
>> repository.
>>
>> If anyone has a hint for this problem,
>>
>> thanks
>> Jean-Michel
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
