[ovirt-users] How to force glusterfs to use RDMA?
Arman Khalatyan
arm2arm at gmail.com
Thu Mar 2 11:52:13 UTC 2017
I am not able to mount with RDMA over the CLI...
Are there some volfile parameters that need to be tuned?
/usr/bin/mount -t glusterfs \
  -o backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma \
  10.10.10.44:/GluReplica /mnt
[2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.c:2454:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.9
(args: /usr/sbin/glusterfs --volfile-server=10.10.10.44
--volfile-server=10.10.10.44 --volfile-server=10.10.10.42
--volfile-server=10.10.10.41 --volfile-server-transport=rdma
--volfile-id=/GluReplica.rdma /mnt)
[2017-03-02 11:49:47.812699] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2017-03-02 11:49:47.825210] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 2
[2017-03-02 11:49:47.828996] W [MSGID: 103071]
[rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
channel creation failed [No such device]
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init]
0-GluReplica-client-2: Failed to initialize IB Device
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load]
0-rpc-transport: 'rdma' initialization failed
[2017-03-02 11:49:47.829272] W [rpc-clnt.c:1070:rpc_clnt_connection_init]
0-GluReplica-client-2: loading of new rpc-transport failed
[2017-03-02 11:49:47.829325] I [MSGID: 101053]
[mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=588 max=0
total=0
[2017-03-02 11:49:47.829371] I [MSGID: 101053]
[mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=124 max=0
total=0
[2017-03-02 11:49:47.829391] E [MSGID: 114022]
[client.c:2530:client_init_rpc] 0-GluReplica-client-2: failed to initialize
RPC
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init]
0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2'
failed, review your volfile again
[2017-03-02 11:49:47.829425] E [MSGID: 101066]
[graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2: initializing
translator failed
[2017-03-02 11:49:47.829436] E [MSGID: 101176]
[graph.c:673:glusterfs_graph_activate] 0-graph: init failed
[2017-03-02 11:49:47.830003] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3c1) [0x7f524c9dbeb1]
-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x172) [0x7f524c9d65d2]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
received signum (1), shutting down
[2017-03-02 11:49:47.830053] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting
'/mnt'.
[2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-:
received signum (15), shutting down
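The "rdma_cm event channel creation failed [No such device]" warning above usually means the client host has no usable RDMA device, or that the rdma_cm/rdma_ucm kernel modules are not loaded. A quick sanity check, assuming the rdma-core / libibverbs-utils packages are installed on the host (a sketch, not oVirt-specific):

  lsmod | grep -E 'rdma_cm|rdma_ucm|ib_uverbs'   # are the RDMA modules loaded?
  ibv_devices                                    # does any verbs-capable device show up?
  ibv_devinfo                                    # port state should be PORT_ACTIVE

If nothing shows up here, the rdma transport cannot initialize no matter which mount options are passed.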
On Thu, Mar 2, 2017 at 12:11 PM, Sahina Bose <sabose at redhat.com> wrote:
> You will need to pass additional mount options while creating the storage
> domain (transport=rdma)
>
>
> Please let us know if this works.
>
> On Thu, Mar 2, 2017 at 2:42 PM, Arman Khalatyan <arm2arm at gmail.com> wrote:
>
>> Hi,
>> Is there a way to force the connections over RDMA only?
>> If I check the host mounts I cannot see the rdma mount option:
>> mount -l | grep gluster
>> 10.10.10.44:/GluReplica on /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica
>> type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>>
>> I have glusterized 3 nodes:
>>
>> GluReplica
>> Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
>> Volume Type: Replicate (Arbiter)
>> Replica Count: 2 + 1
>> Number of Bricks: 3
>> Transport Types: TCP, RDMA
>> Maximum no. of snapshots: 256
>> Capacity: 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
>>
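Coming back to the original question of forcing RDMA only: the volume above was created with both transports (TCP, RDMA), and in that case the client defaults to TCP unless transport=rdma is requested. One possible way to restrict the volume itself to RDMA, sketched here on the assumption that the volume can be stopped briefly and that all brick hosts have working RDMA devices:

  gluster volume stop GluReplica
  gluster volume set GluReplica config.transport rdma
  gluster volume start GluReplica

Then mount with -o transport=rdma (the client fetches the GluReplica.rdma volfile, as in the command at the top of this mail). Whether the bricks are actually listening on RDMA can be verified with:

  gluster volume status GluReplica

which lists a separate RDMA port for each brick.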