[ovirt-users] [Gluster-users] How to force glusterfs to use RDMA?
Mohammed Rafi K C
rkavunga at redhat.com
Fri Mar 3 08:07:57 UTC 2017
Hi Arman,
On 03/03/2017 12:27 PM, Arman Khalatyan wrote:
> Dear Deepak, thank you for the hints. Which gluster version are you using?
> As you can see from my previous email, the RDMA connection was tested
> with qperf and is working as expected. In my case the clients are
> servers as well; they are the hosts for oVirt. Disabling SELinux is
> not recommended by oVirt, but I will give it a try.
Gluster uses IPoIB, as mentioned by Deepak. So qperf with default options
may not be a good choice to test IPoIB, because it will fall back to any
link available between the server and the client. You can force qperf to
use the RDMA link; please refer to the link [1].
In addition, can you please provide your gluster version, the glusterd
logs, and the brick logs? Since the mount complains about the absence of
the device, this is most likely a setup issue. Otherwise it could have
been a permission-denied error; I'm not completely ruling out the
possibility of SELinux preventing the creation of the IB channel. We had
this issue in RHEL, which was fixed in 7.2 [2].
[1] :
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/sec-Testing_an_RDMA_network_after_IPoIB_is_configured.html
[2] : https://bugzilla.redhat.com/show_bug.cgi?id=1386620
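As a sketch of what [1] describes (assuming qperf is installed on both nodes, the hosts have RDMA-capable HCAs, and using the hostname from this thread), you can force qperf onto the RDMA transport instead of letting it fall back to TCP:

```shell
# On the server node: start the qperf listener (no arguments)
qperf

# On the client: run the native RDMA tests (RC = reliable connection)
# rather than the default tcp_bw/tcp_lat, which may use any link
qperf clei22.vib rc_bw rc_lat

# To test IPoIB specifically, run the TCP tests but point qperf at
# the server's IPoIB interface address (placeholder, not a real host)
qperf <IPoIB-address-of-server> tcp_bw tcp_lat
```

If `rc_bw`/`rc_lat` fail here with a device error, that points at the same RDMA setup problem the mount log shows.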
Regards
Rafi KC
>
> On 03.03.2017 at 7:50 AM, "Deepak Naidu" <dnaidu at nvidia.com
> <mailto:dnaidu at nvidia.com>> wrote:
>
> I have been testing glusterfs over RDMA, and below is the command I
> use. Reading the logs, it looks like your IB (InfiniBand) device
> is not being initialized. I am not sure whether the issue is on the
> client IB or the storage server IB. Also, have you configured your IB
> devices correctly? I am using IPoIB.
>
> Can you check your firewall and disable SELinux? I think you might
> have checked that already.
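A quick sketch of those checks (commands assumed available on RHEL/CentOS 7 hosts; disabling SELinux here is a test step only, not a recommendation for oVirt):

```shell
# Check whether SELinux is currently enforcing
getenforce

# Temporarily switch to permissive mode for testing
setenforce 0

# List the active firewalld configuration on each gluster node
firewall-cmd --list-all
```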
>
>
>
> *mount -t glusterfs -o transport=rdma storageN1:/vol0 /mnt/vol0*
>
>
>
>
>
> · *The errors below appear if you have an issue starting your
> volume. I had this issue when my transport was set to tcp,rdma; I had
> to force-start my volume. If the volume was set to tcp only, it
> would start easily.*
>
>
>
> [2017-03-02 11:49:47.829391] E [MSGID: 114022]
> [client.c:2530:client_init_rpc] 0-GluReplica-client-2: failed to
> initialize RPC
> [2017-03-02 11:49:47.829413] E [MSGID: 101019]
> [xlator.c:433:xlator_init] 0-GluReplica-client-2: Initialization
> of volume 'GluReplica-client-2' failed, review your volfile again
> [2017-03-02 11:49:47.829425] E [MSGID: 101066]
> [graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2:
> initializing translator failed
> [2017-03-02 11:49:47.829436] E [MSGID: 101176]
> [graph.c:673:glusterfs_graph_activate] 0-graph: init failed
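The force start described above would look like this (a sketch, using the volume name from this thread):

```shell
# Start the volume even if the normal start fails, which happened
# here when the transport was set to tcp,rdma
gluster volume start GluReplica force

# Confirm the bricks and their TCP/RDMA ports are online afterwards
gluster volume status GluReplica
```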
>
>
>
> · *The errors below appear if there is an issue with the IB device,
> or if it is not configured properly.*
>
>
>
> [2017-03-02 11:49:47.828996] W [MSGID: 103071]
> [rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm
> event channel creation failed [No such device]
> [2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init]
> 0-GluReplica-client-2: Failed to initialize IB Device
> [2017-03-02 11:49:47.829080] W
> [rpc-transport.c:354:rpc_transport_load] 0-rpc-transport: 'rdma'
> initialization failed
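Given the "No such device" error above, it may be worth confirming that the kernel actually exposes an RDMA device (a sketch; assumes the libibverbs-utils and infiniband-diags packages are installed):

```shell
# List RDMA devices known to libibverbs; an empty list would match
# the "rdma_cm event channel creation failed [No such device]" error
ibv_devices

# Show port state and link layer for each HCA
ibstat

# The rdma_cm kernel module must be loaded for librdmacm to work
lsmod | grep rdma_cm
```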
>
>
>
>
>
> --
>
> Deepak
>
>
>
>
>
> *From:*gluster-users-bounces at gluster.org
> <mailto:gluster-users-bounces at gluster.org>
> [mailto:gluster-users-bounces at gluster.org
> <mailto:gluster-users-bounces at gluster.org>] *On Behalf Of *Sahina Bose
> *Sent:* Thursday, March 02, 2017 10:26 PM
> *To:* Arman Khalatyan; gluster-users at gluster.org
> <mailto:gluster-users at gluster.org>; Rafi Kavungal Chundattu Parambil
> *Cc:* users
> *Subject:* Re: [Gluster-users] [ovirt-users] How to force
> glusterfs to use RDMA?
>
>
>
> [Adding gluster users to help with error]
>
> [2017-03-02 11:49:47.828996] W [MSGID: 103071]
> [rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm
> event channel creation failed [No such device]
>
>
>
> On Thu, Mar 2, 2017 at 5:36 PM, Arman Khalatyan <arm2arm at gmail.com
> <mailto:arm2arm at gmail.com>> wrote:
>
> BTW RDMA is working as expected:
> [root@clei26 ~]# qperf clei22.vib tcp_bw tcp_lat
> tcp_bw:
> bw = 475 MB/sec
> tcp_lat:
> latency = 52.8 us
> [root@clei26 ~]#
>
> thank you beforehand.
>
> Arman.
>
>
>
> On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan
> <arm2arm at gmail.com <mailto:arm2arm at gmail.com>> wrote:
>
> just for reference:
> gluster volume info
>
> Volume Name: GluReplica
> Type: Replicate
> Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp,rdma
> Bricks:
> Brick1: 10.10.10.44:/zclei22/01/glu
> Brick2: 10.10.10.42:/zclei21/01/glu
> Brick3: 10.10.10.41:/zclei26/01/glu (arbiter)
> Options Reconfigured:
> network.ping-timeout: 30
> server.allow-insecure: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.data-self-heal-algorithm: full
> features.shard: on
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
> nfs.disable: on
>
>
>
> [root@clei21 ~]# gluster volume status
> Status of volume: GluReplica
> Gluster process TCP Port RDMA Port
> Online Pid
> ------------------------------------------------------------------------------
> Brick 10.10.10.44:/zclei22/01/glu 49158 49159
> Y 15870
> Brick 10.10.10.42:/zclei21/01/glu 49156 49157
> Y 17473
> Brick 10.10.10.41:/zclei26/01/glu 49153 49154
> Y 18897
> Self-heal Daemon on localhost N/A N/A
> Y 17502
> Self-heal Daemon on 10.10.10.41 N/A N/A
> Y 13353
> Self-heal Daemon on 10.10.10.44 N/A N/A
> Y 32745
>
> Task Status of Volume GluReplica
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
>
> On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan
> <arm2arm at gmail.com <mailto:arm2arm at gmail.com>> wrote:
>
> I am not able to mount with RDMA over the CLI....
>
> Are there some volfile parameters that need to be tuned?
> /usr/bin/mount -t glusterfs -o
> backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma
> 10.10.10.44:/GluReplica /mnt
>
> [2017-03-02 11:49:47.795511] I [MSGID: 100030]
> [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.8.9 (args: /usr/sbin/glusterfs
> --volfile-server=10.10.10.44 --volfile-server=10.10.10.44
> --volfile-server=10.10.10.42 --volfile-server=10.10.10.41
> --volfile-server-transport=rdma --volfile-id=/GluReplica.rdma /mnt)
> [2017-03-02 11:49:47.812699] I [MSGID: 101190]
> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
> [2017-03-02 11:49:47.825210] I [MSGID: 101190]
> [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
> [2017-03-02 11:49:47.828996] W [MSGID: 103071]
> [rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm
> event channel creation failed [No such device]
> [2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init]
> 0-GluReplica-client-2: Failed to initialize IB Device
> [2017-03-02 11:49:47.829080] W
> [rpc-transport.c:354:rpc_transport_load] 0-rpc-transport: 'rdma'
> initialization failed
> [2017-03-02 11:49:47.829272] W
> [rpc-clnt.c:1070:rpc_clnt_connection_init] 0-GluReplica-client-2:
> loading of new rpc-transport failed
> [2017-03-02 11:49:47.829325] I [MSGID: 101053]
> [mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=588
> max=0 total=0
> [2017-03-02 11:49:47.829371] I [MSGID: 101053]
> [mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=124
> max=0 total=0
> [2017-03-02 11:49:47.829391] E [MSGID: 114022]
> [client.c:2530:client_init_rpc] 0-GluReplica-client-2: failed to
> initialize RPC
> [2017-03-02 11:49:47.829413] E [MSGID: 101019]
> [xlator.c:433:xlator_init] 0-GluReplica-client-2: Initialization
> of volume 'GluReplica-client-2' failed, review your volfile again
> [2017-03-02 11:49:47.829425] E [MSGID: 101066]
> [graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2:
> initializing translator failed
> [2017-03-02 11:49:47.829436] E [MSGID: 101176]
> [graph.c:673:glusterfs_graph_activate] 0-graph: init failed
> [2017-03-02 11:49:47.830003] W
> [glusterfsd.c:1327:cleanup_and_exit]
> (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3c1) [0x7f524c9dbeb1]
> -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x172)
> [0x7f524c9d65d2] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
> [0x7f524c9d5b4b] ) 0-: received signum (1), shutting down
> [2017-03-02 11:49:47.830053] I [fuse-bridge.c:5794:fini] 0-fuse:
> Unmounting '/mnt'.
> [2017-03-02 11:49:47.831014] W
> [glusterfsd.c:1327:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] )
> 0-: received signum (15), shutting down
> [2017-03-02 11:49:47.831014] W
> [glusterfsd.c:1327:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] )
> 0-: received signum (15), shutting down
>
>
>
>
> On Thu, Mar 2, 2017 at 12:11 PM, Sahina Bose <sabose at redhat.com
> <mailto:sabose at redhat.com>> wrote:
>
> You will need to pass additional mount options while creating the
> storage domain (transport=rdma)
>
> Please let us know if this works.
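In other words, `transport=rdma` goes into the storage domain's mount options, so the host-side mount should end up equivalent to the following (a sketch, using only the addresses and paths that appear in this thread):

```shell
# What oVirt should effectively run once transport=rdma is added
# to the storage domain's custom mount options
mount -t glusterfs \
  -o backup-volfile-servers=10.10.10.42:10.10.10.41,transport=rdma \
  10.10.10.44:/GluReplica \
  '/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica'
```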
>
>
>
> On Thu, Mar 2, 2017 at 2:42 PM, Arman Khalatyan <arm2arm at gmail.com
> <mailto:arm2arm at gmail.com>> wrote:
>
> Hi,
>
> Is there a way to force the connections over RDMA only?
>
> If I check the host mounts, I cannot see an rdma mount option:
> mount -l| grep gluster
> 10.10.10.44:/GluReplica on
> /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica type
> fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
> I have glusterized 3 nodes:
>
> GluReplica
> Volume ID:
> ee686dfe-203a-4caa-a691-26353460cc48
> Volume Type:
> Replicate (Arbiter)
> Replica Count:
> 2 + 1
> Number of Bricks:
> 3
> Transport Types:
> TCP, RDMA
> Maximum no of snapshots:
> 256
> Capacity:
> 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org <mailto:Users at ovirt.org>
> http://lists.ovirt.org/mailman/listinfo/users
> <http://lists.ovirt.org/mailman/listinfo/users>
>
>
>
>
>
>
>
>
>
>
>
>
>