<div dir="ltr">I think there are some bug in the vdsmd checks;<br><br>2017-03-03 11:15:42,413 ERROR (jsonrpc/7) [storage.HSM] Could not connect to storageServer (hsm:2391)<br>Traceback (most recent call last):<br> File "/usr/share/vdsm/storage/hsm.py", line 2388, in connectStorageServer<br> conObj.connect()<br> File "/usr/share/vdsm/storage/storageServer.py", line 167, in connect<br> self.getMountObj().getRecord().fs_file)<br> File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line 237, in getRecord<br> (self.fs_spec, self.fs_file))<br>OSError: [Errno 2] Mount of `10.10.10.44:/GluReplica` at `/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica` does not exist<br>2017-03-03 11:15:42,416 INFO (jsonrpc/7) [dispatcher] Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': u'4b2ea911-ef35-4de0-bd11-c4753e6048d8'}]} (logUtils:52)<br>2017-03-03 11:15:42,417 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 2.63 seconds (__init__:515)<br>2017-03-03 11:15:44,239 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:515)<br><br>[root@clei21 ~]# df | grep glu<br>10.10.10.44:/GluReplica.rdma 3770662912 407818240 3362844672 11% /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica<br><br>ls "/rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica"<br>09f95051-bc93-4cf5-85dc-16960cee74e4 __DIRECT_IO_TEST__<br>[root@clei21 ~]# touch /rhev/data-center/mnt/glusterSD/<a href="http://10.10.10.44">10.10.10.44</a>\:_GluReplica/testme.txt<br>[root@clei21 ~]# unlink /rhev/data-center/mnt/glusterSD/<a href="http://10.10.10.44">10.10.10.44</a>\:_GluReplica/testme.txt<br><br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 3, 2017 at 11:51 AM, Arman Khalatyan <span dir="ltr"><<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div>Thank you all for the nice hints.<br></div>Somehow my host was not able to access the userspace RDMA, after installing:<br>yum install -y libmlx4.x86_64<br></div><div><br>I can mount:<span class=""><br>/usr/bin/mount -t glusterfs -o backup-volfile-servers=10.10.<wbr>10.44:10.10.10.42:10.10.10.41,<wbr>transport=rdma 10.10.10.44:/GluReplica /mnt<br></span>10.10.10.44:/GluReplica.rdma 3770662912 407817216 <a href="tel:(336)%20284-5696" value="+13362845696" target="_blank">3362845696</a> 11% /mnt<br><br></div>Looks the rdma and gluster are working except ovirt GUI:(<br><br>With MountOptions:<br>backup-volfile-servers=10.10.<wbr>10.44:10.10.10.42:10.10.10.41,<wbr>transport=rdma<br><br></div><div>I am not able to activate storage.<br></div><br><br></div>---Gluster Status ----<span class=""><br>gluster volume status <br>Status of volume: GluReplica<br>Gluster process <wbr> TCP Port RDMA Port Online Pid<br>------------------------------<wbr>------------------------------<wbr>------------------<br></span>Brick 10.10.10.44:/zclei22/01/glu <wbr> 49162 49163 Y 17173<br>Brick 10.10.10.42:/zclei21/01/glu <wbr> 49156 49157 Y 17113<br>Brick 10.10.10.41:/zclei26/01/glu <wbr> 49157 49158 Y 16404<br>Self-heal Daemon on localhost N/A N/A Y 16536<br>Self-heal Daemon on clei21.vib N/A N/A Y 17134<br>Self-heal Daemon on 10.10.10.44 N/A N/A Y 17329<span class=""><br> <br>Task Status of Volume 
GluReplica<br>------------------------------<wbr>------------------------------<wbr>------------------<br>There are no active volume tasks<br><br></span><div><br>-----IB status -----<br><br><div>ibstat<br>CA 'mlx4_0'<br> CA type: MT26428<br> Number of ports: 1<br> Firmware version: 2.7.700<br> Hardware version: b0<br> Node GUID: 0x002590ffff163758<br> System image GUID: 0x002590ffff16375b<br> Port 1:<br> State: Active<br> Physical state: LinkUp<br> Rate: 10<br> Base lid: 273<br> LMC: 0<br> SM lid: 3<br> Capability mask: 0x02590868<br> Port GUID: 0x002590ffff163759<br> Link layer: InfiniBand<br><br>Not bad for SDR switch ! :-P<br> qperf clei22.vib ud_lat ud_bw<br>ud_lat:<br> latency = 23.6 us<br>ud_bw:<br> send_bw = 981 MB/sec<br> recv_bw = 980 MB/sec<br><br><br><div><br></div></div></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 3, 2017 at 9:08 AM, Deepak Naidu <span dir="ltr"><<a href="mailto:dnaidu@nvidia.com" target="_blank">dnaidu@nvidia.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div link="blue" vlink="purple" lang="EN-US">
<div class="m_-9222584374069772987m_-7301199218385228754WordSection1"><span>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">>></span> As you can see from my previous email that the RDMA connection tested with qperf.<u></u><u></u></p>
</span><p class="MsoNormal">I think you have wrong command. Your testing <b>TCP & not RDMA.
</b>Also check if you have RDMA & IB modules loaded on your hosts.<u></u><u></u></p><span>
<p class="MsoNormal" style="margin-bottom:12.0pt">root@clei26 ~]# qperf clei22.vib
<span style="background:yellow">tcp_bw tcp_lat</span><br>
<span style="background:yellow">tcp_bw:</span><br>
bw = 475 MB/sec<br>
<span style="background:yellow">tcp_lat:</span><br>
latency = 52.8 us<br>
[root@clei26 ~]# <u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
</span><p class="MsoNormal"><b><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">Please run below command to test RDMA<u></u><u></u></span></u></b></p>
<p class="MsoNormal"><b><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u><span style="text-decoration:none"> </span><u></u></span></u></b></p>
<p class="MsoNormal"><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">[root@storageN2 ~]# qperf storageN1
<b>ud_lat</b> <b>ud_bw</b><u></u><u></u></span></u></p>
<p class="MsoNormal"><b><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">ud_lat</span></u></b><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">:<u></u><u></u></span></u></p>
<p class="MsoNormal"><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> latency = 7.51 us<u></u><u></u></span></u></p>
<p class="MsoNormal"><b><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">ud_bw</span></u></b><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">:<u></u><u></u></span></u></p>
<p class="MsoNormal"><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> send_bw = 9.21 GB/sec<u></u><u></u></span></u></p>
<p class="MsoNormal"><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> recv_bw = 9.21 GB/sec<u></u><u></u></span></u></p>
<p class="MsoNormal"><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">[root@sc-sdgx-202 ~]#<u></u><u></u></span></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">Read qperf man pages for more info.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> * To run a TCP bandwidth and latency test:<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> qperf myserver tcp_bw tcp_lat<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> * To run a UDP latency test and then cause the server to terminate:<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> qperf myserver udp_lat quit<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> * To measure the RDMA UD latency and bandwidth:<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> qperf myserver ud_lat ud_bw<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> * To measure RDMA UC bi-directional bandwidth:<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> qperf myserver rc_bi_bw<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> * To get a range of TCP latencies with a message size from 1 to 64K<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> qperf myserver -oo msg_size:1:64K:*2 -vu tcp_lat<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><b><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">Check if you have RDMA & IB modules loaded<u></u><u></u></span></u></b></p>
<p class="MsoNormal"><b><u><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u><span style="text-decoration:none"> </span><u></u></span></u></b></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">lsmod | grep -i ib<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">lsmod | grep -i rdma<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">--<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">Deepak<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span></b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> Arman Khalatyan [mailto:<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>]
<br>
<b>Sent:</b> Thursday, March 02, 2017 10:57 PM<br>
<b>To:</b> Deepak Naidu<br>
<b>Cc:</b> Rafi Kavungal Chundattu Parambil; <a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a>; users; Sahina Bose<br>
<b>Subject:</b> RE: [Gluster-users] [ovirt-users] Hot to force glusterfs to use RDMA?<u></u><u></u></span></p><div><div class="m_-9222584374069772987h5">
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<p class="MsoNormal">Dear Deepak, thank you for the hints, which gluster are you using?<u></u><u></u></p>
<div>
<p class="MsoNormal">As you can see from my previous email that the RDMA connection tested with qperf. It is working as expected. In my case the clients are servers as well, they are hosts for the ovirt. Disabling selinux is nor recommended by ovirt, but i
will give a try.<u></u><u></u></p>
</div>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
<div>
<p class="MsoNormal">Am 03.03.2017 7:50 vorm. schrieb "Deepak Naidu" <<a href="mailto:dnaidu@nvidia.com" target="_blank">dnaidu@nvidia.com</a>>:<u></u><u></u></p>
<div>
<div>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">I have been testing glusterfs over RDMA & below is the command I use. Reading up the logs, it looks
like your IB(InfiniBand) device is not being initialized. I am not sure if u have an issue on the client IB or the storage server IB. Also have you configured ur IB devices correctly. I am using IPoIB.</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">Can you check your firewall, disable selinux, I think, you might have checked it already ?</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><b><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">mount -t glusterfs -o transport=rdma storageN1:/vol0 /mnt/vol0</span></b><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> </span><u></u><u></u></p>
<p class="m_-9222584374069772987m_-7301199218385228754m8813890542634899653msolistparagraph"><span style="font-size:11.0pt;font-family:Symbol;color:#1f497d">·</span><span style="font-size:7.0pt;color:#1f497d">
</span><b><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">The below error seems if you have issue starting your volume. I had issue, when my transport was set to tcp,rdma. I had to force start my volume. If I had set it only
to tcp on the volume, the volume would start easily.</span></b><u></u><u></u></p>
<div>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span style="background:yellow">[2017-03-02 11:49:47.829391] E [MSGID: 114022] [client.c:2530:client_init_rpc<wbr>] 0-GluReplica-client-2: failed to initialize RPC<br>
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2' failed, review your volfile again<br>
[2017-03-02 11:49:47.829425] E [MSGID: 101066] [graph.c:324:glusterfs_graph_i<wbr>nit] 0-GluReplica-client-2: initializing translator failed<br>
[2017-03-02 11:49:47.829436] E [MSGID: 101176] [graph.c:673:glusterfs_graph_a<wbr>ctivate] 0-graph: init failed</span><u></u><u></u></p>
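In my case a forced start is what got the volume going again; for your volume that would be something like the sketch below (illustrative only, double-check the bricks afterwards):

# Force-start the volume if it refuses to start with transport tcp,rdma,
# then verify that the bricks and their RDMA ports came up.
gluster volume start GluReplica force
gluster volume status GluReplica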
<p class="MsoNormal"><span style="background:yellow"> </span><u></u><u></u></p>
</div>
<p class="m_-9222584374069772987m_-7301199218385228754m8813890542634899653msolistparagraph"><span style="font-size:11.0pt;font-family:Symbol;color:#1f497d">·</span><span style="font-size:7.0pt;color:#1f497d">
</span><b><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">The below error seems if you have issue with IB device. If not configured properly.</span></b><u></u><u></u></p>
<div>
<p class="MsoNormal"><span style="background:yellow"> </span><u></u><u></u></p>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span style="background:yellow">[2017-03-02 11:49:47.828996] W [MSGID: 103071] [rdma.c:4589:__gf_rdma_ctx_cre<wbr>ate] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such
device]<br>
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init] 0-GluReplica-client-2: Failed to initialize IB Device<br>
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_trans<wbr>port_load] 0-rpc-transport: 'rdma' initialization failed</span><u></u><u></u></p>
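The "No such device" from rdma_cm usually means userspace cannot see a verbs device at all (missing userspace driver or kernel modules). A couple of rough checks, assuming the standard libibverbs/infiniband-diags tools are installed:

# Rough checks that the RDMA stack is visible from userspace:
ibv_devices                                 # should list the HCA (e.g. an mlx4 device)
ls /dev/infiniband/                         # rdma_cm and uverbs* should exist here
lsmod | grep -E 'rdma_ucm|ib_uverbs|mlx4'   # kernel side of the same path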
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> </span><u></u><u></u></p>
</div>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">--</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">Deepak</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"> </span><u></u><u></u></p>
<p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span></b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">
<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@gluster.<wbr>org</a> [mailto:<a href="mailto:gluster-users-bounces@gluster.org" target="_blank">gluster-users-bounces@<wbr>gluster.org</a>]
<b>On Behalf Of </b>Sahina Bose<br>
<b>Sent:</b> Thursday, March 02, 2017 10:26 PM<br>
<b>To:</b> Arman Khalatyan; <a href="mailto:gluster-users@gluster.org" target="_blank">
gluster-users@gluster.org</a>; Rafi Kavungal Chundattu Parambil<br>
<b>Cc:</b> users<br>
<b>Subject:</b> Re: [Gluster-users] [ovirt-users] Hot to force glusterfs to use RDMA?</span><u></u><u></u></p>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
<div>
<p class="MsoNormal">[Adding gluster users to help with error]<br>
<br>
[2017-03-02 11:49:47.828996] W [MSGID: 103071] [rdma.c:4589:__gf_rdma_ctx_cre<wbr>ate] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
<div>
<p class="MsoNormal">On Thu, Mar 2, 2017 at 5:36 PM, Arman Khalatyan <<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>> wrote:<u></u><u></u></p>
<div>
<div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt">BTW RDMA is working as expected:<br>
root@clei26 ~]# qperf clei22.vib tcp_bw tcp_lat<br>
tcp_bw:<br>
bw = 475 MB/sec<br>
tcp_lat:<br>
latency = 52.8 us<br>
[root@clei26 ~]# <u></u><u></u></p>
</div>
<p class="MsoNormal">thank you beforehand.<u></u><u></u></p>
</div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span class="m_-9222584374069772987m_-7301199218385228754m8813890542634899653hoenzb"><span style="color:#888888">Arman.</span></span><u></u><u></u></p>
</div>
<div>
<div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
<div>
<p class="MsoNormal">On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan <<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>> wrote:<u></u><u></u></p>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt">just for reference:<br>
gluster volume info

Volume Name: GluReplica
Type: Replicate
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp,rdma
Bricks:
Brick1: 10.10.10.44:/zclei22/01/glu
Brick2: 10.10.10.42:/zclei21/01/glu
Brick3: 10.10.10.41:/zclei26/01/glu (arbiter)
Options Reconfigured:
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.data-self-heal-algorithm: full
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
nfs.disable: on

[root@clei21 ~]# gluster volume status
Status of volume: GluReplica
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.10.10.44:/zclei22/01/glu           49158     49159      Y       15870
Brick 10.10.10.42:/zclei21/01/glu           49156     49157      Y       17473
Brick 10.10.10.41:/zclei26/01/glu           49153     49154      Y       18897
Self-heal Daemon on localhost               N/A       N/A        Y       17502
Self-heal Daemon on 10.10.10.41             N/A       N/A        Y       13353
Self-heal Daemon on 10.10.10.44             N/A       N/A        Y       32745

Task Status of Volume GluReplica
------------------------------------------------------------------------------
There are no active volume tasks
</div>
<div>
<div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
<div>
<p class="MsoNormal">On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan <<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>> wrote:<u></u><u></u></p>
<div>
<div>
<p class="MsoNormal">I am not able to mount with RDMA over cli....<u></u><u></u></p>
</div>
<p class="MsoNormal" style="margin-bottom:12.0pt">Are there some volfile parameters needs to be tuned?<br>
/usr/bin/mount -t glusterfs -o backup-volfile-servers=10.10.1<wbr>0.44:10.10.10.42:10.10.10.41,t<wbr>ransport=rdma 10.10.10.44:/GluReplica /mnt<br>
<br>
[2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.9 (args: /usr/sbin/glusterfs --volfile-server=10.10.10.44 --volfile-server=10.10.10.44 --volfile-server=10.10.10.42
--volfile-server=10.10.10.41 --volfile-server-transport=rdm<wbr>a --volfile-id=/GluReplica.rdma /mnt)<br>
[2017-03-02 11:49:47.812699] I [MSGID: 101190] [event-epoll.c:628:event_dispa<wbr>tch_epoll_worker] 0-epoll: Started thread with index 1<br>
[2017-03-02 11:49:47.825210] I [MSGID: 101190] [event-epoll.c:628:event_dispa<wbr>tch_epoll_worker] 0-epoll: Started thread with index 2<br>
[2017-03-02 11:49:47.828996] W [MSGID: 103071] [rdma.c:4589:__gf_rdma_ctx_cre<wbr>ate] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]<br>
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init] 0-GluReplica-client-2: Failed to initialize IB Device<br>
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_trans<wbr>port_load] 0-rpc-transport: 'rdma' initialization failed<br>
[2017-03-02 11:49:47.829272] W [rpc-clnt.c:1070:rpc_clnt_conn<wbr>ection_init] 0-GluReplica-client-2: loading of new rpc-transport failed<br>
[2017-03-02 11:49:47.829325] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destr<wbr>oy] 0-GluReplica-client-2: size=588 max=0 total=0<br>
[2017-03-02 11:49:47.829371] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destr<wbr>oy] 0-GluReplica-client-2: size=124 max=0 total=0<br>
[2017-03-02 11:49:47.829391] E [MSGID: 114022] [client.c:2530:client_init_rpc<wbr>] 0-GluReplica-client-2: failed to initialize RPC<br>
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2' failed, review your volfile again<br>
[2017-03-02 11:49:47.829425] E [MSGID: 101066] [graph.c:324:glusterfs_graph_i<wbr>nit] 0-GluReplica-client-2: initializing translator failed<br>
[2017-03-02 11:49:47.829436] E [MSGID: 101176] [graph.c:673:glusterfs_graph_a<wbr>ctivate] 0-graph: init failed<br>
[2017-03-02 11:49:47.830003] W [glusterfsd.c:1327:cleanup_and<wbr>_exit] (-->/usr/sbin/glusterfs(mgmt_g<wbr>etspec_cbk+0x3c1) [0x7f524c9dbeb1] -->/usr/sbin/glusterfs(gluster<wbr>fs_process_volfp+0x172) [0x7f524c9d65d2] -->/usr/sbin/glusterfs(cleanup<wbr>_and_exit+0x6b) [0x7f524c9d5b4b]
) 0-: received signum (1), shutting down<br>
[2017-03-02 11:49:47.830053] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting '/mnt'.<br>
[2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and<wbr>_exit] (-->/lib64/libpthread.so.0(+0x<wbr>7dc5) [0x7f524b343dc5] -->/usr/sbin/glusterfs(gluster<wbr>fs_sigwaiter+0xe5) [0x7f524c9d5cd5] -->/usr/sbin/glusterfs(cleanup<wbr>_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-: received
signum (15), shutting down<br>
[2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and<wbr>_exit] (-->/lib64/libpthread.so.0(+0x<wbr>7dc5) [0x7f524b343dc5] -->/usr/sbin/glusterfs(gluster<wbr>fs_sigwaiter+0xe5) [0x7f524c9d5cd5] -->/usr/sbin/glusterfs(cleanup<wbr>_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-: received
signum (15), shutting down<br>
<br>
<u></u><u></u></p>
</div>
<div>
<div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
<div>
<p class="MsoNormal">On Thu, Mar 2, 2017 at 12:11 PM, Sahina Bose <<a href="mailto:sabose@redhat.com" target="_blank">sabose@redhat.com</a>> wrote:<u></u><u></u></p>
<div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt">You will need to pass additional mount options while creating the storage domain (transport=rdma)<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">Please let us know if this works.<u></u><u></u></p>
</div>
</div>
<div>
<p class="MsoNormal"> <u></u><u></u></p>
<div>
<div>
<div>
<p class="MsoNormal">On Thu, Mar 2, 2017 at 2:42 PM, Arman Khalatyan <<a href="mailto:arm2arm@gmail.com" target="_blank">arm2arm@gmail.com</a>> wrote:<u></u><u></u></p>
</div>
</div>
<blockquote style="border:none;border-left:solid #cccccc 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-top:5.0pt;margin-right:0in;margin-bottom:5.0pt">
<div>
<div>
<div>
<div>
<p class="MsoNormal">Hi,<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal">Are there way to force the connections over RDMA only?<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt">If I check host mounts I cannot see rdma mount option:<br>
mount -l| grep gluster<br>
10.10.10.44:/GluReplica on /rhev/data-center/mnt/glusterS<wbr>D/10.10.10.44:_GluReplica type fuse.glusterfs (rw,relatime,user_id=0,group_i<wbr>d=0,default_permissions,allow_<wbr>other,max_read=131072)<u></u><u></u></p>
</div>
<p class="MsoNormal">I have glusterized 3 nodes:
<br>
<br>
GluReplica<br>
Volume ID:<br>
ee686dfe-203a-4caa-a691-263534<wbr>60cc48<br>
Volume Type:<br>
Replicate (Arbiter)<br>
Replica Count:<br>
2 + 1<br>
Number of Bricks:<br>
3<br>
Transport Types:<br>
TCP, RDMA<br>
Maximum no of snapshots:<br>
256<br>
Capacity:<br>
3.51 TiB total, 190.56 GiB used, 3.33 TiB free<u></u><u></u></p>
</div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
</div>
<p class="MsoNormal" style="margin-bottom:12.0pt">______________________________<wbr>_________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/users</a><u></u><u></u></p>
</blockquote>
</div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
</div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
</div>
</div>
</div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
</div>
</div>
</div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
</div>
</div>
</div>
<p class="MsoNormal"> <u></u><u></u></p>
</div>
</div>
</div>
<div>
<div class="MsoNormal" style="text-align:center" align="center">
<hr size="2" align="center" width="100%">
</div>
</div>
<div>
<p class="MsoNormal">This email message is for the sole use of the intended recipient(s) and may contain confidential information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact
the sender by reply email and destroy all copies of the original message. <u></u><u></u></p>
</div>
<div>
<div class="MsoNormal" style="text-align:center" align="center">
<hr size="2" align="center" width="100%">
</div>
</div>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</div></div></div>
</div>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div>