From jmfrancois at anaxys.com Fri Sep 25 03:02:34 2015
From: Jean-Michel FRANCOIS <jmfrancois at anaxys.com>
To: users at ovirt.org
Subject: [ovirt-users] Cannot mount gluster storage data
Date: Fri, 25 Sep 2015 09:02:31 +0200
Message-ID: <5604F187.7020808@anaxys.com>

Hi Ovirt users,

I'm running ovirt hosted 3.4 with gluster data storage.
When I add a new host (CentOS 6.6), the data storage (as a glusterfs mount) cannot be mounted.
I have the following errors in the gluster client log file:
[2015-09-24 12:27:22.636221] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-24 12:27:22.636588] W [socket.c:588:__socket_rwv] 0-glusterfs: readv on 172.16.0.5:24007 failed (No data available)
[2015-09-24 12:27:22.637307] E [rpc-clnt.c:362:saved_frames_unwind] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1eb)[0x7f427fb3063b] (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f427f8fc1d7] (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f427f8fc2ee] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f427f8fc3bb] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1c2)[0x7f427f8fc9f2] ))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2015-09-24 12:27:22.636344 (xid=0x1)
[2015-09-24 12:27:22.637333] E [glusterfsd-mgmt.c:1604:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/data)
[2015-09-24 12:27:22.637360] W [glusterfsd.c:1219:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x20e) [0x7f427f8fc1fe] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3f2) [0x40d5d2] -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received signum (0), shutting down
[2015-09-24 12:27:22.637375] I [fuse-bridge.c:5595:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/172.16.0.5:_data'.
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received signum (15), shutting down
[2015-09-24 12:27:22.646246] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7a51) [0x7f427ec18a51] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received signum (15), shutting down
And nothing on the server side.
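
For reference, what vdsm attempts here boils down to something like the
following manual mount (a sketch; server IP, volume name and mount point
are taken from the log above):

  mount -t glusterfs 172.16.0.5:/data /rhev/data-center/mnt/glusterSD/172.16.0.5:_data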

I suppose it is a version issue, since on the server side I have:
glusterfs-api-3.6.3-1.el6.x86_64
glusterfs-fuse-3.6.3-1.el6.x86_64
glusterfs-libs-3.6.3-1.el6.x86_64
glusterfs-3.6.3-1.el6.x86_64
glusterfs-cli-3.6.3-1.el6.x86_64
glusterfs-rdma-3.6.3-1.el6.x86_64
glusterfs-server-3.6.3-1.el6.x86_64

and on the new host:
glusterfs-3.7.4-2.el6.x86_64
glusterfs-api-3.7.4-2.el6.x86_64
glusterfs-libs-3.7.4-2.el6.x86_64
glusterfs-fuse-3.7.4-2.el6.x86_64
glusterfs-cli-3.7.4-2.el6.x86_64
glusterfs-server-3.7.4-2.el6.x86_64
glusterfs-client-xlators-3.7.4-2.el6.x86_64
glusterfs-rdma-3.7.4-2.el6.x86_64
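
For reference, both package lists above come from something like the
following, run on each host:

  rpm -qa 'glusterfs*' | sort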

But since it is a production system, I'm not confident about performing a gluster server upgrade.
Mounting a gluster volume over NFS is possible (the engine data storage has been mounted successfully).
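
The working NFS mount looks roughly like this (a sketch; exact options
from memory; gluster's built-in NFS server only speaks NFSv3, and
/mnt/test is just an example mount point):

  mount -t nfs -o vers=3,nolock 172.16.0.5:/data /mnt/test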

I'm asking here because glusterfs comes from the ovirt 3.4 rpm repository.

If anyone has a hint for this problem,

thanks
Jean-Michel

From ravishankar at redhat.com Fri Sep 25 03:55:55 2015
From: Ravishankar N <ravishankar at redhat.com>
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot mount gluster storage data
Date: Fri, 25 Sep 2015 13:25:45 +0530
Message-ID: <5604FE01.7020309@redhat.com>
In-Reply-To: <5604F187.7020808@anaxys.com>

On 09/25/2015 12:32 PM, Jean-Michel FRANCOIS wrote:
> Hi Ovirt users,
>
> I'm running ovirt hosted 3.4 with gluster data storage.
> When I add a new host (CentOS 6.6), the data storage (as a glusterfs
> mount) cannot be mounted.
> [...]
> And nothing on the server side.

This does look like an op-version issue. Adding Atin for any possible help.
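
In the meantime, you can check which operating version the cluster is
running; glusterd records it in its info file (a quick check, path from
memory):

  grep operating-version /var/lib/glusterd/glusterd.info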
-Ravi

From jmfrancois at anaxys.com Fri Sep 25 11:16:50 2015
From: Jean-Michel FRANCOIS <jmfrancois at anaxys.com>
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot mount gluster storage data
Date: Fri, 25 Sep 2015 17:16:45 +0200
Message-ID: <5605655D.6090603@anaxys.com>
In-Reply-To: <5604FE01.7020309@redhat.com>

Hi Ravi,

Thanks for looking at my problem.
Both hosts are CentOS 6.6; the first one was installed one year ago and
the second one this week.

Jean-Michel

> On 09/25/2015 12:32 PM, Jean-Michel FRANCOIS wrote:
>> [...]
>
> This does look like an op-version issue. Adding Atin for any possible
> help.
> -Ravi

--------------040909070901020105030108-- --===============8312076537806582047== Content-Type: multipart/alternative MIME-Version: 1.0 Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="attachment.bin" VGhpcyBpcyBhIG11bHRpLXBhcnQgbWVzc2FnZSBpbiBNSU1FIGZvcm1hdC4KLS0tLS0tLS0tLS0t LS0wNDA5MDkwNzA5MDEwMjAxMDUwMzAxMDgKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFy c2V0PXdpbmRvd3MtMTI1MjsgZm9ybWF0PWZsb3dlZApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5n OiA3Yml0CgpIaSBSYXZpLAoKVGhhbmtzIGZvciBsb29raW5nIGF0IG15IHByb2JsZW0uClRoZSB0 d28gaG9zdCBhcmUgQ2VudG9zNi42IHRoZSBmaXJzdCBvbmUgaGFzIGJlZW4gaW5zdGFsbGVkIG9u ZSB5ZWFyIGFnbyAKYW5kIHRoZSBzZWNvbmQgb25lIHRoaXMgd2Vlay4KCkplYW4tTWljaGVsCj4K PiBPbiAwOS8yNS8yMDE1IDEyOjMyIFBNLCBKZWFuLU1pY2hlbCBGUkFOQ09JUyB3cm90ZToKPj4g SGkgT3ZpcnQgdXNlcnMsCj4+Cj4+IEknbSBydW5uaW5nIG92aXJ0IGhvc3RlZCAzLjQgd2l0aCBn bHVzdGVyIGRhdGEgc3RvcmFnZS4KPj4gV2hlbiBJIGFkZCBhIG5ldyBob3N0IChDZW50b3MgNi42 KSB0aGUgZGF0YSBzdG9yYWdlIChhcyBhIGdsc3V0ZXJmcykgCj4+IGNhbm5vdCBiZSBtb3VudC4K Pj4gSSBoYXZlIHRoZSBmb2xsb3dpbmcgZXJyb3JzIGluIGdsdXN0ZXIgY2xpZW50IGxvZyBmaWxl IDoKPj4gWzIwMTUtMDktMjQgMTI6Mjc6MjIuNjM2MjIxXSBJIFtNU0dJRDogMTAxMTkwXSAKPj4g W2V2ZW50LWVwb2xsLmM6NjMyOmV2ZW50X2Rpc3BhdGNoX2Vwb2xsX3dvcmtlcl0gMC1lcG9sbDog U3RhcnRlZCAKPj4gdGhyZWFkIHdpdGggaW5kZXggMQo+PiBbMjAxNS0wOS0yNCAxMjoyNzoyMi42 MzY1ODhdIFcgW3NvY2tldC5jOjU4ODpfX3NvY2tldF9yd3ZdIAo+PiAwLWdsdXN0ZXJmczogcmVh ZHYgb24gMTcyLjE2LjAuNToyNDAwNyBmYWlsZWQgKE5vIGRhdGEgYXZhaWxhYmxlKQo+PiBbMjAx NS0wOS0yNCAxMjoyNzoyMi42MzczMDddIEUgW3JwYy1jbG50LmM6MzYyOnNhdmVkX2ZyYW1lc191 bndpbmRdIAo+PiAoLS0+IAo+PiAvdXNyL2xpYjY0L2xpYmdsdXN0ZXJmcy5zby4wKF9nZl9sb2df Y2FsbGluZ2ZuKzB4MWViKVsweDdmNDI3ZmIzMDYzYl0gCj4+ICgtLT4gCj4+IC91c3IvbGliNjQv bGliZ2ZycGMuc28uMChzYXZlZF9mcmFtZXNfdW53aW5kKzB4MWU3KVsweDdmNDI3ZjhmYzFkN10g Cj4+ICgtLT4gCj4+IC91c3IvbGliNjQvbGliZ2ZycGMuc28uMChzYXZlZF9mcmFtZXNfZGVzdHJv eSsweGUpWzB4N2Y0MjdmOGZjMmVlXSAKPj4gKC0tPiAKPj4gL3Vzci9saWI2NC9saWJnZnJwYy5z by4wKHJwY19jbG50X2Nvbm5lY3Rpb25fY2xlYW51cCsweGFiKVsweDdmNDI3ZjhmYzNiYl0gCj4+ ICgtLT4gL3Vzci9saWI2NC9saWJnZnJwYy5zby4wKHJwY19jbG50X25vdGlmeSsweDFjMilbMHg3 ZjQyN2Y4ZmM5ZjJdIAo+PiApKSkpKSAwLWdsdXN0ZXJmczogZm9yY2VkIHVud2luZGluZyBmcmFt ZSB0eXBlKEdsdXN0ZXJGUyBIYW5kc2hha2UpIAo+PiBvcChHRVRTUEVDKDIpKSBjYWxsZWQgYXQg MjAxNS0wOS0yNCAxMjoyNzoyMi42MzYzNDQgKHhpZD0weDEpCj4+IFsyMDE1LTA5LTI0IDEyOjI3 OjIyLjYzNzMzM10gRSAKPj4gW2dsdXN0ZXJmc2QtbWdtdC5jOjE2MDQ6bWdtdF9nZXRzcGVjX2Ni a10gMC1tZ210OiBmYWlsZWQgdG8gZmV0Y2ggCj4+IHZvbHVtZSBmaWxlIChrZXk6L2RhdGEpCj4+ IFsyMDE1LTA5LTI0IDEyOjI3OjIyLjYzNzM2MF0gVyBbZ2x1c3RlcmZzZC5jOjEyMTk6Y2xlYW51 cF9hbmRfZXhpdF0gCj4+ICgtLT4vdXNyL2xpYjY0L2xpYmdmcnBjLnNvLjAoc2F2ZWRfZnJhbWVz X3Vud2luZCsweDIwZSkgCj4+IFsweDdmNDI3ZjhmYzFmZV0gLS0+L3Vzci9zYmluL2dsdXN0ZXJm cyhtZ210X2dldHNwZWNfY2JrKzB4M2YyKSAKPj4gWzB4NDBkNWQyXSAtLT4vdXNyL3NiaW4vZ2x1 c3RlcmZzKGNsZWFudXBfYW5kX2V4aXQrMHg2NSkgWzB4NDA1OWI1XSApIAo+PiAwLTogcmVjZWl2 ZWQgc2lnbnVtICgwKSwgc2h1dHRpbmcgZG93bgo+PiBbMjAxNS0wOS0yNCAxMjoyNzoyMi42Mzcz NzVdIEkgW2Z1c2UtYnJpZGdlLmM6NTU5NTpmaW5pXSAwLWZ1c2U6IAo+PiBVbm1vdW50aW5nICcv cmhldi9kYXRhLWNlbnRlci9tbnQvZ2x1c3RlclNELzE3Mi4xNi4wLjU6X2RhdGEnLgo+PiBbMjAx NS0wOS0yNCAxMjoyNzoyMi42NDYyNDZdIFcgW2dsdXN0ZXJmc2QuYzoxMjE5OmNsZWFudXBfYW5k X2V4aXRdIAo+PiAoLS0+L2xpYjY0L2xpYnB0aHJlYWQuc28uMCgrMHg3YTUxKSBbMHg3ZjQyN2Vj MThhNTFdIAo+PiAtLT4vdXNyL3NiaW4vZ2x1c3RlcmZzKGdsdXN0ZXJmc19zaWd3YWl0ZXIrMHhj ZCkgWzB4NDA1ZTRkXSAKPj4gLS0+L3Vzci9zYmluL2dsdXN0ZXJmcyhjbGVhbnVwX2FuZF9leGl0 KzB4NjUpIFsweDQwNTliNV0gKSAwLTogCj4+IHJlY2VpdmVkIHNpZ251bSAoMTUpLCBzaHV0dGlu ZyBkb3duCj4+IFsyMDE1LTA5LTI0IDEyOjI3OjIyLjY0NjI0Nl0gVyBbZ2x1c3RlcmZzZC5jOjEy 
From amukherj at redhat.com Sun Sep 27 10:26:04 2015
From: Atin Mukherjee
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot mount gluster storage data
Date: Sun, 27 Sep 2015 19:56:02 +0530
Message-ID: <5607FC7A.2010606@redhat.com>
In-Reply-To: 5604FE01.7020309@redhat.com
On 09/25/2015 01:25 PM, Ravishankar N wrote:
> On 09/25/2015 12:32 PM, Jean-Michel FRANCOIS wrote:
>> [...]
>> And nothing server side.
>
> This does look like an op-version issue. Adding Atin for any possible
> help.
> -Ravi

Yes, this does look like an op-version issue: the client's version is
not supported by the server. Which client and server versions of
gluster are you running?

~Atin
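For anyone following along, the quickest way to answer that on each
machine is to query rpm and glusterd directly. A minimal sketch,
assuming the usual EL6 layout (the glusterd.info location can differ
on other setups):

    # installed gluster packages and the version the binary reports
    rpm -qa 'glusterfs*' | sort
    glusterfs --version | head -n 1

    # on a server node, glusterd records the cluster op-version here
    grep operating-version /var/lib/glusterd/glusterd.info

Comparing the server's operating-version with what the new client
expects is what settles an op-version suspicion.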
From jmfrancois at anaxys.com Sun Sep 27 12:26:41 2015
From: Jean-Michel FRANCOIS
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot mount gluster storage data
Date: Sun, 27 Sep 2015 18:26:38 +0200
Message-ID: <560818BE.9010007@anaxys.com>
In-Reply-To: 5607FC7A.2010606@redhat.com

On 27/09/2015 16:26, Atin Mukherjee wrote:
> Yes, this does look like an op-version issue: the client's version is
> not supported by the server. Which client and server versions of
> gluster are you running?
>
> ~Atin
Hi Atin,

The server has version 3.6.3 and the client 3.7.4. Both were provided
by the ovirt-3.4-glusterfs-epel rpm repository, but not at the same
date :-)
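Yum can confirm both where an installed package came from and what the
repositories currently ship. A short sketch, with the repo id taken
from the message above (adjust it if yours differs):

    # which repository the installed package was pulled from
    yum info glusterfs | grep -E 'Version|Release|From repo'

    # every glusterfs build the enabled repositories offer right now
    yum --showduplicates list available 'glusterfs*'

If ovirt-3.4-glusterfs-epel now only lists 3.7.x builds, that would
explain how two hosts installed a year apart ended up on different
versions.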
From jmfrancois at anaxys.com Tue Sep 29 03:44:43 2015
From: Jean-Michel FRANCOIS
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot mount gluster storage data
Date: Tue, 29 Sep 2015 09:44:40 +0200
Message-ID: <560A4168.1050900@anaxys.com>
In-Reply-To: 560818BE.9010007@anaxys.com

Does no one have an idea how to make a 3.7.4 glusterfs client connect
to a 3.6.3 server? It is a bit strange to get this version
incompatibility from the ovirt rpm repository.

I found 3.6.2 rpms in another repository; do you think I could try
installing that version on the current ovirt 3.4 installation?

- Jean-Michel
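Whatever versions end up installed, the failure can be reproduced and
watched outside of oVirt with a manual FUSE mount. A sketch (the mount
point is arbitrary, and the client log name is derived from the mount
path, so it may differ):

    mkdir -p /mnt/glustertest
    mount -t glusterfs 172.16.0.5:/data /mnt/glustertest

    # the fuse client logs under /var/log/glusterfs/, named after the
    # mount point; check it while retrying the mount
    tail -n 50 /var/log/glusterfs/mnt-glustertest.log

A 3.7.4 client should show the same GETSPEC failure against the 3.6.3
server that the vdsm-driven mount logged above.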
From ravishankar at redhat.com Tue Sep 29 04:50:44 2015
From: Ravishankar N
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot mount gluster storage data
Date: Tue, 29 Sep 2015 14:20:38 +0530
Message-ID: <560A50DE.3010200@redhat.com>
In-Reply-To: 560A4168.1050900@anaxys.com

On 09/29/2015 01:14 PM, Jean-Michel FRANCOIS wrote:
> Does no one have an idea how to make a 3.7.4 glusterfs client connect
> to a 3.6.3 server?

Newer clients and older servers are not supported; it is advisable to
run the same version across all machines. If you don't want to upgrade
your production nodes to 3.7, you could perhaps manually install the
3.6 rpms [1] on your new host after uninstalling the existing ones.
[1] http://download.gluster.org/pub/gluster/glusterfs/3.6/

-Ravi
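A rough sketch of that swap on the new host, done before adding it
back to the cluster. The package names below are copied from the
server's list earlier in the thread; the exact file names under [1]
should be checked first, and (as the follow-up below shows) some
dependency juggling may still be needed where other packages require
glusterfs 3.7:

    # drop the 3.7.4 packages pulled in from the ovirt repository
    yum remove 'glusterfs*'

    # download the matching 3.6.3 el6 rpms from [1], then install them
    # in one transaction so the inter-package deps can resolve
    yum localinstall \
        glusterfs-libs-3.6.3-1.el6.x86_64.rpm \
        glusterfs-3.6.3-1.el6.x86_64.rpm \
        glusterfs-cli-3.6.3-1.el6.x86_64.rpm \
        glusterfs-api-3.6.3-1.el6.x86_64.rpm \
        glusterfs-fuse-3.6.3-1.el6.x86_64.rpm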
From jmfrancois at anaxys.com Tue Sep 29 05:48:04 2015
From: Jean-Michel FRANCOIS
To: users at ovirt.org
Subject: Re: [ovirt-users] Cannot mount gluster storage data [SOLVED]
Date: Tue, 29 Sep 2015 11:48:00 +0200
Message-ID: <560A5E50.5060902@anaxys.com>
In-Reply-To: 560A50DE.3010200@redhat.com

Thanks Ravi, I followed your advice and it works. But I had to break
rpm dependencies ...

For information: the current ovirt 3.4 version still supports
glusterfs 3.6.3.

- Jean-Michel
On 29/09/2015 10:50, Ravishankar N wrote:
> If you don't want to upgrade your production nodes to 3.7, you could
> perhaps manually install the 3.6 rpms [1] on your new host after
> uninstalling the existing ones.
>
> [1] http://download.gluster.org/pub/gluster/glusterfs/3.6/
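With the downgrade in place, the ovirt repository will still offer
3.7.x on the next update, so it may be worth pinning the gluster
packages so yum leaves them alone. A minimal sketch using a plain yum
exclude (the exact .repo file name is an assumption; any enabled repo
that ships glusterfs 3.7 needs the same treatment):

    # in /etc/yum.conf under [main], or in the gluster repo's
    # .repo section:
    exclude=glusterfs*

With that line present, yum update skips all glusterfs packages, so a
future gluster upgrade becomes a deliberate step (temporarily lift the
pin with --disableexcludes=main when you choose to do it).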