[ovirt-users] method "glusterVolumesList" is not supported (Failed with error VDS_NETWORK_ERROR and code 5022)

Jorick Astrego j.astrego at netbulae.eu
Wed Aug 13 09:15:47 UTC 2014


On 08/13/2014 09:47 AM, Humble Devassy Chirammal wrote:
> I think it's worth checking the result of some other verb to isolate 
> this issue. Can you try the following on your host?
>
> #vdsClient -s localhost glusterHostsList
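>
> It can also help to query glusterd directly, to rule out the daemon
> itself (this assumes the gluster CLI is installed on the node):
>
> #gluster peer status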
>
> On Wed, Aug 13, 2014 at 11:45 AM, Sahina Bose <sabose at redhat.com> wrote:
>
>
>     On 08/12/2014 10:59 PM, Jorick Astrego wrote:
>>
>>     On 08/12/2014 05:09 PM, Sahina Bose wrote:
>>>
>>>     On 08/12/2014 08:10 PM, Jorick Astrego wrote:
>>>>
>>>>     On 08/12/2014 03:44 PM, Sahina Bose wrote:
>>>>>     Could you provide the vdsm and supervdsm log from the node?
>>>>>
>>>>     vdsm:
>>>>     http://pastebin.com/edZrvnkr
>>>>     (cut some repetitive loglines out to fit on pastebin)
>>>>
>>>>     Supervdsm:
>>>>
>>>>     http://pastebin.com/V8xTTMKk
>>>
>>>
>>>     I don't see any errors here corresponding to the unsupported
>>>     method "glusterVolumesList".
>>>
>>>     Are these logs from the node "node3.test.nu"?
>>>
>>>     However, this error is unrelated to adding the storage domain.
>>>     Typically, this unsupported method error is returned from vdsm
>>>     when vdsm-gluster is not installed on the node.
>>>     You could try running the following on node3.test.nu:
>>>     #rpm -qa | grep vdsm-gluster
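>>>
>>>     If the package turns out to be missing, installing it and
>>>     restarting vdsmd should register the gluster verbs (roughly,
>>>     assuming yum on EL7):
>>>
>>>         #yum install vdsm-gluster
>>>         #systemctl restart vdsmd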
>>
>>     Found some SELinux warnings:
>>
>>         [78080.160692] type=1400 audit(1407844503.916:35): avc: 
>>         denied  { write } for  pid=2286 comm="glusterd"
>>         name="glusterd.socket" dev="tmpfs" ino=857069
>>         scontext=system_u:system_r:glusterd_t:s0
>>         tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
>>         [78219.465352] type=1404 audit(1407844643.107:36):
>>         enforcing=0 old_enforcing=1 auid=0 ses=106
>>         [78222.942816] type=1107 audit(1407844646.582:37): pid=1
>>         uid=0 auid=4294967295 ses=4294967295
>>         subj=system_u:system_r:init_t:s0 msg='avc:  received
>>         setenforce notice (enforcing=0)
>>          exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=?
>>         terminal=?'
>>         [78226.383697] type=1400 audit(1407844650.020:38): avc: 
>>         denied  { write } for  pid=2348 comm="glusterd"
>>         name="glusterd.socket" dev="tmpfs" ino=857069
>>         scontext=system_u:system_r:glusterd_t:s0
>>         tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
>>         [78226.383712] type=1400 audit(1407844650.020:39): avc: 
>>         denied  { unlink } for  pid=2348 comm="glusterd"
>>         name="glusterd.socket" dev="tmpfs" ino=857069
>>         scontext=system_u:system_r:glusterd_t:s0
>>         tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
>>
>
>
>     Gluster needs SELinux to be in permissive mode. Could you change
>     this and try?
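>
>     On a running host, switching it at runtime should just be:
>
>         #setenforce 0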
>

I had to reinstall the host because another bug broke the network settings.

So after an installation with selinux --permissive, it works. I did 
switch it while testing before, but something must have broken during 
gluster initialization or something like that.

Now it works... Thanks!

    vdsClient -s localhost glusterVolumesList
    {'status': {'code': 0, 'message': 'Done'}, 'volumes': {}}
    Done
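
For anyone else hitting this: switching at runtime alone didn't do it 
for me earlier, so it is probably also worth making the change 
persistent and restarting glusterd afterwards (standard EL7 paths, 
adjust if your layout differs):

    sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
    systemctl restart glusterd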

>
>>
>>>>>     On 08/12/2014 07:08 PM, Jorick Astrego wrote:
>>>>>>     Hi,
>>>>>>
>>>>>>     No, restarting glusterfsd doesn't help. The steps in the link
>>>>>>     you gave me don't help either. The node is a clean install
>>>>>>     and has never been part of a glusterfs cluster. Running
>>>>>>     gluster version 3.5.2:
>>>>>>
>>>>>>         glusterfs-server-3.5.2-1.el7.x86_64
>>>>>>         glusterfs-api-3.5.2-1.el7.x86_64
>>>>>>         glusterfs-fuse-3.5.2-1.el7.x86_64
>>>>>>         glusterfs-cli-3.5.2-1.el7.x86_64
>>>>>>         vdsm-gluster-4.16.1-4.gitb2bf270.el7.noarch
>>>>>>         glusterfs-libs-3.5.2-1.el7.x86_64
>>>>>>         glusterfs-3.5.2-1.el7.x86_64
>>>>>>         glusterfs-rdma-3.5.2-1.el7.x86_64
>>>>>>
>>>>>>     From the log:
>>>>>>
>>>>>>         [2014-08-12 11:57:30.006092] W
>>>>>>         [glusterfsd.c:1095:cleanup_and_exit]
>>>>>>         (-->/lib64/libc.so.6(clone+0x6d) [0x7fac29bc23dd]
>>>>>>         (-->/lib64/libpthread.so.0(+0x7df3) [0x7fac2a279df3]
>>>>>>         (-->/usr/sbin/glusterd(glusterfs_sigwaiter+0xe5)
>>>>>>         [0x7fac2b8471b5]))) 0-: received signum (15), shutting down
>>>>>>         [2014-08-12 11:57:30.014919] I [glusterfsd.c:1959:main]
>>>>>>         0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd
>>>>>>         version 3.5.2 (/usr/sbin/glusterd -p /run/glusterd.pid)
>>>>>>         [2014-08-12 11:57:30.018089] I [glusterd.c:1122:init]
>>>>>>         0-management: Using /var/lib/glusterd as working directory
>>>>>>         [2014-08-12 11:57:30.020315] I
>>>>>>         [socket.c:3561:socket_init] 0-socket.management: SSL
>>>>>>         support is NOT enabled
>>>>>>         [2014-08-12 11:57:30.020331] I
>>>>>>         [socket.c:3576:socket_init] 0-socket.management: using
>>>>>>         system polling thread
>>>>>>         [2014-08-12 11:57:30.021533] W
>>>>>>         [rdma.c:4194:__gf_rdma_ctx_create] 0-rpc-transport/rdma:
>>>>>>         rdma_cm event channel creation failed (No such device)
>>>>>>         [2014-08-12 11:57:30.021548] E [rdma.c:4482:init]
>>>>>>         0-rdma.management: Failed to initialize IB Device
>>>>>>         [2014-08-12 11:57:30.021555] E
>>>>>>         [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport:
>>>>>>         'rdma' initialization failed
>>>>>>         [2014-08-12 11:57:30.021610] W
>>>>>>         [rpcsvc.c:1535:rpcsvc_transport_create] 0-rpc-service:
>>>>>>         cannot create listener, initing the transport failed
>>>>>>         [2014-08-12 11:57:30.021669] I
>>>>>>         [socket.c:3561:socket_init] 0-socket.management: SSL
>>>>>>         support is NOT enabled
>>>>>>         [2014-08-12 11:57:30.021679] I
>>>>>>         [socket.c:3576:socket_init] 0-socket.management: using
>>>>>>         system polling thread
>>>>>>         [2014-08-12 11:57:30.022603] I
>>>>>>         [glusterd.c:367:glusterd_check_gsync_present] 0-glusterd:
>>>>>>         geo-replication module not installed in the system
>>>>>>         [2014-08-12 11:57:30.023046] E
>>>>>>         [store.c:408:gf_store_handle_retrieve] 0-: Path
>>>>>>         corresponding to /var/lib/glusterd/glusterd.info,
>>>>>>         returned error: (No such file or directory)
>>>>>>         [2014-08-12 11:57:30.023075] E
>>>>>>         [store.c:408:gf_store_handle_retrieve] 0-: Path
>>>>>>         corresponding to /var/lib/glusterd/glusterd.info,
>>>>>>         returned error: (No such file or directory)
>>>>>>         [2014-08-12 11:57:30.023090] I
>>>>>>         [glusterd-store.c:1441:glusterd_restore_op_version]
>>>>>>         0-management: Detected new install. Setting op-version to
>>>>>>         maximum : 30501
>>>>>>         Final graph:
>>>>>>         +------------------------------------------------------------------------------+
>>>>>>           1: volume management
>>>>>>           2:     type mgmt/glusterd
>>>>>>           3:     option rpc-auth.auth-glusterfs on
>>>>>>           4:     option rpc-auth.auth-unix on
>>>>>>           5:     option rpc-auth.auth-null on
>>>>>>           6:     option transport.socket.listen-backlog 128
>>>>>>           7:     option transport.socket.read-fail-log off
>>>>>>           8:     option transport.socket.keepalive-interval 2
>>>>>>           9:     option transport.socket.keepalive-time 10
>>>>>>          10:     option transport-type rdma
>>>>>>          11:     option working-directory /var/lib/glusterd
>>>>>>          12: end-volume
>>>>>>          13:
>>>>>>         +------------------------------------------------------------------------------+
>>>>>>
>>>>>>     Jorick
>>>>>>
>>>>>>     On 08/12/2014 02:56 PM, Maor Lipchuk wrote:
>>>>>>>     Hi Jorick,
>>>>>>>     which version of GlusterFS are you using?
>>>>>>>
>>>>>>>     Try to restart glusterd on each of the bricks, then try to follow this link:
>>>>>>>     http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
>>>>>>>
>>>>>>>     Please tell me if this helps
>>>>>>>
>>>>>>>     Regards,
>>>>>>>     Maor
>>>>>>>
>>>>>>>     ----- Original Message -----
>>>>>>>     From: "Jorick Astrego" <j.astrego at netbulae.eu>
>>>>>>>     To: users at ovirt.org
>>>>>>>     Sent: Tuesday, August 12, 2014 3:16:06 PM
>>>>>>>     Subject: [ovirt-users] method "glusterVolumesList" is not supported (Failed with error VDS_NETWORK_ERROR and code 5022)
>>>>>>>
>>>>>>>     Hi,
>>>>>>>
>>>>>>>     I'm trying to test glusterfs on a couple of CentOS 7 oVirt nodes with oVirt 3.5 RC1.
>>>>>>>
>>>>>>>     I've enabled the glusterfs service for the cluster, created an xfs data partition with a mount point, installed the "vdsm-gluster" rpm and started glusterfsd. I also cleared the firewall rules.
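>>>>>>>
>>>>>>>     Roughly, this was (the device and mount point below are placeholders, not the exact ones I used):
>>>>>>>
>>>>>>>         mkfs.xfs /dev/sdb1
>>>>>>>         mkdir -p /gluster/data && mount /dev/sdb1 /gluster/data
>>>>>>>         yum install vdsm-gluster
>>>>>>>         systemctl start glusterfsd
>>>>>>>         iptables -F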
>>>>>>>
>>>>>>>     When I try to add the storage domain, I get the following error
>>>>>>>
>>>>>>>
>>>>>>>     2014-08-12 14:07:07,346 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-65) START, GlusterVolumesListVDSCommand(HostName = node3.test.nu, HostId = 5bff5a65-6d3c-46b4-aa7c-d87ab25ccb3a), log id: 5d2ee913
>>>>>>>     2014-08-12 14:07:07,350 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-65) Command GlusterVolumesListVDSCommand(HostName = node3.test.nu, HostId = 5bff5a65-6d3c-46b4-aa7c-d87ab25ccb3a) execution failed. Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: <type 'exceptions.Exception'>:method "glusterVolumesList" is not supported
>>>>>>>     2014-08-12 14:07:07,350 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-65) FINISH, GlusterVolumesListVDSCommand, log id: 5d2ee913
>>>>>>>     2014-08-12 14:07:07,351 ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (DefaultQuartzScheduler_Worker-65) Error while refreshing Gluster lightweight data of cluster Default!: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: org.apache.xmlrpc.XmlRpcException: <type 'exceptions.Exception'>:method "glusterVolumesList" is not supported (Failed with error VDS_NETWORK_ERROR and code 5022)
>>>>>>>     at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:116) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.gluster.GlusterJob.runVdsCommand(GlusterJob.java:65) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.fetchVolumes(GlusterSyncJob.java:406) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.fetchVolumes(GlusterSyncJob.java:392) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshVolumeData(GlusterSyncJob.java:363) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshClusterData(GlusterSyncJob.java:108) [bll.jar:]
>>>>>>>     at org.ovirt.engine.core.bll.gluster.GlusterSyncJob.refreshLightWeightData(GlusterSyncJob.java:87) [bll.jar:]
>>>>>>>     at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source) [:1.7.0_65]
>>>>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_65]
>>>>>>>     at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_65]
>>>>>>>     at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [scheduler.jar:]
>>>>>>>     at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
>>>>>>>     at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]
>>>>>>>
>>>>>>>     Anything else I need to configure?
>>>>>>>
>>>>>>>     Kind regards,
>>>>>>>     Jorick Astrego
>>>>>>>
>>>>>>>     _______________________________________________
>>>>>>>     Users mailing list
>>>>>>>     Users at ovirt.org
>>>>>>>     http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>
>>>>
>>>
>>
>


