[ovirt-users] Gluster command [<UNKNOWN>] failed on server

knarra knarra at redhat.com
Thu Sep 3 16:37:30 UTC 2015


On 09/03/2015 07:15 PM, suporte at logicworks.pt wrote:
> Hi, I did a reinstall on the host, and everything came up again.
> Then I put the host in maintenance, rebooted it, selected Confirm 'Host has
> been Rebooted', and activated it, and the error came up again: Gluster command
> [<UNKNOWN>] failed on server
>
> ??
Once the reboot happens and the host comes back up, can you please check
whether glusterd is running and operational?
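
A quick way to check (a minimal sketch; the log path below is the CentOS 7
default for the glusterd log, adjust if your layout differs):

# systemctl status glusterd
# systemctl is-enabled glusterd
# gluster peer status
# tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

If glusterd is not enabled to start on boot, 'systemctl enable glusterd'
will make it start automatically after a reboot.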
>
> ------------------------------------------------------------------------
> *From: *suporte at logicworks.pt
> *To: *"Ramesh Nachimuthu" <rnachimu at redhat.com>
> *Cc: *Users at ovirt.org
> *Sent: *Thursday, September 3, 2015 14:13:55
> *Subject: *Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
>
> I just updated it to version 3.5.4.2-1.el7.centos,
> but the problem still remains.
>
> Any idea?
>
>
> ------------------------------------------------------------------------
> *De: *"Ramesh Nachimuthu" <rnachimu at redhat.com>
> *Para: *suporte at logicworks.pt
> *Cc: *Users at ovirt.org
> *Enviadas: *Quinta-feira, 3 De Setembro de 2015 13:11:52
> *Assunto: *Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
>
>
>
> On 09/03/2015 05:35 PM, suporte at logicworks.pt wrote:
>
>     On the gluster node (server).
>     It is not a replicated setup, only one gluster node.
>
>     # gluster peer status
>     Number of Peers: 0
>
>
> Strange.
>
>     Thanks
>
>     José
>
>     ------------------------------------------------------------------------
>     *De: *"Ramesh Nachimuthu" <rnachimu at redhat.com>
>     *Para: *suporte at logicworks.pt, Users at ovirt.org
>     *Enviadas: *Quinta-feira, 3 De Setembro de 2015 12:55:31
>     *Assunto: *Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on
>     server
>
>     Can you post the output of 'gluster peer status' on the gluster node?
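>
>     For reference, vdsm runs this check non-interactively; the exact
>     invocation appears in the supervdsm.log excerpt further down.
>     Reproducing it the same way can help isolate whether the CLI itself
>     or the connection to glusterd is failing:
>
>     # /usr/sbin/gluster --mode=script peer status --xml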
>
>     Regards,
>     Ramesh
>
>     On 09/03/2015 05:10 PM, suporte at logicworks.pt wrote:
>
>         Hi,
>
>         I just installed version 3.5.3.1-1.el7.centos on CentOS 7.1,
>         no hosted engine (HE).
>
>         For storage, I have only one server with glusterfs:
>         glusterfs-fuse-3.7.3-1.el7.x86_64
>         glusterfs-server-3.7.3-1.el7.x86_64
>         glusterfs-libs-3.7.3-1.el7.x86_64
>         glusterfs-client-xlators-3.7.3-1.el7.x86_64
>         glusterfs-api-3.7.3-1.el7.x86_64
>         glusterfs-3.7.3-1.el7.x86_64
>         glusterfs-cli-3.7.3-1.el7.x86_64
>
>         # service glusterd status
>         Redirecting to /bin/systemctl status glusterd.service
>         glusterd.service - GlusterFS, a clustered file-system server
>            Loaded: loaded (/usr/lib/systemd/system/glusterd.service;
>         enabled)
>            Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
>           Process: 1153 ExecStart=/usr/sbin/glusterd -p
>         /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
>          Main PID: 1387 (glusterd)
>            CGroup: /system.slice/glusterd.service
>                    ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
>                    └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt
>         --volfile-id gv0.gfs...
>
>         Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS,
>         a clustered f....
>         Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS,
>         a clustered fi....
>         Hint: Some lines were ellipsized, use -l to show in full.
>
>
>         Everything was running until I needed to restart the node
>         (host); after that I was not able to make the host active again.
>         This is the error message:
>         Gluster command [<UNKNOWN>] failed on server
>
>
>         I also disabled the JSON protocol, but with no success.
>
>         vdsm.log:
>         Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper)
>         client [192.168.6.200]::call getHardwareInfo with () {}
>         Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper)
>         return getHardwareInfo with {'status': {'message': 'Done',
>         'code': 0}, 'info': {'systemProductName': 'PRIMERGY RX2520
>         M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily':
>         'SERVER', 'systemVersion': 'GS01', 'systemUUID':
>         '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer':
>         'FUJITSU'}}
>         Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper)
>         client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
>         Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper)
>         vdsm exception occured
>         Traceback (most recent call last):
>           File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in
>         wrapper
>             res = f(*args, **kwargs)
>           File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
>             rv = func(*args, **kwargs)
>           File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
>             return {'hosts': self.svdsmProxy.glusterPeerStatus()}
>           File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
>             return callMethod()
>           File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
>             **kwargs)
>           File "<string>", line 2, in glusterPeerStatus
>           File "/usr/lib64/python2.7/multiprocessing/managers.py",
>         line 773, in _callmethod
>             raise convert_to_error(kind, result)
>         GlusterCmdExecFailedException: Command execution failed
>         error: Connection failed. Please check if gluster daemon is
>         operational.
>         return code: 1
>
>
>         supervdsm.log:
>         MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
>         call getHardwareInfo with () {}
>         MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper)
>         return getHardwareInfo with {'systemProductName': 'PRIMERGY
>         RX2520 M1', 'systemSerialNumber': 'YLSK005705',
>         'systemFamily': 'SERVER', 'systemVersion': 'GS01',
>         'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE',
>         'systemManufacturer': 'FUJITSU'}
>         MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper)
>         call wrapper with () {}
>         MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster
>         --mode=script peer status --xml (cwd None)
>         MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED:
>         <err> = ''; <rc> = 1
>         MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper)
>         Error in wrapper
>         Traceback (most recent call last):
>           File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper
>             res = func(*args, **kwargs)
>           File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper
>             return func(*args, **kwargs)
>           File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper
>             return func(*args, **kwargs)
>           File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus
>             xmltree = _execGlusterXml(command)
>           File "/usr/share/vdsm/gluster/cli.py", line 90, in
>         _execGlusterXml
>             raise ge.GlusterCmdExecFailedException(rc, out, err)
>         GlusterCmdExecFailedException: Command execution failed
>         error: Connection failed. Please check if gluster daemon is
>         operational.
>         return code: 1
>
>
>
> This error suggests 'gluster peer status' is failing. It could be
> because of SELinux. I am just guessing.
>
> Can you run *"/usr/sbin/gluster --mode=script peer status --xml"*? Also
> try to disable SELinux if it is active, and check again.
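>
> A minimal sketch for checking and temporarily relaxing SELinux
> ('setenforce 0' lasts only until reboot; edit /etc/selinux/config for a
> persistent change, and ausearch requires the audit package):
>
> # getenforce
> # setenforce 0
> # /usr/sbin/gluster --mode=script peer status --xml
> # ausearch -m avc -ts recent
>
> If the command starts working in permissive mode, the AVC denials in the
> audit log point at what needs fixing.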
>
> Regards,
> Ramesh
>
>
>         Any idea?
>
>         Thanks
>
>         José
>
>
>         -- 
>         ------------------------------------------------------------------------
>         Jose Ferradeira
>         http://www.logicworks.pt
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
