I just updated it to Version 3.5.4.2-1.el7.centos,
but the problem still remains.
Any idea?
----- Original Message -----
From: "Ramesh Nachimuthu" <rnachimu@redhat.com>
To: suporte@logicworks.pt
Cc: Users@ovirt.org
Sent: Thursday, September 3, 2015 13:11:52
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
On 09/03/2015 05:35 PM, suporte@logicworks.pt wrote:

On the gluster node (server).
It is not a replicated setup, only one gluster node.

# gluster peer status
Number of Peers: 0

Strange.
Thanks

José
----- Original Message -----
From: "Ramesh Nachimuthu" <rnachimu@redhat.com>
To: suporte@logicworks.pt, Users@ovirt.org
Sent: Thursday, September 3, 2015 12:55:31
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
Can you post the output of 'gluster peer status' on the gluster node?

Regards,
Ramesh

On 09/03/2015 05:10 PM, suporte@logicworks.pt wrote:
Hi,

I just installed Version 3.5.3.1-1.el7.centos on CentOS 7.1, no HE.

For storage, I have only one server with glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered f....
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered fi....
Hint: Some lines were ellipsized, use -l to show in full.
Everything was running until I needed to restart the node (host); after that I was not able to make the host active. This is the error message:

Gluster command [<UNKNOWN>] failed on server

I also disabled the JSON protocol, but with no success.
vdsm.log:

Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
supervdsm.log:

MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call wrapper with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper
    res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus
    xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 90, in _execGlusterXml
    raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
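The supervdsm log above shows the exact command that vdsm runs, so the failure can be reproduced by hand on the host. A minimal check, assuming glusterd's default management port of 24007:

# /usr/sbin/gluster --mode=script peer status --xml; echo "rc=$?"
# systemctl is-active glusterd
# ss -tlnp | grep 24007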
This error suggests that 'gluster peer status' is failing. It could be because of SELinux; I am just guessing.

Can you run "/usr/sbin/gluster --mode=script peer status --xml"? Also try disabling SELinux if it is active, and check again.
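A minimal version of that check, assuming a standard CentOS 7 host: getenforce reports the current SELinux mode, setenforce 0 switches it to permissive until the next reboot, and ausearch lists any recent denials:

# getenforce
# setenforce 0
# /usr/sbin/gluster --mode=script peer status --xml
# ausearch -m avc -ts recent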
Regards,
Ramesh
Any idea?

Thanks

José
--
Jose Ferradeira
http://www.logicworks.pt
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users