On the gluster node (server):
It is not a replicated setup, only one gluster node.
# gluster peer status
Number of Peers: 0
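With a single node there are no peers to list, so zero is expected; a more telling check here is probably the volume itself (standard gluster CLI):

# volume definition and brick layout
gluster volume info
# per-brick daemon and port state
gluster volume status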
Thanks
José
----- Original Message -----
From: "Ramesh Nachimuthu" <rnachimu(a)redhat.com>
To: suporte(a)logicworks.pt, Users(a)ovirt.org
Sent: Thursday, September 3, 2015 12:55:31
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server
Can you post the output of 'gluster peer status' on the gluster node?
Regards,
Ramesh

On 09/03/2015 05:10 PM, suporte(a)logicworks.pt wrote:
Hi,
I just installed Version 3.5.3.1-1.el7.centos on CentOS 7.1, no HE.
For storage, I have only one server with glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64
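(For reference, a list like the one above can be regenerated with the stock RPM query below; nothing oVirt-specific is assumed.)

# list all installed glusterfs packages with versions
rpm -qa 'glusterfs*'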
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered f....
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered fi....
Hint: Some lines were ellipsized, use -l to show in full.
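To see the full, un-ellipsized lines and the daemon's own messages around startup (assuming the systemd journal on CentOS 7):

# full status output, no ellipsizing
systemctl -l status glusterd
# glusterd log entries around the failure window
journalctl -u glusterd --since '2015-09-03 11:20'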
Everything was running until I needed to restart the node (host); after that I was not able to make the host active. This is the error message:
Gluster command [<UNKNOWN>] failed on server
I also disabled the JSON protocol, but with no success.
vdsm.log:
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
supervdsm.log:
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call wrapper with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper
    res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus
    xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 90, in _execGlusterXml
    raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
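The supervdsm log shows the exact call that fails, so it can be reproduced by hand outside vdsm; a minimal check, assuming glusterd's standard management port 24007 and the ss tool from iproute on CentOS 7:

# the same call supervdsm makes; rc=1 means the CLI cannot reach glusterd
/usr/sbin/gluster --mode=script peer status --xml; echo "rc=$?"
# check whether glusterd is listening on its management port
ss -tlnp | grep 24007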
Any idea?
Thanks
José
--
Jose Ferradeira
http://www.logicworks.pt
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users