Hi,

I just installed Version 3.5.3.1-1.el7.centos on CentOS 7.1, no HE.

For storage I have only one server, with glusterfs:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
  Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
 Main PID: 1387 (glusterd)
   CGroup: /system.slice/glusterd.service
           ├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
           └─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs...

Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered f....
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered fi....
Hint: Some lines were ellipsized, use -l to show in full.
Everything was running until I needed to restart the node (host); after that I was not able to make the host active again. This is the error message:

Gluster command [<UNKNOWN>] failed on server

I also disabled the JSON protocol, but with no success.
vdsm.log:

Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
supervdsm.log:

MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call wrapper with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper
    res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus
    xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 90, in _execGlusterXml
    raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
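The supervdsm log shows the exact CLI call that fails (`/usr/sbin/gluster --mode=script peer status --xml`, rc 1), so it can be reproduced by hand on the host, outside of vdsm. A minimal diagnostic sketch, assuming the gluster CLI is on PATH:

```shell
# Re-run the exact command that supervdsm executes and that fails with rc=1.
if command -v gluster >/dev/null 2>&1; then
    gluster --mode=script peer status --xml
    echo "gluster exit code: $?"
    # If this also prints "Connection failed", the CLI cannot reach glusterd,
    # so check the daemon itself:
    systemctl status glusterd.service
else
    echo "gluster CLI not installed on this machine"
fi
```

If the manual run fails the same way, the problem is between the gluster CLI and glusterd, not in vdsm itself.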
Any idea?

Thanks

José

-- 
Jose Ferradeira
http://www.logicworks.pt