Can you post the output of 'gluster peer status' on the gluster node?
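For example, run this on the node (illustrative only; the peer list will
depend on your setup):

# gluster peer status

If glusterd is not reachable, that command should fail with the same
"Connection failed. Please check if gluster daemon is operational." error
that vdsm is logging.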
Regards,
Ramesh
On 09/03/2015 05:10 PM, suporte@logicworks.pt wrote:
Hi,
I just installed Version 3.5.3.1-1.el7.centos on CentOS 7.1, with no hosted engine (HE).
For storage, I have only one server running GlusterFS:
glusterfs-fuse-3.7.3-1.el7.x86_64
glusterfs-server-3.7.3-1.el7.x86_64
glusterfs-libs-3.7.3-1.el7.x86_64
glusterfs-client-xlators-3.7.3-1.el7.x86_64
glusterfs-api-3.7.3-1.el7.x86_64
glusterfs-3.7.3-1.el7.x86_64
glusterfs-cli-3.7.3-1.el7.x86_64
# service glusterd status
Redirecting to /bin/systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
Active: active (running) since Thu 2015-09-03 11:23:32 WEST; 10min ago
Process: 1153 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid (code=exited, status=0/SUCCESS)
Main PID: 1387 (glusterd)
CGroup: /system.slice/glusterd.service
├─1387 /usr/sbin/glusterd -p /var/run/glusterd.pid
└─2314 /usr/sbin/glusterfsd -s gfs3.acloud.pt --volfile-id gv0.gfs...
Sep 03 11:23:31 gfs3.domain.pt systemd[1]: Starting GlusterFS, a clustered f....
Sep 03 11:23:32 gfs3.domain.pt systemd[1]: Started GlusterFS, a clustered fi....
Hint: Some lines were ellipsized, use -l to show in full.
Everything was running until I needed to restart the node (host); after
that, I was not able to make the host active again. This is the error message:
Gluster command [<UNKNOWN>] failed on server
I also disabled the JSON protocol, but with no success.
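As a further check (just a suggestion, since glusterd reports active
above), the full, un-ellipsized unit status can be pulled directly on the
node:

# systemctl -l status glusterd.service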
vdsm.log:
Thread-14::DEBUG::2015-09-03 11:37:23,131::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call getHardwareInfo with () {}
Thread-14::DEBUG::2015-09-03 11:37:23,132::BindingXMLRPC::1140::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}}
Thread-14::DEBUG::2015-09-03 11:37:23,266::BindingXMLRPC::1133::vds::(wrapper) client [192.168.6.200]::call hostsList with () {} flowID [4acc5233]
Thread-14::ERROR::2015-09-03 11:37:23,279::BindingXMLRPC::1149::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1136, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
supervdsm.log:
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,131::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,132::supervdsmServer::109::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with {'systemProductName': 'PRIMERGY RX2520 M1', 'systemSerialNumber': 'YLSK005705', 'systemFamily': 'SERVER', 'systemVersion': 'GS01', 'systemUUID': '4600EA20-2BFF-B34F-B607-DBF9F6B278CE', 'systemManufacturer': 'FUJITSU'}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,266::supervdsmServer::102::SuperVdsm.ServerCallback::(wrapper) call wrapper with () {}
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,267::utils::739::root::(execCmd) /usr/sbin/gluster --mode=script peer status --xml (cwd None)
MainProcess|Thread-14::DEBUG::2015-09-03 11:37:23,278::utils::759::root::(execCmd) FAILED: <err> = ''; <rc> = 1
MainProcess|Thread-14::ERROR::2015-09-03 11:37:23,279::supervdsmServer::106::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 104, in wrapper
    res = func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer", line 414, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/__init__.py", line 31, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 909, in peerStatus
    xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 90, in _execGlusterXml
    raise ge.GlusterCmdExecFailedException(rc, out, err)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
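For reference, supervdsm.log shows the exact command that fails, so it can
be retried by hand on the node to separate a vdsm problem from a glusterd
problem (my assumption, not yet verified):

# /usr/sbin/gluster --mode=script peer status --xml
# echo $?

A return code of 1 with "Connection failed. Please check if gluster daemon
is operational." would match what vdsm logs above.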
Any idea?
Thanks
José
--
------------------------------------------------------------------------
Jose Ferradeira
http://www.logicworks.pt
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users