Hello!
1 engine - not hosted, FC19 + oVirt 3.5;
3 nodes - F20 and oVirt 3.5 patternfly (due to high vdsm memory usage). All
three nodes also serve as GlusterFS servers - 3 replicated bricks.
After putting the DC in maintenance - all VMs shut down, storage domains in
maintenance (ISO, Data, Export), gluster volume stopped and finally hosts in
maintenance - I ran "yum update" on all nodes, rebooted them and clicked
"Confirm host has been rebooted", but trying to activate them raises "Gluster
command [<UNKNOWN>] failed on server....." and the hosts go Non Operational.
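For reference, the node-side part of the sequence above in command form (the
volume name "data" is hypothetical - the VM shutdown, storage-domain and host
maintenance steps are done in the engine UI):

```shell
# Stop the replicated gluster volume before updating (volume name assumed):
gluster --mode=script volume stop data
# Update and reboot each node in turn:
yum update -y
reboot
```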
The vdsm log shows:
Thread-15::ERROR::2014-12-18
10:17:53,439::__init__::493::jsonrpc.JsonRpcServer::(_serveRequest)
Internal server error
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 488,
in _serveRequest
res = method(**params)
File "/usr/share/vdsm/rpc/Bridge.py", line 264, in _dynamicMethod
result = fn(*methodArgs)
File "/usr/share/vdsm/gluster/apiwrapper.py", line 79, in list
return self._gluster.hostsList()
File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
rv = func(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
return {'hosts': self.svdsmProxy.glusterPeerStatus()}
File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
**kwargs)
File "<string>", line 2, in glusterPeerStatus
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
_callmethod
raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
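The "Connection failed. Please check if gluster daemon is operational" line
suggests glusterd itself did not come back after the reboot. A quick check on
each node before re-activating it in the engine (this is a diagnostic sketch,
not a confirmed fix):

```shell
# Did the gluster daemon survive the reboot?
systemctl status glusterd
# If it is inactive, start it now and enable it for future boots:
systemctl start glusterd
systemctl enable glusterd
# Confirm the peers see each other again before activating the host:
gluster peer status
```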
*NB*
*Reinstalling each host (after moving it from Non Operational to Maintenance)
activates it, and I am then able to start the gluster volume, activate the
storage domains and so on.......*
<https://bugzilla.redhat.com/show_bug.cgi?id=1142647>