
On 02/03/2014 12:06 PM, Itamar Heim wrote:
On 02/03/2014 07:35 AM, Sahina Bose wrote:
On 02/03/2014 05:02 AM, Itamar Heim wrote:
On 02/02/2014 08:01 PM, Jon Archer wrote:
Hi All,
Constantly seeing this message in the logs:

vdsm vds ERROR vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 952, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 306, in tasksList
    status = self.svdsmProxy.glusterTasksList(taskIds)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterTasksList
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: tasks is not a valid status option
Usage: volume status [all | <VOLNAME> [nfs|shd|<BRICK>]] [detail|clients|mem|inode|fd|callpool]
return code: 1
Looks like an option that isn't recognised by the "gluster volume status" command.
Any ideas how to resolve this? It's not causing any problems, but I would like to stop it.
Cheers
Jon
Sahina - IIRC, there is a patch removing that noise?
Yes, there was a patch removing this for clusters with compatibility version < 3.4.
For 3.4 gluster clusters we need gluster >= 3.5 for the gluster async task feature; that version adds support for "gluster volume status tasks".
Was this backported to stable 3.3?
Unfortunately, no - I missed this. I have submitted a patch now: http://gerrit.ovirt.org/23982