[Users] oVirt 3.2 on CentOS with Gluster 3.3

Rob Zwissler rob at zwissler.org
Tue Mar 5 18:08:48 UTC 2013


On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg <danken at redhat.com> wrote:
> Rob,
>
> It seems that a bug in vdsm code is hiding the real issue.
> Could you do a
>
>     sed -i 's/ParseError/ElementTree.ParseError/' /usr/share/vdsm/gluster/cli.py
>
> restart vdsmd, and retry?
>
> Bala, would you send a patch fixing the ParseError issue (and adding a
> unit test that would have caught it in time)?
>
>
> Regards,
> Dan.
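
(A quick, purely illustrative check, not part of Dan's instructions:
ParseError was only added to ElementTree in Python 2.7, so on a Python
2.6 host the attribute that this substitution references does not
exist.)

    # Prints False on Python 2.6 (no ParseError attribute),
    # True on Python 2.7 and later.
    from xml.etree import ElementTree
    print hasattr(ElementTree, 'ParseError')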

Hi Dan, thanks for the quick response. I applied that change and
restarted vdsmd; here's what I now get in vdsm.log:

MainProcess|Thread-51::DEBUG::2013-03-05
10:03:40,723::misc::84::Storage.Misc.excCmd::(<lambda>)
'/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
Thread-52::DEBUG::2013-03-05
10:03:40,731::task::568::TaskManager.Task::(_updateState)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state init ->
state preparing
Thread-52::INFO::2013-03-05
10:03:40,732::logUtils::41::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-52::INFO::2013-03-05
10:03:40,732::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'4af726ea-e502-4e79-a47c-6c8558ca96ad':
{'delay': '0.00584101676941', 'lastCheck': '0.2', 'code': 0, 'valid':
True}, 'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay':
'0.0503160953522', 'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-52::DEBUG::2013-03-05
10:03:40,732::task::1151::TaskManager.Task::(prepare)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::finished:
{'4af726ea-e502-4e79-a47c-6c8558ca96ad': {'delay': '0.00584101676941',
'lastCheck': '0.2', 'code': 0, 'valid': True},
'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay': '0.0503160953522',
'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-52::DEBUG::2013-03-05
10:03:40,732::task::568::TaskManager.Task::(_updateState)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state
preparing -> state finished
Thread-52::DEBUG::2013-03-05
10:03:40,732::resourceManager::830::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-52::DEBUG::2013-03-05
10:03:40,733::resourceManager::864::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-52::DEBUG::2013-03-05
10:03:40,733::task::957::TaskManager.Task::(_decref)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::ref 0 aborting False
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda latency not
available
GuestMonitor-xor-q-nis02::DEBUG::2013-03-05
10:03:40,750::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not
available
GuestMonitor-xor-q-nis02::DEBUG::2013-03-05
10:03:40,750::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda stats not
available
GuestMonitor-xor-q-nis02::DEBUG::2013-03-05
10:03:40,750::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc latency not
available
GuestMonitor-xor-q-nis02::DEBUG::2013-03-05
10:03:40,750::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda latency not
available
MainProcess|Thread-51::DEBUG::2013-03-05
10:03:40,780::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err>
= ''; <rc> = 0
MainProcess|Thread-51::ERROR::2013-03-05
10:03:40,781::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo
    except (etree.ElementTree.ParseError, AttributeError, ValueError):
AttributeError: class ElementTree has no attribute 'ParseError'
GuestMonitor-pmgd-web01::DEBUG::2013-03-05
10:03:40,783::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc stats not
available
GuestMonitor-pmgd-web01::DEBUG::2013-03-05
10:03:40,783::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda stats not
available
GuestMonitor-pmgd-web01::DEBUG::2013-03-05
10:03:40,784::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc latency not
available
GuestMonitor-pmgd-web01::DEBUG::2013-03-05
10:03:40,784::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda latency not
available
GuestMonitor-xor-q-centreon01::DEBUG::2013-03-05
10:03:40,789::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc stats not
available
GuestMonitor-xor-d-ns01-2::DEBUG::2013-03-05
10:03:40,790::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc stats not
available
Thread-51::ERROR::2013-03-05
10:03:40,782::BindingXMLRPC::932::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
    return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
  File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeInfo
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740,
in _callmethod
    raise convert_to_error(kind, result)
AttributeError: class ElementTree has no attribute 'ParseError'
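
The AttributeError is consistent with this host running Python 2.6
(note the /usr/lib64/python2.6 path above): ElementTree only gained a
ParseError class in Python 2.7, so the patched except clause in
gluster/cli.py has nothing to reference here. Purely as an illustration
of a 2.6-compatible way to write that handler (my own sketch, not the
actual vdsm patch; the parseVolumeInfoXml helper name is made up):

    # Sketch only -- not vdsm's code. On Python 2.7+ ElementTree has
    # ParseError; on 2.6 it does not, and the pure-Python parser raises
    # xml.parsers.expat.ExpatError on malformed XML instead
    # (cElementTree raised SyntaxError).
    from xml.etree import ElementTree as etree

    try:
        _parseErrors = (etree.ParseError,)              # Python >= 2.7
    except AttributeError:
        from xml.parsers import expat
        _parseErrors = (expat.ExpatError, SyntaxError)  # Python 2.6

    def parseVolumeInfoXml(xmlText):
        # Parse the output of 'gluster volume info --xml', mirroring the
        # except tuple seen in the traceback above.
        try:
            return etree.fromstring(xmlText)
        except _parseErrors + (AttributeError, ValueError):
            raise ValueError("cannot parse gluster volume info XML")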


And from the engine.log:


2013-03-05 10:08:00,647 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-14) START,
GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId =
b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 19971e4
2013-03-05 10:08:00,790 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-14) Failed in GlusterVolumesListVDS method
2013-03-05 10:08:00,790 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(QuartzScheduler_Worker-14) Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
2013-03-05 10:08:00,792 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase]
(QuartzScheduler_Worker-14) Command GlusterVolumesListVDS execution
failed. Exception: VDSErrorException: VDSGenericException:
VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected
exception
2013-03-05 10:08:00,793 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-14) FINISH, GlusterVolumesListVDSCommand, log
id: 19971e4
2013-03-05 10:08:00,794 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterManager]
(QuartzScheduler_Worker-14) Error while refreshing Gluster lightweight
data of cluster qa-cluster1!:
org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
GlusterVolumesListVDS, error = Unexpected exception
	at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168)
[engine-bll.jar:]
	at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33)
[engine-bll.jar:]
	at org.ovirt.engine.core.bll.gluster.GlusterManager.runVdsCommand(GlusterManager.java:258)
[engine-bll.jar:]
	at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:454)
[engine-bll.jar:]
	at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:440)
[engine-bll.jar:]
	at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshVolumeData(GlusterManager.java:411)
[engine-bll.jar:]
	at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshClusterData(GlusterManager.java:191)
[engine-bll.jar:]
	at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshLightWeightData(GlusterManager.java:170)
[engine-bll.jar:]
	at sun.reflect.GeneratedMethodAccessor150.invoke(Unknown Source)
[:1.7.0_09-icedtea]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_09-icedtea]
	at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_09-icedtea]
	at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60)
[engine-scheduler.jar:]
	at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz-2.1.2.jar:]
	at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
[quartz-2.1.2.jar:]


Regards,

Rob


