[Users] oVirt 3.2 on CentOS with Gluster 3.3

Running CentOS 6.3 with the following VDSM packages from dre's repo: vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch vdsm-gluster-4.10.3-0.30.19.el6.noarch vdsm-python-4.10.3-0.30.19.el6.x86_64 vdsm-4.10.3-0.30.19.el6.x86_64 vdsm-cli-4.10.3-0.30.19.el6.noarch And the following gluster packages from the gluster repo: glusterfs-3.3.1-1.el6.x86_64 glusterfs-fuse-3.3.1-1.el6.x86_64 glusterfs-vim-3.2.7-1.el6.x86_64 glusterfs-server-3.3.1-1.el6.x86_64 I get the following errors in vdsm.log: Thread-1483::DEBUG::2013-03-04 16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client [10.33.9.73]::call volumesList with () {} MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,429::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume info --xml' (cwd None) MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,480::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainProcess|Thread-1483::ERROR::2013-03-04 16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper) Error in wrapper Traceback (most recent call last): File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo except (etree.ParseError, AttributeError, ValueError): AttributeError: 'module' object has no attribute 'ParseError' Thread-1483::ERROR::2013-03-04 16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: 'module' object has no attribute 'ParseError' Which corresponds to the following in the engine.log: 2013-03-04 16:34:46,231 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-86) START, GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId = b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 987aef3 2013-03-04 16:34:46,365 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-86) Failed in GlusterVolumesListVDS method 2013-03-04 16:34:46,366 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-86) Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception 2013-03-04 16:34:46,367 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (QuartzScheduler_Worker-86) Command GlusterVolumesListVDS execution failed. 
Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception 2013-03-04 16:34:46,369 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-86) FINISH, GlusterVolumesListVDSCommand, log id: 987aef3 2013-03-04 16:34:46,370 ERROR [org.ovirt.engine.core.bll.gluster.GlusterManager] (QuartzScheduler_Worker-86) Error while refreshing Gluster lightweight data of cluster qa-cluster1!: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168) [engine-bll.jar:] at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.runVdsCommand(GlusterManager.java:258) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:454) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:440) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshVolumeData(GlusterManager.java:411) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshClusterData(GlusterManager.java:191) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshLightWeightData(GlusterManager.java:170) [engine-bll.jar:] at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source) [:1.7.0_09-icedtea] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_09-icedtea] at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_09-icedtea] at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [engine-scheduler.jar:] at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz-2.1.2.jar:] at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz-2.1.2.jar:] And, long story short, the gluster integration with oVirt does not work. As per Vijay Bellur's comments at http://list-archives.org/2012/12/27/users-ovirt-org/continuing-my-ovirt-3-2-... this is due to a difference in the XML formatting output by gluster vs. what is expected by VDSM, and is fixed in Gluster 3.4, which is currently in alpha pre-release. So my question is, was oVirt v3.2 released with a dependency on a version of Gluster that is in alpha, or is there another workaround or fix for this? Rob

On Mon, Mar 04, 2013 at 04:38:50PM -0800, Rob Zwissler wrote:
Running CentOS 6.3 with the following VDSM packages from dre's repo:
vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch vdsm-gluster-4.10.3-0.30.19.el6.noarch vdsm-python-4.10.3-0.30.19.el6.x86_64 vdsm-4.10.3-0.30.19.el6.x86_64 vdsm-cli-4.10.3-0.30.19.el6.noarch
And the following gluster packages from the gluster repo:
glusterfs-3.3.1-1.el6.x86_64 glusterfs-fuse-3.3.1-1.el6.x86_64 glusterfs-vim-3.2.7-1.el6.x86_64 glusterfs-server-3.3.1-1.el6.x86_64
I get the following errors in vdsm.log:
Thread-1483::DEBUG::2013-03-04 16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client [10.33.9.73]::call volumesList with () {} MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,429::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume info --xml' (cwd None) MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,480::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainProcess|Thread-1483::ERROR::2013-03-04 16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper) Error in wrapper Traceback (most recent call last): File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo except (etree.ParseError, AttributeError, ValueError): AttributeError: 'module' object has no attribute 'ParseError' Thread-1483::ERROR::2013-03-04 16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: 'module' object has no attribute 'ParseError'
Rob,
It seems that a bug in vdsm code is hiding the real issue. Could you do a
    sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
restart vdsmd, and retry?
Bala, would you send a patch fixing the ParseError issue (and adding a unit test that would have caught it on time)?
Regards, Dan.
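(For illustration only: a minimal sketch of the kind of unit test being asked for here. It does not import vdsm; it just mirrors the parse-and-catch pattern from gluster/cli.py with illustrative names, so merely reaching the except clause on Python 2.6 reproduces the AttributeError seen in vdsm.log and makes the test fail.)

    import unittest
    import xml.etree.cElementTree as etree


    def _parse_volume_info(xml_text):
        # Mirrors the pattern used in /usr/share/vdsm/gluster/cli.py::volumeInfo.
        try:
            return etree.fromstring(xml_text)
        except (etree.ParseError, AttributeError, ValueError):
            return None


    class GlusterCliXmlTest(unittest.TestCase):
        def test_malformed_volume_info_xml(self):
            # Python 2.7: the malformed input is caught and None is returned.
            # Python 2.6: evaluating etree.ParseError in the except clause
            # raises AttributeError, so this test errors out.
            self.assertTrue(_parse_volume_info('<volInfo><unclosed>') is None)


    if __name__ == '__main__':
        unittest.main()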

On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg <danken@redhat.com> wrote:
Rob,
It seems that a bug in vdsm code is hiding the real issue. Could you do a
sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
restart vdsmd, and retry?
Bala, would you send a patch fixing the ParseError issue (and adding a unit test that would have caught it on time)?
Regards, Dan.
Hi Dan, thanks for the quick response. I did that, and here's what I get now from the vdsm.log: MainProcess|Thread-51::DEBUG::2013-03-05 10:03:40,723::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume info --xml' (cwd None) Thread-52::DEBUG::2013-03-05 10:03:40,731::task::568::TaskManager.Task::(_updateState) Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state init -> state preparing Thread-52::INFO::2013-03-05 10:03:40,732::logUtils::41::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-52::INFO::2013-03-05 10:03:40,732::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'4af726ea-e502-4e79-a47c-6c8558ca96ad': {'delay': '0.00584101676941', 'lastCheck': '0.2', 'code': 0, 'valid': True}, 'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay': '0.0503160953522', 'lastCheck': '0.2', 'code': 0, 'valid': True}} Thread-52::DEBUG::2013-03-05 10:03:40,732::task::1151::TaskManager.Task::(prepare) Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::finished: {'4af726ea-e502-4e79-a47c-6c8558ca96ad': {'delay': '0.00584101676941', 'lastCheck': '0.2', 'code': 0, 'valid': True}, 'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay': '0.0503160953522', 'lastCheck': '0.2', 'code': 0, 'valid': True}} Thread-52::DEBUG::2013-03-05 10:03:40,732::task::568::TaskManager.Task::(_updateState) Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state preparing -> state finished Thread-52::DEBUG::2013-03-05 10:03:40,732::resourceManager::830::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-52::DEBUG::2013-03-05 10:03:40,733::resourceManager::864::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-52::DEBUG::2013-03-05 10:03:40,733::task::957::TaskManager.Task::(_decref) Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::ref 0 aborting False Thread-53::DEBUG::2013-03-05 10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not available Thread-53::DEBUG::2013-03-05 10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda stats not available Thread-53::DEBUG::2013-03-05 10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc latency not available Thread-53::DEBUG::2013-03-05 10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda latency not available Thread-53::DEBUG::2013-03-05 10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc stats not available Thread-53::DEBUG::2013-03-05 10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda stats not available Thread-53::DEBUG::2013-03-05 10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc latency not available Thread-53::DEBUG::2013-03-05 10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda latency not available Thread-53::DEBUG::2013-03-05 10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc stats not available Thread-53::DEBUG::2013-03-05 10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda stats not available Thread-53::DEBUG::2013-03-05 10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency) 
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc latency not available Thread-53::DEBUG::2013-03-05 10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda latency not available Thread-53::DEBUG::2013-03-05 10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc stats not available Thread-53::DEBUG::2013-03-05 10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda stats not available Thread-53::DEBUG::2013-03-05 10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc latency not available Thread-53::DEBUG::2013-03-05 10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda latency not available GuestMonitor-xor-q-nis02::DEBUG::2013-03-05 10:03:40,750::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not available GuestMonitor-xor-q-nis02::DEBUG::2013-03-05 10:03:40,750::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda stats not available GuestMonitor-xor-q-nis02::DEBUG::2013-03-05 10:03:40,750::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc latency not available GuestMonitor-xor-q-nis02::DEBUG::2013-03-05 10:03:40,750::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda latency not available MainProcess|Thread-51::DEBUG::2013-03-05 10:03:40,780::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainProcess|Thread-51::ERROR::2013-03-05 10:03:40,781::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper) Error in wrapper Traceback (most recent call last): File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo except (etree.ElementTree.ParseError, AttributeError, ValueError): AttributeError: class ElementTree has no attribute 'ParseError' GuestMonitor-pmgd-web01::DEBUG::2013-03-05 10:03:40,783::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc stats not available GuestMonitor-pmgd-web01::DEBUG::2013-03-05 10:03:40,783::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda stats not available GuestMonitor-pmgd-web01::DEBUG::2013-03-05 10:03:40,784::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc latency not available GuestMonitor-pmgd-web01::DEBUG::2013-03-05 10:03:40,784::libvirtvm::308::vm.Vm::(_getDiskLatency) vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda latency not available GuestMonitor-xor-q-centreon01::DEBUG::2013-03-05 10:03:40,789::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc stats not available GuestMonitor-xor-d-ns01-2::DEBUG::2013-03-05 10:03:40,790::libvirtvm::269::vm.Vm::(_getDiskStats) vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc stats not available Thread-51::ERROR::2013-03-05 10:03:40,782::BindingXMLRPC::932::vds::(wrapper) unexpected error Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = 
f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: class ElementTree has no attribute 'ParseError' And from the engine.log: 2013-03-05 10:08:00,647 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-14) START, GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId = b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 19971e4 2013-03-05 10:08:00,790 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-14) Failed in GlusterVolumesListVDS method 2013-03-05 10:08:00,790 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-14) Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception 2013-03-05 10:08:00,792 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (QuartzScheduler_Worker-14) Command GlusterVolumesListVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception 2013-03-05 10:08:00,793 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-14) FINISH, GlusterVolumesListVDSCommand, log id: 19971e4 2013-03-05 10:08:00,794 ERROR [org.ovirt.engine.core.bll.gluster.GlusterManager] (QuartzScheduler_Worker-14) Error while refreshing Gluster lightweight data of cluster qa-cluster1!: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168) [engine-bll.jar:] at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.runVdsCommand(GlusterManager.java:258) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:454) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:440) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshVolumeData(GlusterManager.java:411) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshClusterData(GlusterManager.java:191) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshLightWeightData(GlusterManager.java:170) [engine-bll.jar:] at sun.reflect.GeneratedMethodAccessor150.invoke(Unknown Source) [:1.7.0_09-icedtea] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_09-icedtea] at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_09-icedtea] at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [engine-scheduler.jar:] at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz-2.1.2.jar:] at 
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz-2.1.2.jar:] regards Rob

On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:
On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg <danken@redhat.com> wrote:
Rob,
It seems that a bug in vdsm code is hiding the real issue. Could you do a
sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
restart vdsmd, and retry?
Bala, would you send a patch fixing the ParseError issue (and adding a unit test that would have caught it on time)?
Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: class ElementTree has no attribute 'ParseError'
My guess has led us nowhere, since etree.ParseError is simply missing from python 2.6. It is to be seen only in python 2.7!
That's sad, but something *else* is problematic, since we got to this error-handling code.
Could you make another try and temporarily replace ParseError with Exception?
    sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py
(this sed is relative to the original code).
Dan.

On 03/06/2013 01:20 PM, Dan Kenigsberg wrote:
On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:
On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg <danken@redhat.com> wrote:
Rob,
It seems that a bug in vdsm code is hiding the real issue. Could you do a
sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
restart vdsmd, and retry?
Bala, would you send a patch fixing the ParseError issue (and adding a unit test that would have caught it on time)?
Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: class ElementTree has no attribute 'ParseError'
My guess has led us nowhere, since etree.ParseError is simply missing from python 2.6. It is to be seen only in python 2.7!
That's sad, but something *else* is problematic, since we got to this error-handling code.
Could you make another try and temporarily replace ParseError with Exception?
sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py
(this sed is relative to the original code).
More specific sed is
    sed -i s/etree.ParseError/SyntaxError/ /usr/share/vdsm/gluster/cli.py
Regards, Bala
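For what it's worth, a rough sketch of why SyntaxError works on both interpreters (assuming the cElementTree import that the tracebacks point at; this is only an illustration, not the pending patch): Python 2.7's ParseError is a subclass of SyntaxError, while Python 2.6's cElementTree raises plain SyntaxError, so a small fallback keeps one except clause valid on both versions.

    import xml.etree.cElementTree as etree

    # Python 2.7 has etree.ParseError (a SyntaxError subclass); Python 2.6 does
    # not, and its cElementTree raises plain SyntaxError on malformed XML.
    try:
        _ParseError = etree.ParseError
    except AttributeError:
        _ParseError = SyntaxError


    def _parse_cli_xml(xml_text):
        # Illustrative helper, not the actual cli.py code.
        try:
            return etree.fromstring(xml_text)
        except (_ParseError, AttributeError, ValueError):
            return None  # caller treats None as unparseable CLI output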

On Wed, Mar 06, 2013 at 02:34:10PM +0530, Balamurugan Arumugam wrote:
On 03/06/2013 01:20 PM, Dan Kenigsberg wrote:
On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:
On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg <danken@redhat.com> wrote:
Rob,
It seems that a bug in vdsm code is hiding the real issue. Could you do a
sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
restart vdsmd, and retry?
Bala, would you send a patch fixing the ParseError issue (and adding a unit test that would have caught it on time)?
Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: class ElementTree has no attribute 'ParseError'
My guess has led us nowhere, since etree.ParseError is simply missing from python 2.6. It is to be seen only in python 2.7!
That's sad, but something *else* is problematic, since we got to this error-handling code.
Could you make another try and temporarily replace ParseError with Exception?
sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py
(this sed is relative to the original code).
More specific sed is sed -i s/etree.ParseError/SyntaxError/ /usr/share/vdsm/gluster/cli.py
Bala, Aravinda, I have not seen a vdsm patch adding an explicit dependency on the correct gluster-cli version, only a change for this ParseError issue: http://gerrit.ovirt.org/#/c/12829/
Is there anything blocking this? I would really like to clear this hurdle quickly.
Dan.

----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Balamurugan Arumugam" <barumuga@redhat.com> Cc: "Rob Zwissler" <rob@zwissler.org>, users@ovirt.org, "Aravinda VK" <avishwan@redhat.com>, "Ayal Baron" <abaron@redhat.com> Sent: Sunday, March 10, 2013 1:04:50 PM Subject: Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3
On Wed, Mar 06, 2013 at 02:34:10PM +0530, Balamurugan Arumugam wrote:
On 03/06/2013 01:20 PM, Dan Kenigsberg wrote:
On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:
On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg <danken@redhat.com> wrote:
Rob,
It seems that a bug in vdsm code is hiding the real issue. Could you do a
sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
restart vdsmd, and retry?
Bala, would you send a patch fixing the ParseError issue (and adding a unit test that would have caught it on time)?
Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: class ElementTree has no attribute 'ParseError'
My guess has led us nowhere, since etree.ParseError is simply missing from python 2.6. It is to be seen only in python 2.7!
That's sad, but something *else* is problematic, since we got to this error-handling code.
Could you make another try and temporarily replace ParseError with Exception?
sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py
(this sed is relative to the original code).
More specific sed is sed -i s/etree.ParseError/SyntaxError/ /usr/share/vdsm/gluster/cli.py
Bala, Aravinda, I have not seen a vdsm patch adding an explicit dependency on the correct gluster-cli version, only a change for this ParseError issue: http://gerrit.ovirt.org/#/c/12829/
Is there anything blocking this? I would really like to clear this hurdle quickly.
Dan, we are working locally on setting the required glusterfs version for vdsm-gluster. We will submit a new patch soon.
Regards, Bala

On Mon, Mar 11, 2013 at 06:09:56AM -0400, Balamurugan Arumugam wrote:
----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Balamurugan Arumugam" <barumuga@redhat.com> Cc: "Rob Zwissler" <rob@zwissler.org>, users@ovirt.org, "Aravinda VK" <avishwan@redhat.com>, "Ayal Baron" <abaron@redhat.com> Sent: Sunday, March 10, 2013 1:04:50 PM Subject: Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3
On Wed, Mar 06, 2013 at 02:34:10PM +0530, Balamurugan Arumugam wrote:
On 03/06/2013 01:20 PM, Dan Kenigsberg wrote:
On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:
On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg <danken@redhat.com> wrote:
Rob,
It seems that a bug in vdsm code is hiding the real issue. Could you do a
sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
restart vdsmd, and retry?
Bala, would you send a patch fixing the ParseError issue (and adding a unit test that would have caught it on time)?
Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: class ElementTree has no attribute 'ParseError'
My guess has led us nowhere, since etree.ParseError is simply missing from python 2.6. It is to be seen only in python 2.7!
That's sad, but something *else* is problematic, since we got to this error-handling code.
Could you make another try and temporarily replace ParseError with Exception?
sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py
(this sed is relative to the original code).
More specific sed is sed -i s/etree.ParseError/SyntaxError/ /usr/share/vdsm/gluster/cli.py
Bala, Aravinda, I have not seen a vdsm patch adding an explicit dependency on the correct gluster-cli version, only a change for this ParseError issue: http://gerrit.ovirt.org/#/c/12829/
Is there anything blocking this? I would really like to clear this hurdle quickly.
Dan, we are working locally on setting the required glusterfs version for vdsm-gluster. We will submit a new patch soon.
I do not understand the complexity of this. Why not simply add Requires: glusterfs >= 3.4.0 ?
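For illustration, that is essentially a one-line versioned Requires in the vdsm-gluster subpackage of vdsm.spec; a hypothetical sketch only (subpackage layout and the exact version string may differ from whatever patch eventually lands):

    %package gluster
    Summary:        Gluster Plugin for VDSM
    Requires:       %{name} = %{version}-%{release}
    Requires:       glusterfs >= 3.4.0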

On Mon, Mar 11, 2013 at 12:34:51PM +0200, Dan Kenigsberg wrote:
On Mon, Mar 11, 2013 at 06:09:56AM -0400, Balamurugan Arumugam wrote:
> Rob,
> It seems that a bug in vdsm code is hiding the real issue. Could you do a
>     sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
> restart vdsmd, and retry?
> Bala, would you send a patch fixing the ParseError issue (and adding a
Ok, both issues have fixes which are in the ovirt-3.2 git branch. I believe this deserves a respin of vdsm, as having an undeclared requirement is impolite.
Federico, Mike, would you take care of that?
Dan.

----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Balamurugan Arumugam" <barumuga@redhat.com>, "Federico Simoncelli" <fsimonce@redhat.com>, "Mike Burns" <mburns@redhat.com> Cc: "Rob Zwissler" <rob@zwissler.org>, users@ovirt.org, arch@ovirt.org, "Aravinda VK" <avishwan@redhat.com>, "Ayal Baron" <abaron@redhat.com> Sent: Wednesday, March 13, 2013 9:03:39 PM Subject: Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3
On Mon, Mar 11, 2013 at 12:34:51PM +0200, Dan Kenigsberg wrote:
On Mon, Mar 11, 2013 at 06:09:56AM -0400, Balamurugan Arumugam wrote:
>> Rob,
>> It seems that a bug in vdsm code is hiding the real issue. Could you do a
>>     sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
>> restart vdsmd, and retry?
>> Bala, would you send a patch fixing the ParseError issue (and adding a
Ok, both issues have fixes which are in the ovirt-3.2 git branch. I believe this deserves a respin of vdsm, as having an undeclared requirement is impolite.
Federico, Mike, would you take care of that?
Since we're at it... I have the feeling that this might be important enough to be backported to 3.2 too: http://gerrit.ovirt.org/#/c/12178/ -- Federico

----- Original Message -----
----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Balamurugan Arumugam" <barumuga@redhat.com>, "Federico Simoncelli" <fsimonce@redhat.com>, "Mike Burns" <mburns@redhat.com> Cc: "Rob Zwissler" <rob@zwissler.org>, users@ovirt.org, arch@ovirt.org, "Aravinda VK" <avishwan@redhat.com>, "Ayal Baron" <abaron@redhat.com> Sent: Wednesday, March 13, 2013 9:03:39 PM Subject: Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3
On Mon, Mar 11, 2013 at 12:34:51PM +0200, Dan Kenigsberg wrote:
On Mon, Mar 11, 2013 at 06:09:56AM -0400, Balamurugan Arumugam wrote:
>>> Rob,
>>> It seems that a bug in vdsm code is hiding the real issue. Could you do a
>>>     sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
>>> restart vdsmd, and retry?
>>> Bala, would you send a patch fixing the ParseError issue (and adding a
Ok, both issues have fixes which are in the ovirt-3.2 git branch. I believe this deserves a respin of vdsm, as having an undeclared requirement is impolite.
Federico, Mike, would you take care of that?
Since we're at it... I have the feeling that this might be important enough to be backported to 3.2 too:
Without a doubt!
-- Federico

On Wed, Mar 13, 2013 at 04:10:56PM -0400, Federico Simoncelli wrote:
----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Balamurugan Arumugam" <barumuga@redhat.com>, "Federico Simoncelli" <fsimonce@redhat.com>, "Mike Burns" <mburns@redhat.com> Cc: "Rob Zwissler" <rob@zwissler.org>, users@ovirt.org, arch@ovirt.org, "Aravinda VK" <avishwan@redhat.com>, "Ayal Baron" <abaron@redhat.com> Sent: Wednesday, March 13, 2013 9:03:39 PM Subject: Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3
On Mon, Mar 11, 2013 at 12:34:51PM +0200, Dan Kenigsberg wrote:
On Mon, Mar 11, 2013 at 06:09:56AM -0400, Balamurugan Arumugam wrote:
>>> Rob,
>>> It seems that a bug in vdsm code is hiding the real issue. Could you do a
>>>     sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
>>> restart vdsmd, and retry?
>>> Bala, would you send a patch fixing the ParseError issue (and adding a
Ok, both issues have fixes which are in the ovirt-3.2 git branch. I believe this deserves a respin of vdsm, as having an undeclared requirement is impolite.
Federico, Mike, would you take care of that?
Since we're at it... I have the feeling that this might be important enough to be backported to 3.2 too:
Yes, it is quite horrible. Could you include that, too?

Hi,
Sent a patch to handle the ParseError attribute issue. vdsm still depends on a newer (3.4) version of glusterfs, but the Python ParseError issue is fixed. http://gerrit.ovirt.org/#/c/12752/
--
regards
Aravinda
On 03/06/2013 01:20 PM, Dan Kenigsberg wrote:
On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:
On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg <danken@redhat.com> wrote:
Rob,
It seems that a bug in vdsm code is hiding the real issue. Could you do a
sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
restart vdsmd, and retry?
Bala, would you send a patch fixing the ParseError issue (and adding a unit test that would have caught it on time)?
Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: class ElementTree has no attribute 'ParseError'
My guess has led us nowhere, since etree.ParseError is simply missing from python 2.6. It is to be seen only in python 2.7!
That's sad, but something *else* is problematic, since we got to this error-handling code.
Could you make another try and temporarily replace ParseError with Exception?
sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py
(this sed is relative to the original code).
Dan.

On 03/05/2013 01:16 PM, Dan Kenigsberg wrote:
On Mon, Mar 04, 2013 at 04:38:50PM -0800, Rob Zwissler wrote:
Running CentOS 6.3 with the following VDSM packages from dre's repo:
vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch vdsm-gluster-4.10.3-0.30.19.el6.noarch vdsm-python-4.10.3-0.30.19.el6.x86_64 vdsm-4.10.3-0.30.19.el6.x86_64 vdsm-cli-4.10.3-0.30.19.el6.noarch
And the following gluster packages from the gluster repo:
glusterfs-3.3.1-1.el6.x86_64 glusterfs-fuse-3.3.1-1.el6.x86_64 glusterfs-vim-3.2.7-1.el6.x86_64 glusterfs-server-3.3.1-1.el6.x86_64
I get the following errors in vdsm.log:
Thread-1483::DEBUG::2013-03-04 16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client [10.33.9.73]::call volumesList with () {} MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,429::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume info --xml' (cwd None) MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,480::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainProcess|Thread-1483::ERROR::2013-03-04 16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper) Error in wrapper Traceback (most recent call last): File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo except (etree.ParseError, AttributeError, ValueError): AttributeError: 'module' object has no attribute 'ParseError' Thread-1483::ERROR::2013-03-04 16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: 'module' object has no attribute 'ParseError'
Rob,
It seems that a bug in vdsm code is hiding the real issue. Could you do a
sed -i s/ParseError/ElementTree.ParseError/ /usr/share/vdsm/gluster/cli.py
restart vdsmd, and retry?
Bala, would you send a patch fixing the ParseError issue (and adding a unit test that would have caught it on time)?
Python 2.7 throws ParseError whereas Python 2.6 throws SyntaxError. Aravinda is sending a fix for it.
Regards, Bala
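A quick way to see that difference on a given host (a throwaway sketch, assuming the same cElementTree import the vdsm traceback shows) is to feed deliberately malformed XML to the parser and look at the exception class:

    import sys
    import xml.etree.cElementTree as etree

    try:
        etree.fromstring('<volInfo>')  # deliberately malformed (unclosed element)
    except SyntaxError as e:
        # Python 2.6 reports SyntaxError; Python 2.7 reports ParseError
        # (a SyntaxError subclass).
        print 'Python %s raised %s: %s' % (sys.version[:3], e.__class__.__name__, e)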

On 03/05/2013 06:08 AM, Rob Zwissler wrote:
Running CentOS 6.3 with the following VDSM packages from dre's repo:
vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch vdsm-gluster-4.10.3-0.30.19.el6.noarch vdsm-python-4.10.3-0.30.19.el6.x86_64 vdsm-4.10.3-0.30.19.el6.x86_64 vdsm-cli-4.10.3-0.30.19.el6.noarch
And the following gluster packages from the gluster repo:
glusterfs-3.3.1-1.el6.x86_64 glusterfs-fuse-3.3.1-1.el6.x86_64 glusterfs-vim-3.2.7-1.el6.x86_64 glusterfs-server-3.3.1-1.el6.x86_64
oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently in alpha and hence not available in stable repositories. http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/
This issue has been reported multiple times now, and I think it needs an update to the oVirt 3.2 release notes. Have added a note to this effect at: http://www.ovirt.org/OVirt_3.2_release_notes#Storage
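For anyone hitting this, a quick way to inspect what vdsm is actually being fed (a standalone sketch, not part of oVirt) is to run the same command that appears in the vdsm.log excerpt below and check whether, and how, the output parses; comparing a glusterfs 3.3.1 host with a 3.4.0 one shows the formatting difference vdsm trips over.

    import subprocess
    import xml.etree.cElementTree as etree

    # Same command vdsm's supervdsm runs (see the vdsm.log excerpt).
    p = subprocess.Popen(['/usr/sbin/gluster', '--mode=script', 'volume', 'info', '--xml'],
                         stdout=subprocess.PIPE)
    out = p.communicate()[0]

    try:
        root = etree.fromstring(out)
        print 'XML parses; root element is <%s>' % root.tag
    except SyntaxError as e:  # ParseError on Python 2.7 is a SyntaxError subclass
        print 'gluster --xml output did not parse: %s' % e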
I get the following errors in vdsm.log:
Thread-1483::DEBUG::2013-03-04 16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client [10.33.9.73]::call volumesList with () {} MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,429::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume info --xml' (cwd None) MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,480::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainProcess|Thread-1483::ERROR::2013-03-04 16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper) Error in wrapper Traceback (most recent call last): File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo except (etree.ParseError, AttributeError, ValueError): AttributeError: 'module' object has no attribute 'ParseError' Thread-1483::ERROR::2013-03-04 16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: 'module' object has no attribute 'ParseError'
Which corresponds to the following in the engine.log:
2013-03-04 16:34:46,231 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-86) START, GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId = b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 987aef3 2013-03-04 16:34:46,365 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-86) Failed in GlusterVolumesListVDS method 2013-03-04 16:34:46,366 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-86) Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception 2013-03-04 16:34:46,367 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (QuartzScheduler_Worker-86) Command GlusterVolumesListVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception 2013-03-04 16:34:46,369 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-86) FINISH, GlusterVolumesListVDSCommand, log id: 987aef3 2013-03-04 16:34:46,370 ERROR [org.ovirt.engine.core.bll.gluster.GlusterManager] (QuartzScheduler_Worker-86) Error while refreshing Gluster lightweight data of cluster qa-cluster1!: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168) [engine-bll.jar:] at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.runVdsCommand(GlusterManager.java:258) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:454) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:440) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshVolumeData(GlusterManager.java:411) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshClusterData(GlusterManager.java:191) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshLightWeightData(GlusterManager.java:170) [engine-bll.jar:] at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source) [:1.7.0_09-icedtea] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_09-icedtea] at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_09-icedtea] at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [engine-scheduler.jar:] at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz-2.1.2.jar:] at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz-2.1.2.jar:]
And, long story short, the gluster integration with oVirt does not work. As per Vijay Bellur's comments at http://list-archives.org/2012/12/27/users-ovirt-org/continuing-my-ovirt-3-2-... this is due to a difference in the XML formatting output by gluster vs. what is expected by VDSM, and is fixed in Gluster 3.4, which is currently in alpha pre-release.
So my question is, was oVirt v3.2 released with a dependency on a version of Gluster that is in alpha, or is there another workaround or fix for this?
Rob

On Wed, Mar 06, 2013 at 02:04:29PM +0530, Shireesh Anjal wrote:
On 03/05/2013 06:08 AM, Rob Zwissler wrote:
Running CentOS 6.3 with the following VDSM packages from dre's repo:
vdsm-xmlrpc-4.10.3-0.30.19.el6.noarch vdsm-gluster-4.10.3-0.30.19.el6.noarch vdsm-python-4.10.3-0.30.19.el6.x86_64 vdsm-4.10.3-0.30.19.el6.x86_64 vdsm-cli-4.10.3-0.30.19.el6.noarch
And the following gluster packages from the gluster repo:
glusterfs-3.3.1-1.el6.x86_64 glusterfs-fuse-3.3.1-1.el6.x86_64 glusterfs-vim-3.2.7-1.el6.x86_64 glusterfs-server-3.3.1-1.el6.x86_64
oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently in alpha and hence not available in stable repositories. http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/
Shireesh, this should be specified in vdsm.spec - please patch both master and ovirt-3.2 branches.
Beyond that, there's a problem of Python 2.6 missing ParseError.
This issue has been reported multiple times now, and I think it needs an update to the oVirt 3.2 release notes. Have added a note to this effect at: http://www.ovirt.org/OVirt_3.2_release_notes#Storage
I get the following errors in vdsm.log:
Thread-1483::DEBUG::2013-03-04 16:35:27,427::BindingXMLRPC::913::vds::(wrapper) client [10.33.9.73]::call volumesList with () {} MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,429::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume info --xml' (cwd None) MainProcess|Thread-1483::DEBUG::2013-03-04 16:35:27,480::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0 MainProcess|Thread-1483::ERROR::2013-03-04 16:35:27,480::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper) Error in wrapper Traceback (most recent call last): File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.py", line 430, in volumeInfo except (etree.ParseError, AttributeError, ValueError): AttributeError: 'module' object has no attribute 'ParseError' Thread-1483::ERROR::2013-03-04 16:35:27,481::BindingXMLRPC::932::vds::(wrapper) unexpected error Traceback (most recent call last): File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)} File "/usr/share/vdsm/supervdsm.py", line 81, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeInfo File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) AttributeError: 'module' object has no attribute 'ParseError'
Which corresponds to the following in the engine.log:
2013-03-04 16:34:46,231 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-86) START, GlusterVolumesListVDSCommand(HostName = xor-q-virt01, HostId = b342bf4d-d9e9-4055-b662-462dc2e6bf50), log id: 987aef3 2013-03-04 16:34:46,365 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-86) Failed in GlusterVolumesListVDS method 2013-03-04 16:34:46,366 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (QuartzScheduler_Worker-86) Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception 2013-03-04 16:34:46,367 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (QuartzScheduler_Worker-86) Command GlusterVolumesListVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception 2013-03-04 16:34:46,369 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-86) FINISH, GlusterVolumesListVDSCommand, log id: 987aef3 2013-03-04 16:34:46,370 ERROR [org.ovirt.engine.core.bll.gluster.GlusterManager] (QuartzScheduler_Worker-86) Error while refreshing Gluster lightweight data of cluster qa-cluster1!: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GlusterVolumesListVDS, error = Unexpected exception at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:168) [engine-bll.jar:] at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.runVdsCommand(GlusterManager.java:258) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:454) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.fetchVolumes(GlusterManager.java:440) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshVolumeData(GlusterManager.java:411) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshClusterData(GlusterManager.java:191) [engine-bll.jar:] at org.ovirt.engine.core.bll.gluster.GlusterManager.refreshLightWeightData(GlusterManager.java:170) [engine-bll.jar:] at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source) [:1.7.0_09-icedtea] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_09-icedtea] at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_09-icedtea] at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [engine-scheduler.jar:] at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz-2.1.2.jar:] at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz-2.1.2.jar:]
And, long story short, the gluster integration with oVirt does not work. As per Vijay Bellur's comments at http://list-archives.org/2012/12/27/users-ovirt-org/continuing-my-ovirt-3-2-... this is due to a difference in the XML formatting output by gluster vs. what is expected by VDSM, and is fixed in Gluster 3.4, which is currently in alpha pre-release.
So my question is, was oVirt v3.2 released with a dependency on a version of Gluster that is in alpha, or is there another workaround or fix for this?
Rob
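For anyone chasing the same traceback: the failing line in /usr/share/vdsm/gluster/cli.py (line 430) refers to etree.ParseError, but xml.etree only gained a ParseError class in Python 2.7, so on EL6's Python 2.6 (note the /usr/lib64/python2.6 path in the traceback) the except clause itself raises AttributeError before the underlying problem, gluster 3.3's --xml output not matching what VDSM expects, can even be reported. Below is a minimal sketch of a version-tolerant handler, assuming VDSM imports cElementTree as etree; it is an illustration, not the actual VDSM patch.

from xml.etree import cElementTree as etree

# xml.etree grew ParseError only in Python 2.7; on Python 2.6 a parse
# failure surfaces as SyntaxError or xml.parsers.expat.ExpatError instead,
# so fall back to those when ParseError is missing.
try:
    from xml.parsers.expat import ExpatError
except ImportError:
    ExpatError = SyntaxError

_PARSE_ERRORS = (getattr(etree, 'ParseError', SyntaxError), ExpatError,
                 AttributeError, ValueError)

def volume_info(xml_text):
    """Parse 'gluster volume info --xml' output into an Element tree."""
    try:
        return etree.fromstring(xml_text)
    except _PARSE_ERRORS:
        # Malformed XML, or output shaped differently from what this
        # parser expects (the gluster 3.3 vs. 3.4 format difference).
        return None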

On Wed, Mar 6, 2013 at 12:34 AM, Shireesh Anjal <sanjal@redhat.com> wrote:
oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently in alpha and hence not available in stable repositories. http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/
This issue has been reported multiple times now, and I think it needs an update to the oVirt 3.2 release notes. Have added a note to this effect at: http://www.ovirt.org/OVirt_3.2_release_notes#Storage
On one hand I like oVirt, I think you guys have done a good job with this, and it is free software so I don't want to complain.

But on the other hand, if you release a major/stable release (ie: oVirt 3.2), but it relies on a major/critical component (clustering filesystem server) that is in alpha, not even beta, but alpha prerelease form, you really should be up front and communicative about this. My searches turned up nothing except an offhand statement from a GlusterFS developer, nothing from the oVirt team until now.

It is not acceptable to expect people to run something as critical as a cluster filesystem server in alpha form on anything short of a development test setup. Are any other components of oVirt 3.2 dependent on non-stable general release packages?

What is the latest release of oVirt considered to be stable and considered safe for use on production systems?

Rob

On 03/06/2013 10:29 PM, Rob Zwissler wrote:
On Wed, Mar 6, 2013 at 12:34 AM, Shireesh Anjal <sanjal@redhat.com> wrote:
oVirt 3.2 needs a newer (3.4.0) version of glusterfs, which is currently in alpha and hence not available in stable repositories. http://bits.gluster.org/pub/gluster/glusterfs/3.4.0alpha/
This issue has been reported multiple times now, and I think it needs an update to the oVirt 3.2 release notes. Have added a note to this effect at: http://www.ovirt.org/OVirt_3.2_release_notes#Storage
On one hand I like oVirt, I think you guys have done a good job with this, and it is free software so I don't want to complain.
But on the other hand, if you release a major/stable release (ie: oVirt 3.2), but it relies on a major/critical component (clustering filesystem server) that is in alpha, not even beta, but alpha prerelease form, you really should be up front and communicative about this. My searches turned up nothing except an offhand statement from a GlusterFS developer, nothing from the oVirt team until now.
It is not acceptable to expect people to run something as critical as a cluster filesystem server in alpha form on anything short of a development test setup. Are any other components of oVirt 3.2 dependent on non-stable general release packages?
What is the latest release of oVirt considered to be stable and considered safe for use on production systems?
Hi Rob,

Your points are completely valid, and it's my fault (and not the oVirt release team's) for not mentioning this important information when providing details of the gluster-related features to be included in the oVirt 3.2 release notes. Genuine apologies for that.

Having said this, I believe the stable release of glusterfs 3.4.0 should be coming out very soon (some time this month, if I'm correct), which will provide some relief.

Regards,
Shireesh

Hi Rob,

On 03/06/2013 05:59 PM, Rob Zwissler wrote:
On one hand I like oVirt, I think you guys have done a good job with this, and it is free software so I don't want to complain.
But on the other hand, if you release a major/stable release (ie: oVirt 3.2), but it relies on a major/critical component (clustering filesystem server) that is in alpha, not even beta, but alpha prerelease form, you really should be up front and communicative about this. My searches turned up nothing except an offhand statement from a GlusterFS developer, nothing from the oVirt team until now.
It is not acceptable to expect people to run something as critical as a cluster filesystem server in alpha form on anything short of a development test setup. Are any other components of oVirt 3.2 dependent on non-stable general release packages?
What is the latest release of oVirt considered to be stable and considered safe for use on production systems?
It seems like there has been conflation of two things here - I may be wrong with what I say, but having checked, I do not believe so.

With oVirt 3.2/Gluster 3.4, you will be able to manage Gluster clusters using the oVirt engine. This is a completely new integration, which is still not in a production Gluster release.

However, it is still completely fine to use Gluster as storage for an oVirt 3.1 or 3.2 managed cluster. The ability to use Gluster easily as a storage back-end was added in oVirt 3.1, and as far as I know, there is no problem using glusterfs 3.3 as a POSIX storage filesystem for oVirt 3.2.

Vijay, Shireesh, Ayal, is my understanding correct? I am worried that we've been giving people the wrong impression here.

Thanks!
Dave.

--
Dave Neary - Community Action and Impact
Open Source and Standards, Red Hat - http://community.redhat.com
Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13
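To make the distinction above concrete: consuming a gluster volume as storage only requires the hypervisor to mount it as a POSIX-compliant filesystem over FUSE; none of the gluster management calls that trip over the 3.3 XML output are involved. Below is a rough client-side smoke test along those lines, using a hypothetical volume name and mount point; it is an illustration of the idea, not an official oVirt procedure.

import os
import subprocess
import tempfile

VOLUME = "gluster-server1:/vmstore"    # hypothetical host:volume
MOUNTPOINT = "/mnt/vmstore-test"       # hypothetical scratch mount point

def posix_smoke_test(volume, mountpoint):
    """Mount a gluster volume over FUSE and confirm it accepts plain POSIX
    file operations, before pointing an oVirt POSIX-compliant FS storage
    domain (VFS type 'glusterfs') at the same volume."""
    if not os.path.isdir(mountpoint):
        os.makedirs(mountpoint)
    # Same kind of mount a glusterfs-fuse client performs.
    subprocess.check_call(["mount", "-t", "glusterfs", volume, mountpoint])
    try:
        fd, path = tempfile.mkstemp(dir=mountpoint)
        os.write(fd, "posix storage smoke test\n")
        os.close(fd)
        os.unlink(path)
        return True
    finally:
        subprocess.check_call(["umount", mountpoint])

if __name__ == "__main__":
    print posix_smoke_test(VOLUME, MOUNTPOINT)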

On 03/07/2013 04:36 PM, Dave Neary wrote:
Hi Rob,
On 03/06/2013 05:59 PM, Rob Zwissler wrote:
On one hand I like oVirt, I think you guys have done a good job with this, and it is free software so I don't want to complain.
But on the other hand, if you release a major/stable release (ie: oVirt 3.2), but it relies on a major/critical component (clustering filesystem server) that is in alpha, not even beta, but alpha prerelease form, you really should be up front and communicative about this. My searches turned up nothing except an offhand statement from a GlusterFS developer, nothing from the oVirt team until now.
It is not acceptable to expect people to run something as critical as a cluster filesystem server in alpha form on anything short of a development test setup. Are any other components of oVirt 3.2 dependent on non-stable general release packages?
What is the latest release of oVirt considered to be stable and considered safe for use on production systems?
It seems like there has been conflation of two things here - I may be wrong with what I say, but having checked, I do not believe so.
With oVirt 3.2/Gluster 3.4, you will be able to manage Gluster clusters using the oVirt engine. This is a completely new integration, which is still not in a production Gluster release.
However, it is still completely fine to use Gluster as storage for an oVirt 3.1 or 3.2 managed cluster. The ability to use Gluster easily as a storage back-end was added in oVirt 3.1, and as far as I know, there is no problem using glusterfs 3.3 as a POSIX storage filesystem for oVirt 3.2.
Vijay, Shireesh, Ayal, is my understanding correct? I am worried that we've been giving people the wrong impression here.
Yes, your description is right.

Thanks,
Vijay
participants (9)

- Aravinda
- Ayal Baron
- Balamurugan Arumugam
- Dan Kenigsberg
- Dave Neary
- Federico Simoncelli
- Rob Zwissler
- Shireesh Anjal
- Vijay Bellur