
Yesterday was test day but not much fun :-(

I had a reasonably working setup, but for test day I decided to start from scratch, and that ended rather soon. Installing and configuring the engine was not a problem, but I want a setup with two gluster hosts and two hosts as VM hosts. I added a second cluster through the web interface, set it to gluster storage, and added two minimally installed Fedora 18 hosts on which I set up static networking and verified that it worked. Adding the two hosts went OK, but adding a volume gives the following error on the engine:

2013-02-01 09:32:39,084 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-4) [5ea886d] Running command: CreateGlusterVolumeCommand internal: false. Entities affected : ID: 8720debc-a184-4b61-9fa8-0fdf4d339b9a Type: VdsGroups
2013-02-01 09:32:39,117 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-4) [5ea886d] START, CreateGlusterVolumeVDSCommand(HostName = st02, HostId = e7b74172-2f95-43cb-83ff-11705ae24265), log id: 4270f4ef
2013-02-01 09:32:39,246 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc [mCode=4106, mMessage=XML error error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput><volCreate><count>2</count><bricks> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput> ]
2013-02-01 09:32:39,248 WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc [mCode=4106, mMessage=XML error error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput><volCreate><count>2</count><bricks> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput> ]
2013-02-01 09:32:39,249 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--127.0.0.1-8702-4) [5ea886d] Failed in CreateGlusterVolumeVDS method
2013-02-01 09:32:39,250 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--127.0.0.1-8702-4) [5ea886d] Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS, error = XML error error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput><volCreate><count>2</count><bricks> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
2013-02-01 09:32:39,254 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--127.0.0.1-8702-4) [5ea886d] Command CreateGlusterVolumeVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS, error = XML error error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput><volCreate><count>2</count><bricks> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
2013-02-01 09:32:39,255 INFO [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--127.0.0.1-8702-4) [5ea886d] FINISH, CreateGlusterVolumeVDSCommand, log id: 4270f4ef
2013-02-01 09:32:39,256 ERROR [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-4) [5ea886d] Command org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS, error = XML error error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput><volCreate><count>2</count><bricks> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
2013-02-01 09:32:39,268 INFO [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] (ajp--127.0.0.1-8702-4) [5ea886d] Lock freed to object EngineLock [exclusiveLocks= key: 8720debc-a184-4b61-9fa8-0fdf4d339b9a value: GLUSTER , sharedLocks= ]
2013-02-01 09:32:40,902 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (QuartzScheduler_Worker-85) START, GlusterVolumesListVDSCommand(HostName = st02, HostId = e7b74172-2f95-43cb-83ff-11705ae24265), log id: 61cafb32

And on ST01 the following in vdsm.log:

Thread-644::DEBUG::2013-02-01 10:24:06,378::BindingXMLRPC::913::vds::(wrapper) client [192.168.216.150]::call volumeCreate with ('GlusterData', ['st01.nieuwland.nl:/home/gluster-data', 'st02.nieuwland.nl:/home/gluster-data'], 2, 0, ['TCP']) {}
MainProcess|Thread-644::DEBUG::2013-02-01 10:24:06,381::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume create GlusterData replica 2 transport TCP st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data --xml' (cwd None)
MainProcess|Thread-644::DEBUG::2013-02-01 10:24:06,639::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-644::ERROR::2013-02-01 10:24:06,640::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 446, in volumeCreate
    xmltree = _execGlusterXml(command)
  File "/usr/share/vdsm/gluster/cli.py", line 89, in _execGlusterXml
    raise ge.GlusterXmlErrorException(err=out)
GlusterXmlErrorException: XML error error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput><volCreate><count>2</count><bricks> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
Thread-644::ERROR::2013-02-01 10:24:06,655::BindingXMLRPC::929::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 63, in volumeCreate
    transportList)
  File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeCreate
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
GlusterXmlErrorException: XML error error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <cliOutput><volCreate><count>2</count><bricks> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>

And on ST02 I get this error:

MainProcess|Thread-93::DEBUG::2013-02-01 10:24:09,540::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script peer status --xml' (cwd None)
MainProcess|Thread-93::DEBUG::2013-02-01 10:24:09,622::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-93::DEBUG::2013-02-01 10:24:09,624::BindingXMLRPC::920::vds::(wrapper) return hostsList with {'status': {'message': 'Done', 'code': 0}, 'hosts': [{'status': 'CONNECTED', 'hostname': '192.168.216.152', 'uuid': '15c7a739-6735-43f5-a1c0-3c7ff3469588'}, {'status': 'CONNECTED', 'hostname': '192.168.216.151', 'uuid': 'd53f4dcb-1116-4fbc-8d5b-e882175264aa'}]}
Thread-94::DEBUG::2013-02-01 10:24:09,639::BindingXMLRPC::913::vds::(wrapper) client [192.168.216.150]::call volumesList with () {}
MainProcess|Thread-94::DEBUG::2013-02-01 10:24:09,641::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
MainProcess|Thread-94::DEBUG::2013-02-01 10:24:09,724::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
MainProcess|Thread-94::ERROR::2013-02-01 10:24:09,725::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 431, in volumeInfo
    raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
GlusterXmlErrorException: XML error error: <cliOutput><opRet>0</opRet><opErrno>0</opErrno><opErrstr /><volInfo><volumes><volume><name>GlusterData</name><id>e5cf9724-ecb0-41a3-abdf-0ea891e49f92</id><type>2</type><status>0</status><brickCount>2</brickCount><distCount>2</distCount><stripeCount>1</stripeCount><replicaCount>2</replicaCount><transport>0</transport><bricks><brick>st01.nieuwland.nl:/home/gluster-data</brick><brick>st02.nieuwland.nl:/home/gluster-data</brick></bricks><optCount>0</optCount><options /></volume><count>1</count></volumes></volInfo></cliOutput>
Thread-94::ERROR::2013-02-01 10:24:09,741::BindingXMLRPC::929::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
    return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
  File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeInfo
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
    raise convert_to_error(kind, result)
GlusterXmlErrorException: XML error

I have included a lot more logs in the attached zip. Running gluster volume info shows the correct data on both servers. The directory is created under /home; SELinux was enforcing, but permissive gives the same error.

[root@st01 ~]# rpm -aq | grep vdsm
vdsm-gluster-4.10.3-6.fc18.noarch
vdsm-xmlrpc-4.10.3-6.fc18.noarch
vdsm-cli-4.10.3-6.fc18.noarch
vdsm-python-4.10.3-6.fc18.x86_64
vdsm-4.10.3-6.fc18.x86_64
[root@st01 ~]# rpm -aq | grep gluster
glusterfs-fuse-3.3.1-8.fc18.x86_64
vdsm-gluster-4.10.3-6.fc18.noarch
glusterfs-3.3.1-8.fc18.x86_64
glusterfs-server-3.3.1-8.fc18.x86_64

libvirt-0.10.2.2-3.fc18.x86_64 + deps same version
qemu-kvm-1.2.2-2.fc18.x86_64 + deps same version

Anything else I can provide to help fix this problem?

Joop

--
irc: jvandewege @fosdem ;-)
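A note for anyone reproducing this outside of vdsm: the CLI calls vdsm makes are visible in the vdsm.log entries above and can be run by hand to inspect the XML that vdsm fails to parse. Both return rc=0 here, so the failure is in vdsm's parsing, not in gluster itself:

/usr/sbin/gluster --mode=script volume info --xml
/usr/sbin/gluster --mode=script peer status --xml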

Hi Joop,

Looks like the problem is because of the glusterfs version you are using: vdsm could not parse the output from gluster.

Can you update glusterfs to http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and check it out?

Thanks,
Kanagaraj
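For reference, a yum repo file pointing at that directory should make the QA builds visible to yum. This is a sketch only: the repo id and file name are made up, and the exact directory layout on bits.gluster.org may differ:

# /etc/yum.repos.d/gluster-qa.repo (hypothetical file name and repo id)
[gluster-qa]
name=GlusterFS 3.4.0 QA builds
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/
gpgcheck=0
enabled=1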

On 1-2-2013 11:07, Kanagaraj wrote:
> Hi Joop,
>
> Looks like the problem is because of the glusterfs version you are using: vdsm could not parse the output from gluster.
>
> Can you update glusterfs to http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and check it out?

How??
I tried adding this repo, but yum says that there are no updates available; at least it did yesterday.

[gluster-nieuw]
name=GlusterFS
baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster
enabled=1

My yum foo isn't that good, so I don't know how to force it. Besides, I tried yum localinstall, but it will revert when yum update is run. It looks like yum thinks that 3.3.1 is newer than 3.4.

Joop
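A quick way to double-check which versions yum actually sees, and from which repo it would pick them (using the repo id from the config above):

yum --showduplicates list glusterfs
yum --disablerepo="*" --enablerepo="gluster-nieuw" list available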

On 02/01/2013 05:13 PM, noc wrote:
> My yum foo isn't that good, so I don't know how to force it. Besides, I tried yum localinstall, but it will revert when yum update is run. It looks like yum thinks that 3.3.1 is newer than 3.4.
The problem is that the released glusterfs rpms in the Fedora repository are of the form 3.3.1-8, whereas the ones from the above QA release are v3.4.0qa7. I think that because of the "v" before 3.4, these are considered a lower version, and by default yum picks up the rpms from the Fedora repository.

To work around this issue, you could try:

yum --disablerepo="*" --enablerepo="gluster-nieuw" install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server
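If the install goes through, it is worth confirming the running version and restarting the daemons so vdsm talks to the updated gluster (a sketch; the service names assume the standard Fedora 18 systemd units):

rpm -q glusterfs glusterfs-server vdsm-gluster
systemctl restart glusterd.service vdsmd.service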

Shireesh Anjal wrote:
> To work around this issue, you could try:
>
> yum --disablerepo="*" --enablerepo="gluster-nieuw" install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server
[root@st01 ~]# yum --disablerepo="*" --enablerepo="gluster-nieuw" install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server
Loaded plugins: langpacks, presto, refresh-packagekit
Package matching glusterfs-v3.4.0qa7-1.el6.x86_64 already installed. Checking for update.
Package matching glusterfs-fuse-v3.4.0qa7-1.el6.x86_64 already installed. Checking for update.
Package matching glusterfs-server-v3.4.0qa7-1.el6.x86_64 already installed. Checking for update.
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-geo-replication.x86_64 0:v3.4.0qa7-1.el6 will be installed
--> Processing Dependency: glusterfs = v3.4.0qa7-1.el6 for package: glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64
--> Finished Dependency Resolution
Error: Package: glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64 (gluster-nieuw)
           Requires: glusterfs = v3.4.0qa7-1.el6
           Installed: glusterfs-3.3.1-8.fc18.x86_64 (@updates)
               glusterfs = 3.3.1-8.fc18
           Available: glusterfs-v3.4.0qa7-1.el6.x86_64 (gluster-nieuw)
               glusterfs = v3.4.0qa7-1.el6
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

[root@st01 ~]# yum --disablerepo="*" --enablerepo="gluster-nieuw" install glusterfs glusterfs-fuse glusterfs-geo-replication glusterfs-server --skip-broken
Loaded plugins: langpacks, presto, refresh-packagekit
Package matching glusterfs-v3.4.0qa7-1.el6.x86_64 already installed. Checking for update.
Package matching glusterfs-fuse-v3.4.0qa7-1.el6.x86_64 already installed. Checking for update.
Package matching glusterfs-server-v3.4.0qa7-1.el6.x86_64 already installed. Checking for update.
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-geo-replication.x86_64 0:v3.4.0qa7-1.el6 will be installed
--> Processing Dependency: glusterfs = v3.4.0qa7-1.el6 for package: glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64
gluster-nieuw/filelists | 7.2 kB 00:00:00

Packages skipped because of dependency problems:
    glusterfs-geo-replication-v3.4.0qa7-1.el6.x86_64 from gluster-nieuw

Last post, probably, until Sunday evening / Monday morning; off to FOSDEM ;-)

Joop
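Since yum refuses to treat v3.4.0qa7 as an upgrade, another option would be to download the rpms from the QA directory and hand them to rpm directly. This is an untested sketch; --oldpackage is needed because rpm's version comparison also sorts v3.4.0qa7 below 3.3.1:

# fetch the QA rpms into an empty directory first (e.g. with wget), then:
cd /tmp/gluster-qa
rpm -Uvh --oldpackage glusterfs*.rpm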

Shireesh Anjal wrote:
> The problem is that the released glusterfs rpms in the Fedora repository are of the form 3.3.1-8, whereas the ones from the above QA release are v3.4.0qa7. I think that because of the "v" before 3.4, these are considered a lower version, and by default yum picks up the rpms from the Fedora repository.
The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped; I just had a look, and that folder and repo don't have the 'v' in front of the version.

Is there someone on this list who has the 'powers' to change that?

Joop

On 02/01/2013 06:47 PM, Joop wrote:
> The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped; I just had a look, and that folder and repo don't have the 'v' in front of the version.
That's correct.

[kanagaraj@localhost ~]$ rpmdev-vercmp glusterfs-3.3.1-8.fc18.x86_64 glusterfs-v3.4.0qa7-1.el6.x86_64
glusterfs-3.3.1-8.fc18.x86_64 > glusterfs-v3.4.0qa7-1.el6.x86_64
Is there someone on this list who has the 'powers' to change that?
[Adding Vijay]
Joop
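The same ordering can be reproduced on the bare version strings, which shows that it is the leading 'v', not the qa suffix, that sinks the comparison: rpm compares versions segment by segment, and a numeric segment always ranks above an alphabetic one. A sketch using rpmdev-vercmp from the rpmdevtools package; the exact output formatting may vary:

# 'v' is an alphabetic segment, so rpm ranks v3.4.0qa7 below 3.3.1
rpmdev-vercmp 3.3.1 v3.4.0qa7

# Without the prefix, the comparison goes the expected way
rpmdev-vercmp 3.3.1 3.4.0qa7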

On 02/01/2013 07:38 PM, Kanagaraj wrote:
On 02/01/2013 06:47 PM, Joop wrote:
Shireesh Anjal wrote:
On 02/01/2013 05:13 PM, noc wrote:
On 1-2-2013 11:07, Kanagaraj wrote:
Hi Joop,
Looks like the problem is because of the glusterfs version you are using. vdsm could not parse the output from gluster.
Can you update the glusterfs to http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and check it out? How??
I tried adding this repo but but yum says that there are no updates available, atleast yesterday it did.
[gluster-nieuw] name=GlusterFS baseurl=http://bits.gluster.org/pub/gluster/glusterfs/stage/ gpgcheck=0 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Gluster enabled=1
My yumfoo isn't that good so I don't know how to force it. Besides I tried through yum localinstall but it will revert when yum update is run. It looks like it thinks that 3.3.1 is newer than 3.4
The problem is that, released glusterfs rpms in fedora repository are of the form 3.3.1-8, whereas the ones from above QA release are v3.4.0qa7. I think because of the "v" before 3.4, these are considered as lower version, and by default yum picks up the rpms from fedora repository.
The 'v' is 99.9% the culprit. I had 3.4.0qa6 before I wiped and just had a look that folder and repo doesn't have the 'v' in front of it.
Thats correct.
[kanagaraj@localhost ~]$ rpmdev-vercmp glusterfs-3.3.1-8.fc18.x86_64 glusterfs-v3.4.0qa7-1.el6.x86_64 glusterfs-3.3.1-8.fc18.x86_64 > glusterfs-v3.4.0qa7-1.el6.x86_64
Is there someone on this list who has the 'powers' to change that?
[Adding Vijay]
3.4.0qa8 is available now. Can you please check with that?

Thanks,
Vijay
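For anyone who needs the QA packages to win before a renamed build lands, two standard mechanisms exist; both are sketches assuming default Fedora repo file locations rather than commands taken from this thread:

# Option 1: stop the Fedora repos from offering glusterfs at all, so the
# QA repo is the only candidate source. Add this line to the [fedora] and
# [updates] stanzas in /etc/yum.repos.d/:
exclude=glusterfs*

# Option 2: install downloaded QA rpms directly, telling rpm to accept what
# it considers an older version (a later 'yum update' will still revert it
# unless the exclude above is in place):
rpm -Uvh --oldpackage glusterfs-*.rpm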

Vijay Bellur wrote:
[...]
3.4.0qa8 is available now. Can you please check with that?
Thanks,
Vijay

I can report back that 3.4.0qa8 does work. oVirt also picked up on the volumes that I had created but which didn't show up in the interface; I could start them and will test whether they are fully usable.

Thanks for the quick response.

Joop
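A quick way to confirm that the updated CLI now emits XML that vdsm can parse is to run the same kind of --xml query vdsm issues, directly on one of the storage hosts. A minimal sketch, using the volume name from earlier in the thread:

# Confirm the installed version actually changed
gluster --version

# The CLI should now return well-formed <cliOutput> XML for vdsm to parse
gluster volume info GlusterData --xml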

On 3-2-2013 19:59, Joop wrote:
I can report back that 3.4.0qa8 does work. oVirt also picked up on the volumes that I had created but which didn't show up in the interface; I could start them and will test whether they are fully usable.
I added my export domain and re-imported the VMs, and all is well, even though the original creation of the gluster volumes failed because of gluster version 3.3.1. So let's TEST ;-)

Joop
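Before trusting the recovered volumes with real workloads, they can be sanity-checked from any of the storage hosts; a sketch, again using the volume name from earlier in the thread:

# Shows volume type (Replicate), brick list and whether the volume is started
gluster volume info GlusterData

# Confirms that both bricks and their NFS/self-heal daemons are online
gluster volume status GlusterData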
participants (5)
- Joop
- Kanagaraj
- noc
- Shireesh Anjal
- Vijay Bellur