[Users] Testday aftermath
Kanagaraj
kmayilsa at redhat.com
Fri Feb 1 10:07:27 UTC 2013
Hi Joop,
Looks like the problem is due to the glusterfs version you are
using: vdsm could not parse the output from gluster.
Can you update glusterfs to the build at
http://bits.gluster.org/pub/gluster/glusterfs/v3.4.0qa7/x86_64/ and
check it out?
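
To give some background on what is failing: vdsm drives the gluster CLI
with --xml and validates the reply before using it. The volCreate reply
quoted below, from glusterfs 3.3.1, carries no <opRet>/<opErrno>/<opErrstr>
elements, which is most likely what trips the check in
/usr/share/vdsm/gluster/cli.py. A minimal Python sketch of that kind of
validation (an illustration with an assumed helper name and assumed
required elements inferred from the logs, not vdsm's literal code):

import xml.etree.ElementTree as etree

# Hypothetical helper illustrating the validation step; the required
# elements are inferred from the "XML error" logs below, not copied
# from vdsm's sources.
def _checkCliOutput(xmlText):
    tree = etree.fromstring(xmlText)
    try:
        rc = int(tree.find('opRet').text)    # None.text -> AttributeError
        msg = tree.find('opErrstr').text
        errNo = int(tree.find('opErrno').text)
    except AttributeError:
        # glusterfs 3.3.1 omits these elements from its volCreate
        # reply, so the command "succeeds" (rc = 0) yet parsing fails.
        raise RuntimeError('XML error\nerror: %s' % xmlText)
    if rc != 0:
        raise RuntimeError('gluster failed: %s (errno %d)' % (msg, errNo))
    return tree

# A trimmed 3.3.1-style reply, like the one in the logs below:
old = ('<cliOutput><volCreate><count>2</count>'
       '<volname>GlusterData</volname></volCreate></cliOutput>')
_checkCliOutput(old)    # raises RuntimeError: XML error ...

The volume info path fails one step later: 3.3.1 does answer with
<opRet>0</opRet> (see the st02 log), but it encodes type, status and
transport as numeric codes, which presumably doesn't match what the
volumeInfo parser expects, hence the second traceback. You can see the
raw reply yourself by running the same command vdsm runs, e.g.
'/usr/sbin/gluster --mode=script volume info --xml'. The 3.4.0qa7 build
above emits the newer XML schema for both commands, which is why
upgrading should help.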
Thanks,
Kanagaraj
On 02/01/2013 03:23 PM, Joop wrote:
> Yesterday was testday but not much fun :-(
>
> I had a reasonably working setup, but for testday I decided to start from
> scratch, and that ended rather soon. Installing and configuring the engine
> was not a problem, but I want a setup where I have two gluster hosts and
> two hosts as vmhosts.
> I added a second cluster using the web interface, set it to gluster
> storage, and added two minimally installed Fedora 18 hosts on which I set
> up static networking and verified that it worked.
> Adding the two hosts went OK, but adding a volume gives the following
> error on the engine:
>
> 2013-02-01 09:32:39,084 INFO
> [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
> (ajp--127.0.0.1-8702-4) [5ea886d] Running command:
> CreateGlusterVolumeCommand internal: false. Entities affected : ID:
> 8720debc-a184-4b61-9fa8-0fdf4d339b9a Type: VdsGroups
> 2013-02-01 09:32:39,117 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
> (ajp--127.0.0.1-8702-4) [5ea886d] START,
> CreateGlusterVolumeVDSCommand(HostName = st02, HostId =
> e7b74172-2f95-43cb-83ff-11705ae24265), log id: 4270f4ef
> 2013-02-01 09:32:39,246 WARN
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc
> [mCode=4106, mMessage=XML error
> error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <cliOutput><volCreate><count>2</count><bricks>
> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
> </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
> ]
> 2013-02-01 09:32:39,248 WARN
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc
> [mCode=4106, mMessage=XML error
> error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <cliOutput><volCreate><count>2</count><bricks>
> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
> </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
> ]
> 2013-02-01 09:32:39,249 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--127.0.0.1-8702-4) [5ea886d] Failed in CreateGlusterVolumeVDS method
> 2013-02-01 09:32:39,250 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--127.0.0.1-8702-4) [5ea886d] Error code unexpected and error message
> VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
> error = XML error
> error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <cliOutput><volCreate><count>2</count><bricks>
> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
> </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
>
> 2013-02-01 09:32:39,254 ERROR
> [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--127.0.0.1-8702-4)
> [5ea886d] Command CreateGlusterVolumeVDS execution failed. Exception:
> VDSErrorException: VDSGenericException: VDSErrorException: Failed to
> CreateGlusterVolumeVDS, error = XML error
> error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <cliOutput><volCreate><count>2</count><bricks>
> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
> </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
>
> 2013-02-01 09:32:39,255 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
> (ajp--127.0.0.1-8702-4) [5ea886d] FINISH, CreateGlusterVolumeVDSCommand,
> log id: 4270f4ef
> 2013-02-01 09:32:39,256 ERROR
> [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
> (ajp--127.0.0.1-8702-4) [5ea886d] Command
> org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc Bll
> exception. With error message VdcBLLException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
> error = XML error
> error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <cliOutput><volCreate><count>2</count><bricks>
> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
> </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
>
> 2013-02-01 09:32:39,268 INFO
> [org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
> (ajp--127.0.0.1-8702-4) [5ea886d] Lock freed to object EngineLock
> [exclusiveLocks= key: 8720debc-a184-4b61-9fa8-0fdf4d339b9a value: GLUSTER
> , sharedLocks= ]
> 2013-02-01 09:32:40,902 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (QuartzScheduler_Worker-85) START, GlusterVolumesListVDSCommand(HostName =
> st02, HostId = e7b74172-2f95-43cb-83ff-11705ae24265), log id: 61cafb32
>
> And on st01, the following appears in vdsm.log:
> Thread-644::DEBUG::2013-02-01
> 10:24:06,378::BindingXMLRPC::913::vds::(wrapper) client
> [192.168.216.150]::call volumeCreate with ('GlusterData',
> ['st01.nieuwland.nl:/home/gluster-data',
> 'st02.nieuwland.nl:/home/gluster-data'], 2, 0, ['TCP']) {}
> MainProcess|Thread-644::DEBUG::2013-02-01
> 10:24:06,381::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster
> --mode=script volume create GlusterData replica 2 transport TCP
> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
> --xml' (cwd None)
> MainProcess|Thread-644::DEBUG::2013-02-01
> 10:24:06,639::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
> ''; <rc> = 0
> MainProcess|Thread-644::ERROR::2013-02-01
> 10:24:06,640::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
> Error in wrapper
> Traceback (most recent call last):
>   File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/cli.py", line 446, in volumeCreate
>     xmltree = _execGlusterXml(command)
>   File "/usr/share/vdsm/gluster/cli.py", line 89, in _execGlusterXml
>     raise ge.GlusterXmlErrorException(err=out)
> GlusterXmlErrorException: XML error
> error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <cliOutput><volCreate><count>2</count><bricks>
> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
> </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
>
> Thread-644::ERROR::2013-02-01
> 10:24:06,655::BindingXMLRPC::929::vds::(wrapper) vdsm exception occured
> Traceback (most recent call last):
>   File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
>     rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 63, in volumeCreate
>     transportList)
>   File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
>     return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
>     **kwargs)
>   File "<string>", line 2, in glusterVolumeCreate
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
>     raise convert_to_error(kind, result)
> GlusterXmlErrorException: XML error
> error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <cliOutput><volCreate><count>2</count><bricks>
> st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
> </bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
>
> And on st02 I get this error:
> MainProcess|Thread-93::DEBUG::2013-02-01
> 10:24:09,540::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster
> --mode=script peer status --xml' (cwd None)
> MainProcess|Thread-93::DEBUG::2013-02-01
> 10:24:09,622::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
> ''; <rc> = 0
> Thread-93::DEBUG::2013-02-01
> 10:24:09,624::BindingXMLRPC::920::vds::(wrapper) return hostsList with
> {'status': {'message': 'Done', 'code': 0}, 'hosts': [{'status':
> 'CONNECTED', 'hostname': '192.168.216.152', 'uuid':
> '15c7a739-6735-43f5-a1c0-3c7ff3469588'}, {'status': 'CONNECTED',
> 'hostname': '192.168.216.151', 'uuid':
> 'd53f4dcb-1116-4fbc-8d5b-e882175264aa'}]}
> Thread-94::DEBUG::2013-02-01
> 10:24:09,639::BindingXMLRPC::913::vds::(wrapper) client
> [192.168.216.150]::call volumesList with () {}
> MainProcess|Thread-94::DEBUG::2013-02-01
> 10:24:09,641::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster
> --mode=script volume info --xml' (cwd None)
> MainProcess|Thread-94::DEBUG::2013-02-01
> 10:24:09,724::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
> ''; <rc> = 0
> MainProcess|Thread-94::ERROR::2013-02-01
> 10:24:09,725::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
> Error in wrapper
> Traceback (most recent call last):
>   File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
>     return func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/cli.py", line 431, in volumeInfo
>     raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
> GlusterXmlErrorException: XML error
> error: <cliOutput><opRet>0</opRet><opErrno>0</opErrno><opErrstr
> /><volInfo><volumes><volume><name>GlusterData</name><id>e5cf9724-ecb0-41a3-abdf-0ea891e49f92</id><type>2</type><status>0</status><brickCount>2</brickCount><distCount>2</distCount><stripeCount>1</stripeCount><replicaCount>2</replicaCount><transport>0</transport><bricks><brick>st01.nieuwland.nl:/home/gluster-data</brick><brick>st02.nieuwland.nl:/home/gluster-data</brick></bricks><optCount>0</optCount><options
> /></volume><count>1</count></volumes></volInfo></cliOutput>
> Thread-94::ERROR::2013-02-01
> 10:24:09,741::BindingXMLRPC::929::vds::(wrapper) vdsm exception occured
> Traceback (most recent call last):
>   File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
>     rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
>     return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
>   File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
>     return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
>     **kwargs)
>   File "<string>", line 2, in glusterVolumeInfo
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
>     raise convert_to_error(kind, result)
> GlusterXmlErrorException: XML error
>
> I have included a lot more logs in the attached zip.
> Running 'gluster volume info' shows the correct data on both servers.
> The directory is created under /home; SELinux was enforcing, but
> permissive gives the same error.
>
> [root@st01 ~]# rpm -aq | grep vdsm
> vdsm-gluster-4.10.3-6.fc18.noarch
> vdsm-xmlrpc-4.10.3-6.fc18.noarch
> vdsm-cli-4.10.3-6.fc18.noarch
> vdsm-python-4.10.3-6.fc18.x86_64
> vdsm-4.10.3-6.fc18.x86_64
> [root@st01 ~]# rpm -aq | grep gluster
> glusterfs-fuse-3.3.1-8.fc18.x86_64
> vdsm-gluster-4.10.3-6.fc18.noarch
> glusterfs-3.3.1-8.fc18.x86_64
> glusterfs-server-3.3.1-8.fc18.x86_64
>
> libvirt-0.10.2.2-3.fc18.x86_64 + deps same version
>
> qemu-kvm-1.2.2-2.fc18.x86_64 + deps same version
>
> Anything else I can provide or do to fix this problem?
>
> Joop
> --
> irc: jvandewege
> @fosdem ;-)
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users