[Users] oVirt nightly-11-11 and gluster

Joop jvdwege at xs4all.nl
Wed Nov 14 13:05:06 UTC 2012


Joop wrote:
> Joop wrote:
>> Balamurugan Arumugam wrote:
>>> Hi,
>>>
>>> ----- Original Message -----
>>>   
>>>> From: "Joop" <jvdwege at xs4all.nl>
>>>> To: users at ovirt.org
>>>> Sent: Monday, November 12, 2012 5:29:32 PM
>>>> Subject: [Users] oVirt nightly-11-11 and gluster
>>>>
>>>> Came across the following error in engine.log after creating a new
>>>> gluster
>>>> volume.
>>>> 2012-11-12 12:39:35,264 INFO
>>>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>>>> (QuartzScheduler_Worker-94) START,
>>>> GlusterVolumesListVDSCommand(HostName =
>>>> st01, HostId = 402f987e-2804-11e2-aa60-78e7d1f4ada5), log id:
>>>> 33d22dbf
>>>> 2012-11-12 12:39:35,376 ERROR
>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
>>>> (QuartzScheduler_Worker-94) Failed in GlusterVolumesListVDS method,
>>>> for
>>>> vds: st01; host: st01.nieuwland.nl
>>>> 2012-11-12 12:39:35,377 ERROR
>>>> [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
>>>> (QuartzScheduler_Worker-94) Command GlusterVolumesListVDS execution
>>>> failed. Exception: NumberFormatException: For input string: "1 x 2 =
>>>> 2"
>>>>     
>>>
>>>
>>> For REPLICATE and STRIPE volume types we expect an integer value for the brick count, but I am seeing '1 x 2 = 2'. This could be a recent change in the 'gluster volume info' output. Can you send me the details below?
>>> 1. output of 'rpm -qa | grep glusterfs'
>>> 2. output of 'gluster volume info' command
>>>
>>> However, we are in the process of moving to XML output: http://gerrit.ovirt.org/#/c/7951/
>>>
>>>
>>>   
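The NumberFormatException quoted above comes from parsing the 'Number of Bricks' field, which newer glusterfs releases print as a layout expression like "1 x 2 = 2" instead of a bare integer. A minimal sketch of a tolerant parser, purely illustrative (the function name is hypothetical and this is not the actual vdsm code):

```python
def parse_brick_count(value):
    """Return the total brick count from a 'Number of Bricks' field.

    Accepts both the old plain form ("2") and the newer layout form
    ("1 x 2 = 2"), where the number after '=' is the total.
    """
    value = value.strip()
    if "=" in value:
        # Newer format: "<distribute> x <replica> = <total>"; take the total.
        value = value.split("=")[-1]
    return int(value)
```

For example, parse_brick_count("1 x 2 = 2") and parse_brick_count("2") both return 2. Parsing the XML output instead (as the gerrit change above does) avoids this kind of text-format breakage entirely.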
>> Found a post by Brian Vetter, 
>> http://www.mail-archive.com/users@ovirt.org/msg04135.html; I also ran 
>> 'setsebool sanlock_use_nfs on' and now I can at least access a 
>> distributed volume.
>> Next try will be a replicated volume.
> To follow up on my post: using a replicated volume works too ;-)
What didn't work was starting a VM, until I ran 'setenforce 0'.
Here are my installed vdsm packages; they are pushed by the engine.
[root at st02 vdsm]# rpm -aq | grep vdsm
vdsm-python-4.10.0-10.fc17.x86_64
vdsm-gluster-4.10.0-10.fc17.noarch
vdsm-cli-4.10.0-10.fc17.noarch
vdsm-xmlrpc-4.10.0-10.fc17.noarch
vdsm-4.10.0-10.fc17.x86_64

Joop


