[ovirt-users] Regression in Gluster volume code?

Sahina Bose sabose at redhat.com
Wed Dec 16 16:06:21 UTC 2015



On 12/16/2015 09:20 PM, Joop wrote:
> On 15-12-2015 12:31, Sahina Bose wrote:
>>
>> On 12/15/2015 03:26 PM, Joop wrote:
>>> On 14-12-2015 12:00, Joop wrote:
>>>> I have reinstalled my test environment and have come across an old error;
>>>> see BZ 988299, Bad volume specification {u'index': 0,.
>>>>
>>>> At the end of that BZ there is mention of a problem with '_' in the
>>>> name of the volume, and a patch is referenced, but the code has since
>>>> been changed quite a bit and I can't tell whether that still applies.
>>>> It looks like it doesn't, because I have a gluster volume with the name
>>>> gv_ovirt_data01
>>>> and it looks like it gets translated to gv__ovirt__data01 and then I
>>>> can't start any VMs :-(
>>>> Weird thing: I CAN import VMs from the export domain to this gluster
>>>> domain.
>>>>
>>> I have just done the following on 2 servers which also hold the volumes
>>> with '_' in their names:
>>>
>>> mkdir -p /gluster/br-ovirt-data02
>>>
>>> ssm -f create -p vg_`hostname -s` --size 10G --name lv-ovirt-data02
>>> --fstype xfs /gluster/br-ovirt-data02
>>>
>>> echo /dev/mapper/vg_`hostname -s`-lv-ovirt-data02
>>> /gluster/br-ovirt-data02        xfs     defaults        1 2 >>/etc/fstab
>>>
>>> semanage fcontext -a -t glusterd_brick_t /gluster/br-ovirt-data02
>>>
>>> restorecon -Rv /gluster/br-ovirt-data02
>>>
>>> mkdir /gluster/br-ovirt-data02/gl-ovirt-data02
>>>
>>> chown -R 36:36 /gluster/
>>>
>>> Added a replicated volume on top of the above, started it, added a
>>> Storage Domain using that volume, moved a disk to it, and started the
>>> VM; it works! :-)
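
For reference, the "replicated volume on top of the above" step would look
roughly like the sketch below; the second hostname and the replica count
are assumptions for illustration, not details from the post:

  # create a replica-2 volume on the new bricks (host names are placeholders)
  gluster volume create gl-ovirt-data02 replica 2 \
      st01:/gluster/br-ovirt-data02/gl-ovirt-data02 \
      st02:/gluster/br-ovirt-data02/gl-ovirt-data02
  gluster volume start gl-ovirt-data02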
>>>
>>> Should I open a BZ or does someone know of an existing one?
>> Could you open one?
>>
> I tried, but it looks like the email from BZ isn't arriving at my mailbox :-(
> I had to renew my password and haven't gotten the link yet. Creating a
> new account with a different email domain didn't work either, so I'm
> gonna summarize what I did today.
>
> It looks like something goes wrong in
> vdsm/storage/glusterVolume.py. The volname in getVmVolumeInfo ends up as
> a volume name with double underscores in it; then
> svdsmProxy.glusterVolumeInfo is called, which in the end has supervdsmd
> run a CLI command, and that returns an empty XML document because there
> is no such volume with double underscores. Running the command that is
> logged in supervdsm.log confirms this too. Reducing the volname to
> single underscores returns a correct XML object.
> My guess is that rpath =
> sdCache.produce(self.sdUUID).getRemotePath() should return the real
> name that was used to connect to the storage. In my case:
> Real path entered during setup: st01:gv_ovirt_data01
> What's used: st01:gv__ovirt__data01
> Just doing a 's/__/_/' is a bit shortsighted, but it would work for me
> since I don't use '/' when entering the storage connection above. (My
> perception is that if you want the NFS export of gluster you use the /,
> and if you want the glusterfs protocol you don't. There is a line of
> code in vdsm which replaces one underscore with two AND replaces a /
> with an underscore; going back is of course impossible if you don't
> store the original.)
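
To illustrate the escaping described above (a sketch of the behaviour as
described in the post, not the exact vdsm code): every '_' is doubled and
every '/' becomes '_', so a plain 's/__/_/' reverse only works when the
original connection path contained no '/':

  # forward transform: double '_' , then map '/' to '_'
  $ echo 'st01:gv_ovirt_data01' | sed -e 's/_/__/g' -e 's,/,_,g'
  st01:gv__ovirt__data01

  # naive reverse recovers the name only because no '/' was used
  $ echo 'st01:gv__ovirt__data01' | sed -e 's/__/_/g'
  st01:gv_ovirt_data01

  # a path that did contain '/' cannot be recovered this way
  $ echo 'st01:/gv_ovirt_data01' | sed -e 's/_/__/g' -e 's,/,_,g'
  st01:_gv__ovirt__data01
  $ echo 'st01:_gv__ovirt__data01' | sed -e 's/__/_/g'
  st01:_gv_ovirt_data01
  # (the leading '_' should have been '/', but that information is gone)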
>
> I hope one of the devs is willing to create the BZ with this info and,
> I hope, has a solution to this problem.

https://bugzilla.redhat.com/show_bug.cgi?id=1292173

>
> Regards,
>
> Joop
>



