[ovirt-users] Persisting glusterfs configs on an oVirt node
Ryan Barry
rbarry at redhat.com
Wed May 28 15:20:20 UTC 2014
On 05/28/2014 10:23 AM, Fabian Deutsch wrote:
> On Wednesday, 28.05.2014 at 14:22 +0000, Simon Barrett wrote:
>> I just wasn't sure if I was missing something in the configuration to enable this.
>>
>> I'll stick with the workarounds I have for now and see how it goes.
>>
>> Thanks again.
>
> You are welcome! :)
>
>> Simon
>>
>> -----Original Message-----
>> From: Fabian Deutsch [mailto:fdeutsch at redhat.com]
>> Sent: 28 May 2014 15:20
>> To: Simon Barrett
>> Cc: Ryan Barry; Doron Fediuck; users at ovirt.org
>> Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
>>
>> On Wednesday, 28.05.2014 at 14:14 +0000, Simon Barrett wrote:
>>> I did a "persist /var/lib/glusterd" and things are looking better. The gluster config now stays in place after a reboot.
>>>
>>> As a workaround for getting glusterd running on boot, I added "service glusterd start" to /etc/rc.local and ran persist /etc/rc.local. It appears to be working, but it feels like a bit of a hack.
>>>
>>> Does anyone have any other suggestions as to the correct way to do this?
>>
>> Hey Simon,
>>
>> I was also investigating both of the steps you took, and was about to recommend them :) They are more of a workaround, though.
>>
>> We basically need some patches to change the defaults on Node, to let gluster work out of the box.
>>
>> This would include persisting the correct paths and starting glusterd at boot if the admin has enabled it.
>>
>> - fabian
>>
>>> Thanks,
>>>
>>> Simon
>>>
>>> -----Original Message-----
>>> From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On
>>> Behalf Of Simon Barrett
>>> Sent: 28 May 2014 14:12
>>> To: Ryan Barry; Fabian Deutsch; Doron Fediuck
>>> Cc: users at ovirt.org
>>> Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt
>>> node
>>>
>>> Thanks for the replies.
>>>
>>> I cannot get glusterd to start on boot and I lose all gluster config every reboot.
>>>
>>> The following shows what I did on the node to start glusterd, create a volume etc, followed by the state of the node after a reboot.
>>>
>>>
>>> [root@ovirt_node]# service glusterd status
>>> glusterd is stopped
>>>
>>> [root@ovirt_node]# chkconfig --list glusterd
>>> glusterd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
>>>
>>> [root@ovirt_node]# service glusterd start
>>> Starting glusterd:                                         [  OK  ]
>>>
>>> gluster> volume create vmstore 10.22.8.46:/data/glusterfs/vmstore
>>> volume create: vmstore: success: please start the volume to access data
>>>
>>> gluster> vol start vmstore
>>> volume start: vmstore: success
>>>
>>> gluster> vol info
>>> Volume Name: vmstore
>>> Type: Distribute
>>> Volume ID: 5bd01043-1352-4014-88ca-e632e264d088
>>> Status: Started
>>> Number of Bricks: 1
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.22.8.46:/data/glusterfs/vmstore
>>>
>>> [root@ovirt_node]# ls -1 /var/lib/glusterd/vols/vmstore/
>>> bricks
>>> cksum
>>> info
>>> node_state.info
>>> rbstate
>>> run
>>> trusted-vmstore-fuse.vol
>>> vmstore.10.22.8.46.data-glusterfs-vmstore.vol
>>> vmstore-fuse.vol
>>>
>>> [root@ovirt_node]# grep gluster /etc/rwtab.d/*
>>> /etc/rwtab.d/ovirt:files /var/lib/glusterd
>>>
>>> [root@ovirt_node]# chkconfig glusterd on
>>> [root@ovirt_node]# chkconfig --list glusterd
>>> glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
>>>
>>>
>>> ####################################
>>> I then reboot the node and see the following:
>>> ####################################
>>>
>>> [root@ovirt_node]# service glusterd status
>>> glusterd is stopped
>>>
>>> [root@ovirt_node]# chkconfig --list glusterd
>>> glusterd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
>>>
>>> [root@ovirt_node]# ls -l /var/lib/glusterd/vols/
>>> total 0
I believe that we intentionally do not start glusterd, since glusterfsd
is all that's required for the engine to manage volumes, but I could be
misremembering this, and I don't have any real objection to starting
glusterd at boot unless somebody speaks up against it.
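For anyone trying this in the meantime, here is a rough sketch of the
rc.local workaround Simon describes above, assuming the stock persist
tool on Node (the chkconfig symlinks under /etc/rc.d live on the
non-persisted root, so "chkconfig glusterd on" alone is lost at reboot):

# Workaround only, not a supported default: start glusterd from rc.local
# and persist the edited file so the change survives a reboot.
echo "service glusterd start" >> /etc/rc.local
persist /etc/rc.local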
>>>
>>> No more gluster volume configuration files.
>>>
>>> I've taken a look through http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes but I'm unsure what needs to be done to persist this configuration.
>>>
>>> To get glusterd to start on boot, do I need to manually persist /etc/rc* files?
>>>
>>> I see "files /var/lib/glusterd" mentioned in /etc/rwtab.d/ovirt. Is this a list of the files/dirs that should be persisted automatically? If so, is it recursive and should it include everything in /var/lib/glusterd/vols?
rwtab is a mechanism from readonly-root, which walks through the
filesystem and says "copy these files to
/var/lib/stateless/writable/${path} and bind-mount them back into their
original location." So you can write files there, but they don't survive
reboots on Node.
Since Node boots from the same ramdisk every time (essentially the
ISO copied to the hard drive), this mechanism doesn't really work for
us, and persistence is a different mechanism entirely.
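To make the distinction concrete, a sketch (paths from memory, so treat
them as an assumption rather than gospel):

# /etc/rwtab.d/ovirt entry (readonly-root): makes the path writable via a
# tmpfs copy + bind mount, but the contents are lost at reboot.
files	/var/lib/glusterd

# Node persistence (separate mechanism): copies the path to the /config
# partition and bind-mounts it back on every boot, so it survives reboots.
persist /var/lib/glusterd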
>>>
>>> TIA for any help with this.
>>>
>>> Simon
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: Ryan Barry [mailto:rbarry at redhat.com]
>>> Sent: 27 May 2014 14:01
>>> To: Fabian Deutsch; Doron Fediuck; Simon Barrett
>>> Cc: users at ovirt.org
>>> Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt
>>> node
>>>
>>> On 05/26/2014 04:14 AM, Fabian Deutsch wrote:
>>>> On Sunday, 25.05.2014 at 08:18 -0400, Doron Fediuck wrote:
>>>>>
>>>>> ----- Original Message -----
>>>>>> From: "Simon Barrett" <Simon.Barrett at tradingscreen.com>
>>>>>> To: users at ovirt.org
>>>>>> Sent: Friday, May 23, 2014 11:29:39 AM
>>>>>> Subject: [ovirt-users] Persisting glusterfs configs on an oVirt
>>>>>> node
>>>>>>
>>>>>>
>>>>>>
>>>>>> I am working through the setup of oVirt node for a 3.4.1 deployment.
>>>>>>
>>>>>>
>>>>>>
>>>>>> I set up some glusterfs volumes/bricks on oVirt Node Hypervisor
>>>>>> release 3.0.4
>>>>>> (1.0.201401291204.el6) and created a storage domain. All was
>>>>>> working OK until I rebooted the node and found that the glusterfs
>>>>>> configuration had not been retained.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Is there something I should be doing to persist any glusterfs
>>>>>> configuration so it survives a node reboot?
>>>>>>
>>>>>>
>>>>>>
>>>>>> Many thanks,
>>>>>>
>>>>>>
>>>>>>
>>>>>> Simon
>>>>>>
>>>>>
>>>>> Hi Simon,
>>>>> it actually sounds like a bug to me, as Node is supposed to
>>>>> support gluster.
>>>>>
>>>>> Ryan / Fabian- thoughts?
>>>>
>>>> Hey,
>>>>
>>>> I vaguely remember that we were seeing a bug like this some time ago.
>>>> We fixed /var/lib/glusterd to be writable (using tmpfs), but it may
>>>> actually be that we also need to persist those contents.
>>>>
>>>> But Simon, can you give details which configuration files are
>>>> missing and why glusterd is not starting?
>>> Is glusterd starting? I'm getting the impression that it's starting, but that it has no configuration. As far as I know, Gluster keeps most of the configuration on the brick itself, but it finds brick information in /var/lib/glusterd.
>>>
>>> The last patch simply opened the firewall, and it's entirely possible that we need to persist this. It may be a good idea to just persist the entire directory from the get-go, unless we want to try to have a thread watching /var/lib/glusterd for relevant files, but then we're stuck trying to keep up with what's happening with gluster itself...
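Until the defaults change, persisting the directory by hand on an
installed node, and checking that it took, is probably the pragmatic
route. A rough sketch, assuming persisted copies show up under /config
as on current Node builds:

persist /var/lib/glusterd
ls /config/var/lib/glusterd    # the persisted copy should appear here
# unpersist /var/lib/glusterd  # undoes the change later, if needed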
>>>
>>> Can we
>>>>
>>>> Thanks
>>>> fabian
>>>>
>>>>> Either way I suggest you take a look at the link below:
>>>>> http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes
>>>>>
>>>>> Let us know how it works.
>>>>>
>>>>> Doron
>>>>
>>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
>