From Simon.Barrett at tradingscreen.com Fri May 23 04:29:43 2014
From: Simon Barrett
To: users at ovirt.org
Subject: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Fri, 23 May 2014 08:29:39 +0000

I am working through the setup of oVirt Node for a 3.4.1 deployment.

I set up some glusterfs volumes/bricks on oVirt Node Hypervisor release
3.0.4 (1.0.201401291204.el6) and created a storage domain. All was
working OK until I rebooted the node and found that the glusterfs
configuration had not been retained.

Is there something I should be doing to persist any glusterfs
configuration so it survives a node reboot?

Many thanks,

Simon
From dfediuck at redhat.com Sun May 25 08:19:02 2014
From: Doron Fediuck
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Sun, 25 May 2014 08:18:58 -0400

> From: "Simon Barrett"
> Sent: Friday, May 23, 2014 11:29:39 AM
>
> Is there something I should be doing to persist any glusterfs
> configuration so it survives a node reboot?

Hi Simon,
it actually sounds like a bug to me, as Node is supposed to support
gluster.

Ryan / Fabian - thoughts?

Either way, I suggest you take a look at the link below:
http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes

Let us know how it works.

Doron

From fdeutsch at redhat.com Mon May 26 04:14:48 2014
From: Fabian Deutsch
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Mon, 26 May 2014 10:14:42 +0200

On Sunday, 25.05.2014 at 08:18 -0400, Doron Fediuck wrote:
> Hi Simon,
> it actually sounds like a bug to me, as Node is supposed to support
> gluster.
>
> Ryan / Fabian - thoughts?
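For readers following the wiki link above: oVirt Node keeps most of the
root filesystem read-only and discards unlisted changes at reboot; the
`persist` tool copies a path onto the writable config partition and
bind-mounts it back in place. A minimal sketch of the idea, with the
assumption (worth verifying on your image) that persisted paths are
recorded in /config/files; the guard makes it a no-op off-node:

```shell
# Persist the gluster daemon state directory so volume definitions
# survive a reboot. On stock oVirt Node this path lives on tmpfs.
TARGET=/var/lib/glusterd

if command -v persist >/dev/null 2>&1; then
    # Copies TARGET to the /config partition and bind-mounts it back.
    persist "$TARGET"
    # Persisted paths are (assumed to be) listed in /config/files.
    grep -q "$TARGET" /config/files && echo "persisted: $TARGET"
else
    echo "persist not available (not an oVirt Node image); skipping"
fi
```

This is a sketch of the mechanism the wiki page describes, not a
supported recipe; `persist` exists only on oVirt Node images.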
Hey,

I vaguely remember that we were seeing a bug like this some time ago.
We fixed /var/lib/glusterd to be writable (using tmpfs), but it may
actually be that we need to persist those contents.

But Simon, can you give details on which configuration files are
missing and why glusterd is not starting?

Thanks
fabian

From rbarry at redhat.com Tue May 27 09:01:05 2014
From: Ryan Barry
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Tue, 27 May 2014 09:00:58 -0400

On 05/26/2014 04:14 AM, Fabian Deutsch wrote:
> I vaguely remember that we were seeing a bug like this some time ago.
> We fixed /var/lib/glusterd to be writable (using tmpfs), but it may
> actually be that we need to persist those contents.
>
> But Simon, can you give details on which configuration files are
> missing and why glusterd is not starting?

Is glusterd starting? I'm getting the impression that it's starting,
but that it has no configuration. As far as I know, Gluster keeps most
of the configuration on the brick itself, but it finds brick
information in /var/lib/glusterd.

The last patch simply opened the firewall, and it's entirely possible
that we need to persist this. It may be a good idea to just persist
the entire directory from the get-go, unless we want to try to keep a
thread watching /var/lib/glusterd for relevant files, but then we're
stuck trying to keep up with what's happening with gluster itself.
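The "watching /var/lib/glusterd for relevant files" idea above could
look roughly like the following polling sketch. It is illustrative
only: a temp directory stands in for /var/lib/glusterd so it runs
anywhere, and a real node-side implementation would use inotify rather
than snapshots.

```shell
# Detect files appearing under a state directory so they could be
# handed to `persist`. Snapshot the file list before and after.
STATE_DIR=$(mktemp -d)
before=$(mktemp); after=$(mktemp)

find "$STATE_DIR" -type f | sort > "$before"   # snapshot 1
touch "$STATE_DIR/vmstore.info"                # simulate gluster writing config
find "$STATE_DIR" -type f | sort > "$after"    # snapshot 2

# Lines present only in the second snapshot are new files to persist.
new_files=$(comm -13 "$before" "$after")
echo "would persist: $new_files"
```

As the message notes, this chases gluster's own behavior, which is why
persisting the whole directory up front is the simpler design.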
From Simon.Barrett at tradingscreen.com Wed May 28 09:12:24 2014
From: Simon Barrett
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Wed, 28 May 2014 13:12:02 +0000

Thanks for the replies.

I cannot get glusterd to start on boot and I lose all gluster config on
every reboot.

The following shows what I did on the node to start glusterd, create a
volume etc., followed by the state of the node after a reboot.

[root@ovirt_node]# service glusterd status
glusterd is stopped

[root@ovirt_node]# chkconfig --list glusterd
glusterd  0:off  1:off  2:off  3:off  4:off  5:off  6:off

[root@ovirt_node]# service glusterd start
Starting glusterd: [  OK  ]

gluster> volume create vmstore 10.22.8.46:/data/glusterfs/vmstore
volume create: vmstore: success: please start the volume to access data

gluster> vol start vmstore
volume start: vmstore: success

gluster> vol info
Volume Name: vmstore
Type: Distribute
Volume ID: 5bd01043-1352-4014-88ca-e632e264d088
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.22.8.46:/data/glusterfs/vmstore

[root@ovirt_node]# ls /var/lib/glusterd/vols/vmstore/
bricks  cksum  info  node_state.info  rbstate  run
trusted-vmstore-fuse.vol
vmstore.10.22.8.46.data-glusterfs-vmstore.vol
vmstore-fuse.vol

[root@ovirt_node]# grep gluster /etc/rwtab.d/*
/etc/rwtab.d/ovirt:files /var/lib/glusterd

[root@ovirt_node]# chkconfig glusterd on
[root@ovirt_node]# chkconfig --list glusterd
glusterd  0:off  1:off  2:on  3:on  4:on  5:on  6:off

####################################
I then reboot the node and see the following:
####################################

[root@ovirt_node]# service glusterd status
glusterd is stopped

[root@ovirt_node]# chkconfig --list glusterd
glusterd  0:off  1:off  2:off  3:off  4:off  5:off  6:off

[root@ovirt_node]# ls -l /var/lib/glusterd/vols/
total 0

No more gluster volume configuration files.

I've taken a look through
http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes
but I'm unsure what needs to be done to persist this configuration.

To get glusterd to start on boot, do I need to manually persist
/etc/rc* files?

I see "files /var/lib/glusterd" mentioned in /etc/rwtab.d/ovirt. Is
this a list of the files/dirs that should be persisted automatically?
If so, is it recursive and should it include everything in
/var/lib/glusterd/vols?

TIA for any help with this.

Simon
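On the rwtab question above: in EL6's readonly-root support, rwtab
entries are "type path" pairs, and the `files` type recursively copies
the existing tree into tmpfs at boot. That makes the path writable
(seeded from the image) but does not persist anything, which matches
the symptom in the transcript: volumes created at runtime vanish at
reboot. A runnable sketch of the entry format, using a temp file in
place of /etc/rwtab.d/ovirt:

```shell
# Parse rwtab-style entries (as used by readonly-root on EL6).
# "empty" = fresh tmpfs; "dirs" = directory skeleton only;
# "files"  = recursive copy of the tree into tmpfs (writable, volatile).
rwtab=$(mktemp)
cat > "$rwtab" <<'EOF'
files /var/lib/glusterd
empty /var/cache/example
EOF

while read -r type path; do
    echo "mount tmpfs over $path (mode: $type)"
done < "$rwtab"
```

So rwtab is not a persistence list; persistence on oVirt Node goes
through the separate `persist` mechanism.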
From Simon.Barrett at tradingscreen.com Wed May 28 10:14:48 2014
From: Simon Barrett
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Wed, 28 May 2014 14:14:26 +0000

I did a "persist /var/lib/glusterd" and things are looking better. The
gluster config is now still in place after a reboot.

As a workaround to get glusterd running on boot, I added "service
glusterd start" to /etc/rc.local and ran "persist /etc/rc.local". It
appears to be working, but feels like a bit of a hack.

Does anyone have any other suggestions as to the correct way to do
this?

Thanks,

Simon

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

From fdeutsch at redhat.com Wed May 28 10:19:50 2014
From: Fabian Deutsch
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Wed, 28 May 2014 16:19:47 +0200

On Wednesday, 28.05.2014 at 14:14 +0000, Simon Barrett wrote:
> I did a "persist /var/lib/glusterd" and things are looking better.
> The gluster config is now still in place after a reboot.
>
> As a workaround to get glusterd running on boot, I added "service
> glusterd start" to /etc/rc.local and ran "persist /etc/rc.local". It
> appears to be working, but feels like a bit of a hack.
>
> Does anyone have any other suggestions as to the correct way to do
> this?

Hey Simon,

I was also investigating both the steps you did, and was about to
recommend them :) They are more of a workaround.

We basically need some patches to change the defaults on Node to let
gluster work out of the box. This would include persisting the correct
paths and enabling glusterd if it was enabled.

- fabian
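The rc.local workaround above can be made idempotent so that re-running
it (or re-persisting the file) never stacks duplicate lines. This
sketch writes to a temp file so it runs anywhere; on the node the path
would be /etc/rc.local, followed by `persist /etc/rc.local`:

```shell
# Append the glusterd start line to an rc.local-style file only if it
# is not already there; run the check twice to show no duplicate.
RC_LOCAL=$(mktemp)
echo "#!/bin/sh" > "$RC_LOCAL"

line="service glusterd start"
grep -qxF "$line" "$RC_LOCAL" || echo "$line" >> "$RC_LOCAL"
grep -qxF "$line" "$RC_LOCAL" || echo "$line" >> "$RC_LOCAL"  # no-op

echo "glusterd start lines: $(grep -cF "$line" "$RC_LOCAL")"
```

Still a hack, as noted in the thread; the proper fix is for Node to
ship glusterd enabled with the right paths persisted by default.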
> = > I cannot get glusterd to start on boot and I lose all gluster config ever= y reboot. = > = > The following shows what I did on the node to start glusterd, create a vo= lume etc, followed by the state of the node after a reboot. = > = > = > [root(a)ovirt_node]# service glusterd status glusterd is stopped > = > [root(a)ovirt_node]# chkconfig --list glusterd > glusterd 0:off 1:off 2:off 3:off 4:off 5:off 6:off > = > [root(a)ovirt_node]# service glusterd start Starting glusterd:[ OK ] > = > gluster> volume create vmstore 10.22.8.46:/data/glusterfs/vmstore > volume create: vmstore: success: please start the volume to access data > = > gluster> vol start vmstore > volume start: vmstore: success > = > gluster> vol info > Volume Name: vmstore > Type: Distribute > Volume ID: 5bd01043-1352-4014-88ca-e632e264d088 > Status: Started > Number of Bricks: 1 > Transport-type: tcp > Bricks: > Brick1: 10.22.8.46:/data/glusterfs/vmstore > = > [root(a)ovirt_node]# ls -1 /var/lib/glusterd/vols/vmstore/ bricks node_st= ate.info trusted-vmstore-fuse.vol cksum = > rbstate = > vmstore.10.22.8.46.data-glusterfs-vmstore.vol > info = > run = > vmstore-fuse.vol > = > [root(a)ovirt_node]# grep gluster /etc/rwtab.d/* > /etc/rwtab.d/ovirt:files /var/lib/glusterd > = > [root(a)ovirt_node]# chkconfig glusterd on [root(a)ovirt_node]# chkconfig= --list glusterd > glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off > = > = > #################################### > I then reboot the node and see the following: > #################################### > = > [root(a)ovirt_node]# service glusterd status glusterd is stopped > = > [root(a)ovirt_node]# chkconfig --list glusterd > glusterd 0:off 1:off 2:off 3:off 4:off 5:off 6:off > = > [root(a)ovirt_node]# ls -l /var/lib/glusterd/vols/ total 0 > = > No more gluster volume configuration files. 
> = > I've taken a look through http://www.ovirt.org/Node_Troubleshooting#Makin= g_changes_last_.2F_Persisting_changes but I'm unsure what needs to be done = to persist this configuration. > = > To get glusterd to start on boot, do I need to manually persist /etc/rc* = files? > = > I see "files /var/lib/glusterd" mentioned in /etc/rwtab.d/ovirt. Is this = a list of the files/dirs that should be persisted automatically? If so, is = it recursive and should it include everything in /var/lib/glusterd/vols? > = > = > TIA for any help with this. > = > Simon > = > = > = > -----Original Message----- > From: Ryan Barry [mailto:rbarry(a)redhat.com] > Sent: 27 May 2014 14:01 > To: Fabian Deutsch; Doron Fediuck; Simon Barrett > Cc: users(a)ovirt.org > Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node > = > On 05/26/2014 04:14 AM, Fabian Deutsch wrote: > > Am Sonntag, den 25.05.2014, 08:18 -0400 schrieb Doron Fediuck: > >> > >> ----- Original Message ----- > >>> From: "Simon Barrett" > >>> To: users(a)ovirt.org > >>> Sent: Friday, May 23, 2014 11:29:39 AM > >>> Subject: [ovirt-users] Persisting glusterfs configs on an oVirt node > >>> > >>> > >>> > >>> I am working through the setup of oVirt node for a 3.4.1 deployment. > >>> > >>> > >>> > >>> I setup some glusterfs volumes/bricks on oVirt Node Hypervisor = > >>> release 3.0.4 > >>> (1.0.201401291204.el6) and created a storage domain. All was working = > >>> OK until I rebooted the node and found that the glusterfs = > >>> configuration had not been retained. > >>> > >>> > >>> > >>> Is there something I should be doing to persist any glusterfs = > >>> configuration so it survives a node reboot? > >>> > >>> > >>> > >>> Many thanks, > >>> > >>> > >>> > >>> Simon > >>> > >> > >> Hi Simon, > >> it actually sounds like a bug to me, as node are supposed to support = > >> gluster. > >> > >> Ryan / Fabian- thoughts? > > > > Hey, > > > > I vaguely remember that we were seeing a bug like this some time ago. 
> > We fixed /var/lib/glusterd to be writable (using tmpfs), but it can = > > actually be that we need to persist those contents. > > > > But Simon, can you give details which configuration files are missing = > > and why glusterd is not starting? > Is glusterd starting? I'm getting the impression that it's starting, but = that it has no configuration. As far as I know, Gluster keeps most of the c= onfiguration on the brick itself, but it finds brick information in /var/li= b/glusterd. > = > The last patch simply opened the firewall, and it's entirely possible tha= t we need to persist this. It may be a good idea to just persist the entire= directory from the get-go, unless we want to try to have a thread watching= /var/lib/glusterd for relevant files, but then we're stuck trying to keep = up with what's happening with gluster itself... > = > Can we > > > > Thanks > > fabian > > > >> Either way I suggest you take a look in the below link- = > >> http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Per > >> sisting_changes > >> > >> Let s know how it works. > >> > >> Doron > > > > > = > _______________________________________________ > Users mailing list > Users(a)ovirt.org > http://lists.ovirt.org/mailman/listinfo/users --===============5323407058295484315==-- From Simon.Barrett at tradingscreen.com Wed May 28 10:22:42 2014 Content-Type: multipart/mixed; boundary="===============4355795473578731703==" MIME-Version: 1.0 From: Simon Barrett To: users at ovirt.org Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node Date: Wed, 28 May 2014 14:22:17 +0000 Message-ID: In-Reply-To: 1401286787.2734.2.camel@fdeutsch-laptop.local --===============4355795473578731703== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable I just wasn't sure if I was missing something in the configuration to enabl= e this. I'll stick with the workarounds I have for now and see how it goes. = Thanks again. 
Simon -----Original Message----- From: Fabian Deutsch [mailto:fdeutsch(a)redhat.com] = Sent: 28 May 2014 15:20 To: Simon Barrett Cc: Ryan Barry; Doron Fediuck; users(a)ovirt.org Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node Am Mittwoch, den 28.05.2014, 14:14 +0000 schrieb Simon Barrett: > I did a "persist /var/lib/glusterd" and things are looking better. The gl= uster config is now still in place after a reboot. > = > As a workaround to getting glusterd running on boot, I added "service glu= sterd start" to /etc/rc.local and ran persist /etc/rc.local. It appears to = be working but feels like a bit of a hack. > = > Does anyone have any other suggestions as to the correct way to do this? Hey Simon, I was also investigating both the steps you did. And was also about to reco= mmend them :) They are more a workaround. We basically need some patches to change the defaults on Node, to let glust= er work out of the box. This would include persisting the correct paths and enabling glusterd if en= abled. - fabian > Thanks, > = > Simon > = > -----Original Message----- > From: users-bounces(a)ovirt.org [mailto:users-bounces(a)ovirt.org] On = > Behalf Of Simon Barrett > Sent: 28 May 2014 14:12 > To: Ryan Barry; Fabian Deutsch; Doron Fediuck > Cc: users(a)ovirt.org > Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt = > node > = > Thanks for the replies. > = > I cannot get glusterd to start on boot and I lose all gluster config ever= y reboot. = > = > The following shows what I did on the node to start glusterd, create a vo= lume etc, followed by the state of the node after a reboot. 
>
> [root(a)ovirt_node]# service glusterd status
> glusterd is stopped
>
> [root(a)ovirt_node]# chkconfig --list glusterd
> glusterd       0:off   1:off   2:off   3:off   4:off   5:off   6:off
>
> [root(a)ovirt_node]# service glusterd start
> Starting glusterd: [  OK  ]
>
> gluster> volume create vmstore 10.22.8.46:/data/glusterfs/vmstore
> volume create: vmstore: success: please start the volume to access data
>
> gluster> vol start vmstore
> volume start: vmstore: success
>
> gluster> vol info
> Volume Name: vmstore
> Type: Distribute
> Volume ID: 5bd01043-1352-4014-88ca-e632e264d088
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 10.22.8.46:/data/glusterfs/vmstore
>
> [root(a)ovirt_node]# ls -1 /var/lib/glusterd/vols/vmstore/
> bricks
> cksum
> info
> node_state.info
> rbstate
> run
> trusted-vmstore-fuse.vol
> vmstore-fuse.vol
> vmstore.10.22.8.46.data-glusterfs-vmstore.vol
>
> [root(a)ovirt_node]# grep gluster /etc/rwtab.d/*
> /etc/rwtab.d/ovirt:files /var/lib/glusterd
>
> [root(a)ovirt_node]# chkconfig glusterd on
> [root(a)ovirt_node]# chkconfig --list glusterd
> glusterd       0:off   1:off   2:on    3:on    4:on    5:on    6:off
>
> ####################################
> I then reboot the node and see the following:
> ####################################
>
> [root(a)ovirt_node]# service glusterd status
> glusterd is stopped
>
> [root(a)ovirt_node]# chkconfig --list glusterd
> glusterd       0:off   1:off   2:off   3:off   4:off   5:off   6:off
>
> [root(a)ovirt_node]# ls -l /var/lib/glusterd/vols/
> total 0
>
> No more gluster volume configuration files.
>
> I've taken a look through http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes but I'm unsure what needs to be done to persist this configuration.
>
> To get glusterd to start on boot, do I need to manually persist /etc/rc* files?
>
> I see "files /var/lib/glusterd" mentioned in /etc/rwtab.d/ovirt.
> Is this a list of the files/dirs that should be persisted automatically? If so, is it recursive and should it include everything in /var/lib/glusterd/vols?
>
> TIA for any help with this.
>
> Simon
>
> -----Original Message-----
> From: Ryan Barry [mailto:rbarry(a)redhat.com]
> Sent: 27 May 2014 14:01
> To: Fabian Deutsch; Doron Fediuck; Simon Barrett
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
>
> On 05/26/2014 04:14 AM, Fabian Deutsch wrote:
> > On Sunday, 25.05.2014 at 08:18 -0400, Doron Fediuck wrote:
> >>
> >> ----- Original Message -----
> >>> From: "Simon Barrett"
> >>> To: users(a)ovirt.org
> >>> Sent: Friday, May 23, 2014 11:29:39 AM
> >>> Subject: [ovirt-users] Persisting glusterfs configs on an oVirt node
> >>>
> >>> I am working through the setup of oVirt node for a 3.4.1 deployment.
> >>>
> >>> I setup some glusterfs volumes/bricks on oVirt Node Hypervisor release 3.0.4
> >>> (1.0.201401291204.el6) and created a storage domain. All was working OK until I rebooted the node and found that the glusterfs configuration had not been retained.
> >>>
> >>> Is there something I should be doing to persist any glusterfs configuration so it survives a node reboot?
> >>>
> >>> Many thanks,
> >>>
> >>> Simon
> >>>
> >>
> >> Hi Simon,
> >> it actually sounds like a bug to me, as nodes are supposed to support gluster.
> >>
> >> Ryan / Fabian - thoughts?
> >
> > Hey,
> >
> > I vaguely remember that we were seeing a bug like this some time ago.
> > We fixed /var/lib/glusterd to be writable (using tmpfs), but it can
> > actually be that we need to persist those contents.
> >
> > But Simon, can you give details which configuration files are missing
> > and why glusterd is not starting?
> Is glusterd starting?
I'm getting the impression that it's starting, but that it has no configuration. As far as I know, Gluster keeps most of the configuration on the brick itself, but it finds brick information in /var/lib/glusterd.
>
> The last patch simply opened the firewall, and it's entirely possible that we need to persist this. It may be a good idea to just persist the entire directory from the get-go, unless we want to try to have a thread watching /var/lib/glusterd for relevant files, but then we're stuck trying to keep up with what's happening with gluster itself...
>
> Can we
> > Thanks
> > fabian
> >
> >> Either way I suggest you take a look in the below link -
> >> http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes
> >>
> >> Let us know how it works.
> >>
> >> Doron
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From fdeutsch at redhat.com Wed May 28 10:23:29 2014
From: Fabian Deutsch
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Wed, 28 May 2014 16:23:25 +0200
Message-ID: <1401287005.2734.3.camel@fdeutsch-laptop.local>
In-Reply-To: D86C48DF8800164BBE50B87623F7AC95483786B1@ln2-wio-001.dev.tradingscreen.com

On Wednesday, 28.05.2014 at 14:22 +0000, Simon Barrett wrote:
> I just wasn't sure if I was missing something in the configuration to enable this.
>
> I'll stick with the workarounds I have for now and see how it goes.

Hey Simon,

it would be great if you could let us know about more issues, which will help us to improve the situation even more.
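On the rwtab question raised earlier in the thread: on an EL6 readonly-root system, entries under /etc/rwtab and /etc/rwtab.d only control which paths get a writable tmpfs copy at boot; they do not persist anything across reboots. A hedged sketch of the entry the thread found, with the entry-type semantics summarized from stock readonly-root behavior (worth verifying against rc.sysinit on the node):

```
# /etc/rwtab.d/ovirt (excerpt shown earlier in the thread)
#
# rwtab entry types (readonly-root, EL6):
#   empty <path>   create an empty writable path in tmpfs
#   dirs  <path>   recreate the directory tree in tmpfs, without contents
#   files <path>   copy the path and its contents into tmpfs (recursive)
#
# All three are tmpfs-backed, so the path becomes writable for the
# running system but reverts at reboot; surviving a reboot needs the
# separate oVirt Node "persist" mechanism.
files /var/lib/glusterd
```

That would explain the symptom seen here: /var/lib/glusterd is writable (so volume creation works), but its contents are recreated from the read-only image at every boot.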
Greetings
fabian

From fdeutsch at redhat.com Wed May 28 10:23:47 2014
From: Fabian Deutsch
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Wed, 28 May 2014 16:23:42 +0200
Message-ID: <1401287022.2734.4.camel@fdeutsch-laptop.local>
In-Reply-To: D86C48DF8800164BBE50B87623F7AC95483786B1@ln2-wio-001.dev.tradingscreen.com

On Wednesday, 28.05.2014 at 14:22 +0000, Simon Barrett wrote:
> I just wasn't sure if I was missing something in the configuration to enable this.
>
> I'll stick with the workarounds I have for now and see how it goes.
>
> Thanks again.

You are welcome! :)

From dfediuck at redhat.com Wed May 28 11:16:52 2014
From: Doron Fediuck
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Wed, 28 May 2014 11:16:47 -0400
Message-ID: <989816671.16475342.1401290207716.JavaMail.zimbra@redhat.com>
In-Reply-To: D86C48DF8800164BBE50B87623F7AC95483786B1@ln2-wio-001.dev.tradingscreen.com

----- Original Message -----
> From: "Simon Barrett"
> To: "Fabian Deutsch"
> Cc: "Ryan Barry", "Doron Fediuck", users(a)ovirt.org
> Sent: Wednesday, May
28, 2014 5:22:17 PM
> Subject: RE: [ovirt-users] Persisting glusterfs configs on an oVirt node
>
> I just wasn't sure if I was missing something in the configuration to enable this.
>
> I'll stick with the workarounds I have for now and see how it goes.
>
> Thanks again.
>
> Simon

I suggest you open a bug so we'll be able to track this issue properly rather than using hacks.
Thanks, Doron --===============2683032125864737668==-- From Simon.Barrett at tradingscreen.com Wed May 28 11:18:25 2014 Content-Type: multipart/mixed; boundary="===============2683053053966818361==" MIME-Version: 1.0 From: Simon Barrett To: users at ovirt.org Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node Date: Wed, 28 May 2014 15:18:01 +0000 Message-ID: In-Reply-To: 989816671.16475342.1401290207716.JavaMail.zimbra@redhat.com --===============2683053053966818361== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Will do. Thanks -----Original Message----- From: Doron Fediuck [mailto:dfediuck(a)redhat.com] = Sent: 28 May 2014 16:17 To: Simon Barrett Cc: Fabian Deutsch; Ryan Barry; users(a)ovirt.org Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node ----- Original Message ----- > From: "Simon Barrett" > To: "Fabian Deutsch" > Cc: "Ryan Barry" , "Doron Fediuck" = > , users(a)ovirt.org > Sent: Wednesday, May 28, 2014 5:22:17 PM > Subject: RE: [ovirt-users] Persisting glusterfs configs on an oVirt = > node > = > I just wasn't sure if I was missing something in the configuration to = > enable this. > = > I'll stick with the workarounds I have for now and see how it goes. > = > Thanks again. > = > Simon > = > -----Original Message----- > From: Fabian Deutsch [mailto:fdeutsch(a)redhat.com] > Sent: 28 May 2014 15:20 > To: Simon Barrett > Cc: Ryan Barry; Doron Fediuck; users(a)ovirt.org > Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt = > node > = > Am Mittwoch, den 28.05.2014, 14:14 +0000 schrieb Simon Barrett: > > I did a "persist /var/lib/glusterd" and things are looking better. = > > The gluster config is now still in place after a reboot. > > = > > As a workaround to getting glusterd running on boot, I added = > > "service glusterd start" to /etc/rc.local and ran persist = > > /etc/rc.local. It appears to be working but feels like a bit of a hack. 
> > = > > Does anyone have any other suggestions as to the correct way to do this? > = > Hey Simon, > = > I was also investigating both the steps you did. And was also about to = > recommend them :) They are more a workaround. > = > We basically need some patches to change the defaults on Node, to let = > gluster work out of the box. > = > This would include persisting the correct paths and enabling glusterd = > if enabled. > = > - fabian > = > > Thanks, > > = > > Simon > > = > > -----Original Message----- > > From: users-bounces(a)ovirt.org [mailto:users-bounces(a)ovirt.org] On = > > Behalf Of Simon Barrett > > Sent: 28 May 2014 14:12 > > To: Ryan Barry; Fabian Deutsch; Doron Fediuck > > Cc: users(a)ovirt.org > > Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt = > > node > > = > > Thanks for the replies. > > = > > I cannot get glusterd to start on boot and I lose all gluster config = > > every reboot. > > = > > The following shows what I did on the node to start glusterd, create = > > a volume etc, followed by the state of the node after a reboot. 
> > = > > = > > [root(a)ovirt_node]# service glusterd status glusterd is stopped > > = > > [root(a)ovirt_node]# chkconfig --list glusterd > > glusterd 0:off 1:off 2:off 3:off 4:off 5:off 6:off > > = > > [root(a)ovirt_node]# service glusterd start Starting glusterd:[ OK ] > > = > > gluster> volume create vmstore 10.22.8.46:/data/glusterfs/vmstore > > volume create: vmstore: success: please start the volume to access = > > data > > = > > gluster> vol start vmstore > > volume start: vmstore: success > > = > > gluster> vol info > > Volume Name: vmstore > > Type: Distribute > > Volume ID: 5bd01043-1352-4014-88ca-e632e264d088 > > Status: Started > > Number of Bricks: 1 > > Transport-type: tcp > > Bricks: > > Brick1: 10.22.8.46:/data/glusterfs/vmstore > > = > > [root(a)ovirt_node]# ls -1 /var/lib/glusterd/vols/vmstore/ bricks = > > node_state.info trusted-vmstore-fuse.vol cksum rbstate = > > vmstore.10.22.8.46.data-glusterfs-vmstore.vol > > info > > run > > vmstore-fuse.vol > > = > > [root(a)ovirt_node]# grep gluster /etc/rwtab.d/* > > /etc/rwtab.d/ovirt:files /var/lib/glusterd > > = > > [root(a)ovirt_node]# chkconfig glusterd on [root(a)ovirt_node]# = > > chkconfig --list glusterd > > glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off > > = > > = > > #################################### > > I then reboot the node and see the following: > > #################################### > > = > > [root(a)ovirt_node]# service glusterd status glusterd is stopped > > = > > [root(a)ovirt_node]# chkconfig --list glusterd > > glusterd 0:off 1:off 2:off 3:off 4:off 5:off 6:off > > = > > [root(a)ovirt_node]# ls -l /var/lib/glusterd/vols/ total 0 > > = > > No more gluster volume configuration files. > > = > > I've taken a look through > > http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Pe > > rsisting_changes but I'm unsure what needs to be done to persist = > > this configuration. > > = > > To get glusterd to start on boot, do I need to manually persist = > > /etc/rc* files? 
> >
> > I see "files /var/lib/glusterd" mentioned in /etc/rwtab.d/ovirt. Is
> > this a list of the files/dirs that should be persisted automatically?
> > If so, is it recursive and should it include everything in /var/lib/glusterd/vols?
> >
> > TIA for any help with this.
> >
> > Simon
> >
> > -----Original Message-----
> > From: Ryan Barry [mailto:rbarry(a)redhat.com]
> > Sent: 27 May 2014 14:01
> > To: Fabian Deutsch; Doron Fediuck; Simon Barrett
> > Cc: users(a)ovirt.org
> > Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
> >
> > On 05/26/2014 04:14 AM, Fabian Deutsch wrote:
> > > On Sunday, 25.05.2014 at 08:18 -0400, Doron Fediuck wrote:
> > >>
> > >> ----- Original Message -----
> > >>> From: "Simon Barrett"
> > >>> To: users(a)ovirt.org
> > >>> Sent: Friday, May 23, 2014 11:29:39 AM
> > >>> Subject: [ovirt-users] Persisting glusterfs configs on an oVirt node
> > >>>
> > >>> I am working through the setup of oVirt node for a 3.4.1 deployment.
> > >>>
> > >>> I setup some glusterfs volumes/bricks on oVirt Node Hypervisor
> > >>> release 3.0.4 (1.0.201401291204.el6) and created a storage domain.
> > >>> All was working OK until I rebooted the node and found that the
> > >>> glusterfs configuration had not been retained.
> > >>>
> > >>> Is there something I should be doing to persist any glusterfs
> > >>> configuration so it survives a node reboot?
> > >>>
> > >>> Many thanks,
> > >>>
> > >>> Simon
> > >>>
> > >> Hi Simon,
> > >> it actually sounds like a bug to me, as Node is supposed to
> > >> support gluster.
> > >>
> > >> Ryan / Fabian- thoughts?
> > >
> > > Hey,
> > >
> > > I vaguely remember that we were seeing a bug like this some time ago.
> > > We fixed /var/lib/glusterd to be writable (using tmpfs), but it
> > > can actually be that we need to persist those contents.
> > >
> > > But Simon, can you give details which configuration files are
> > > missing and why glusterd is not starting?
> >
> > Is glusterd starting? I'm getting the impression that it's starting,
> > but that it has no configuration. As far as I know, Gluster keeps
> > most of the configuration on the brick itself, but it finds brick
> > information in /var/lib/glusterd.
> >
> > The last patch simply opened the firewall, and it's entirely
> > possible that we need to persist this. It may be a good idea to just
> > persist the entire directory from the get-go, unless we want to try
> > to have a thread watching /var/lib/glusterd for relevant files, but
> > then we're stuck trying to keep up with what's happening with gluster itself...
> >
> > Can we
> > >
> > > Thanks
> > > fabian
> > >
> > >> Either way I suggest you take a look in the below link-
> > >> http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes
> > >>
> > >> Let us know how it works.
> > >>
> > >> Doron
> > >

I suggest you open a bug so we'll be able to track this issue properly rather than using hacks.

Thanks,
Doron

From rbarry at redhat.com Wed May 28 11:20:22 2014
From: Ryan Barry
To: users at ovirt.org
Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
Date: Wed, 28 May 2014 11:20:20 -0400
Message-ID: <5385FEB4.3090005@redhat.com>
In-Reply-To: 1401287022.2734.4.camel@fdeutsch-laptop.local

On 05/28/2014 10:23 AM, Fabian Deutsch wrote:
> On Wednesday, 28.05.2014 at 14:22 +0000, Simon Barrett wrote:
>> I just wasn't sure if I was missing something in the configuration to enable this.
>>
>> I'll stick with the workarounds I have for now and see how it goes.
>>
>> Thanks again.
>
> You are welcome! :)
>
>> Simon
>>
>> -----Original Message-----
>> From: Fabian Deutsch [mailto:fdeutsch(a)redhat.com]
>> Sent: 28 May 2014 15:20
>> To: Simon Barrett
>> Cc: Ryan Barry; Doron Fediuck; users(a)ovirt.org
>> Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
>>
>> On Wednesday, 28.05.2014 at 14:14 +0000, Simon Barrett wrote:
>>> I did a "persist /var/lib/glusterd" and things are looking better. The gluster config is now still in place after a reboot.
>>>
>>> As a workaround to getting glusterd running on boot, I added "service glusterd start" to /etc/rc.local and ran "persist /etc/rc.local". It appears to be working but feels like a bit of a hack.
>>>
>>> Does anyone have any other suggestions as to the correct way to do this?
>>
>> Hey Simon,
>>
>> I was also investigating both the steps you did, and was also about to recommend them :) They are more of a workaround.
>>
>> We basically need some patches to change the defaults on Node, to let gluster work out of the box.
>>
>> This would include persisting the correct paths and enabling glusterd if enabled.
>>
>> - fabian
>>
>>> Thanks,
>>>
>>> Simon
>>>
>>> -----Original Message-----
>>> From: users-bounces(a)ovirt.org [mailto:users-bounces(a)ovirt.org] On
>>> Behalf Of Simon Barrett
>>> Sent: 28 May 2014 14:12
>>> To: Ryan Barry; Fabian Deutsch; Doron Fediuck
>>> Cc: users(a)ovirt.org
>>> Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
>>>
>>> Thanks for the replies.
>>>
>>> I cannot get glusterd to start on boot and I lose all gluster config every reboot.
>>>
>>> The following shows what I did on the node to start glusterd, create a volume etc., followed by the state of the node after a reboot.
>>>
>>> [root(a)ovirt_node]# service glusterd status
>>> glusterd is stopped
>>>
>>> [root(a)ovirt_node]# chkconfig --list glusterd
>>> glusterd    0:off  1:off  2:off  3:off  4:off  5:off  6:off
>>>
>>> [root(a)ovirt_node]# service glusterd start
>>> Starting glusterd: [ OK ]
>>>
>>> gluster> volume create vmstore 10.22.8.46:/data/glusterfs/vmstore
>>> volume create: vmstore: success: please start the volume to access data
>>>
>>> gluster> vol start vmstore
>>> volume start: vmstore: success
>>>
>>> gluster> vol info
>>> Volume Name: vmstore
>>> Type: Distribute
>>> Volume ID: 5bd01043-1352-4014-88ca-e632e264d088
>>> Status: Started
>>> Number of Bricks: 1
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.22.8.46:/data/glusterfs/vmstore
>>>
>>> [root(a)ovirt_node]# ls -1 /var/lib/glusterd/vols/vmstore/
>>> bricks
>>> node_state.info
>>> trusted-vmstore-fuse.vol
>>> cksum
>>> rbstate
>>> vmstore.10.22.8.46.data-glusterfs-vmstore.vol
>>> info
>>> run
>>> vmstore-fuse.vol
>>>
>>> [root(a)ovirt_node]# grep gluster /etc/rwtab.d/*
>>> /etc/rwtab.d/ovirt:files /var/lib/glusterd
>>>
>>> [root(a)ovirt_node]# chkconfig glusterd on
>>> [root(a)ovirt_node]# chkconfig --list glusterd
>>> glusterd    0:off  1:off  2:on  3:on  4:on  5:on  6:off
>>>
>>>
>>> ####################################
>>> I then reboot the node and see the following:
>>> ####################################
>>>
>>> [root(a)ovirt_node]# service glusterd status
>>> glusterd is stopped
>>>
>>> [root(a)ovirt_node]# chkconfig --list glusterd
>>> glusterd    0:off  1:off  2:off  3:off  4:off  5:off  6:off
>>>
>>> [root(a)ovirt_node]# ls -l /var/lib/glusterd/vols/
>>> total 0

I believe that we intentionally do not start glusterd, since glusterfsd
is all that's required for the engine to manage volumes, but I could be
mis-remembering this, and I don't have any real objections to starting
glusterd at boot unless somebody speaks up against it.

>>>
>>> No more gluster volume configuration files.
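[Editorial sketch: the workaround Simon describes earlier in the thread (persist the gluster state directory, plus an rc.local entry to start glusterd at boot) could look roughly like the following. `persist` is oVirt Node's own tool and only exists on a node image, so this guards for it; the exact paths and service names are assumptions taken from the thread, not verified against any particular Node release.]

```shell
# Workaround sketch, assuming oVirt Node's `persist` tool and a SysV
# glusterd service as discussed in this thread.
if command -v persist >/dev/null 2>&1; then
    # Keep gluster volume metadata across reboots
    persist /var/lib/glusterd
    # chkconfig state is not retained on Node, so start glusterd from rc.local
    grep -qs 'service glusterd start' /etc/rc.local || \
        echo 'service glusterd start' >> /etc/rc.local
    # Keep the rc.local change as well
    persist /etc/rc.local
    status="persisted"
else
    status="skipped: persist not available (not an oVirt Node)"
fi
echo "$status"
```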
>>>
>>> I've taken a look through
>>> http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes
>>> but I'm unsure what needs to be done to persist this configuration.
>>>
>>> To get glusterd to start on boot, do I need to manually persist /etc/rc* files?
>>>
>>> I see "files /var/lib/glusterd" mentioned in /etc/rwtab.d/ovirt. Is this a list of the files/dirs that should be persisted automatically? If so, is it recursive and should it include everything in /var/lib/glusterd/vols?

rwtab is a mechanism from readonly-root, which walks through the
filesystem and says "copy these files to
/var/lib/stateless/writable/${path} and bind mount them back in their
original location". So you can write files there, but they don't survive
reboots on Node.

Since Node is booting from the same ramdisk every time (essentially the
ISO copied to the hard drive), this mechanism doesn't really work for
us, and persistence is a different mechanism entirely.

>>>
>>> TIA for any help with this.
>>>
>>> Simon
>>>
>>> -----Original Message-----
>>> From: Ryan Barry [mailto:rbarry(a)redhat.com]
>>> Sent: 27 May 2014 14:01
>>> To: Fabian Deutsch; Doron Fediuck; Simon Barrett
>>> Cc: users(a)ovirt.org
>>> Subject: Re: [ovirt-users] Persisting glusterfs configs on an oVirt node
>>>
>>> On 05/26/2014 04:14 AM, Fabian Deutsch wrote:
>>>> On Sunday, 25.05.2014 at 08:18 -0400, Doron Fediuck wrote:
>>>>>
>>>>> ----- Original Message -----
>>>>>> From: "Simon Barrett"
>>>>>> To: users(a)ovirt.org
>>>>>> Sent: Friday, May 23, 2014 11:29:39 AM
>>>>>> Subject: [ovirt-users] Persisting glusterfs configs on an oVirt node
>>>>>>
>>>>>> I am working through the setup of oVirt node for a 3.4.1 deployment.
>>>>>>
>>>>>> I setup some glusterfs volumes/bricks on oVirt Node Hypervisor
>>>>>> release 3.0.4 (1.0.201401291204.el6) and created a storage domain.
>>>>>> All was working OK until I rebooted the node and found that the
>>>>>> glusterfs configuration had not been retained.
>>>>>>
>>>>>> Is there something I should be doing to persist any glusterfs
>>>>>> configuration so it survives a node reboot?
>>>>>>
>>>>>> Many thanks,
>>>>>>
>>>>>> Simon
>>>>>>
>>>>> Hi Simon,
>>>>> it actually sounds like a bug to me, as Node is supposed to
>>>>> support gluster.
>>>>>
>>>>> Ryan / Fabian- thoughts?
>>>>
>>>> Hey,
>>>>
>>>> I vaguely remember that we were seeing a bug like this some time ago.
>>>> We fixed /var/lib/glusterd to be writable (using tmpfs), but it can
>>>> actually be that we need to persist those contents.
>>>>
>>>> But Simon, can you give details which configuration files are
>>>> missing and why glusterd is not starting?
>>>
>>> Is glusterd starting? I'm getting the impression that it's starting, but that it has no configuration. As far as I know, Gluster keeps most of the configuration on the brick itself, but it finds brick information in /var/lib/glusterd.
>>>
>>> The last patch simply opened the firewall, and it's entirely possible that we need to persist this. It may be a good idea to just persist the entire directory from the get-go, unless we want to try to have a thread watching /var/lib/glusterd for relevant files, but then we're stuck trying to keep up with what's happening with gluster itself...
>>>
>>> Can we
>>>>
>>>> Thanks
>>>> fabian
>>>>
>>>>> Either way I suggest you take a look in the below link-
>>>>> http://www.ovirt.org/Node_Troubleshooting#Making_changes_last_.2F_Persisting_changes
>>>>>
>>>>> Let us know how it works.
>>>>>
>>>>> Doron
>>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
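[Editorial sketch: Ryan's description of the rwtab "files" mechanism above can be illustrated with a small, self-contained simulation. All paths here are stand-ins so it can run anywhere; on a real node, readonly-root performs the copy and a bind mount at boot, and the real entry is `files /var/lib/glusterd` in /etc/rwtab.d/ovirt.]

```shell
# Simulation of readonly-root's rwtab "files" handling (illustrative paths).
FAKE_ROOT=$(mktemp -d)      # stands in for the read-only root filesystem
WRITABLE=$(mktemp -d)       # stands in for /var/lib/stateless/writable
mkdir -p "$FAKE_ROOT/var/lib/glusterd"
echo "volume-metadata" > "$FAKE_ROOT/var/lib/glusterd/info"

# rwtab step: copy the listed path into the writable store...
mkdir -p "$WRITABLE/var/lib/glusterd"
cp -a "$FAKE_ROOT/var/lib/glusterd/." "$WRITABLE/var/lib/glusterd/"
# ...then, on the node, bind-mount it back over the original location:
#   mount --bind "$WRITABLE/var/lib/glusterd" /var/lib/glusterd
# Writes now land in the writable copy, which is tmpfs/ramdisk-backed and
# therefore lost on reboot unless the path is separately persisted.
echo "new-volume-config" > "$WRITABLE/var/lib/glusterd/vols.txt"
ls "$WRITABLE/var/lib/glusterd"
```

This is why `persist /var/lib/glusterd` is needed on Node even though the directory appears writable: rwtab only makes it writable for the current boot.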