On Tue, Jul 4, 2017 at 3:18 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:


On Mon, Jul 3, 2017 at 12:48 PM, Sahina Bose <sabose@redhat.com> wrote:


On Sun, Jul 2, 2017 at 12:21 AM, Doug Ingham <dougti@gmail.com> wrote:

The only problem I would like to manage is that I have the gluster network shared with the ovirtmgmt one.
Can I move it now with these updated packages?

Are the gluster peers configured with the same hostnames/IPs as your hosts within oVirt?

Once they're configured on the same network, separating them might be a bit difficult. Also, the last time I looked, oVirt still doesn't support managing HCI oVirt/Gluster nodes running each service on a different interface (see below).

In theory, the procedure would involve stopping all of the Gluster processes on all of the peers, updating the peer addresses in the gluster configs on all of the nodes, then restarting glusterd & the bricks. I've not tested this however, and it's not a "supported" procedure. I've no idea how oVirt would deal with these changes either.
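For what it's worth, a quick way to see how the peers currently identify each other before attempting anything (run on any gluster node; "myvol" is just a placeholder volume name and the exact output varies by version):

gluster pool list                          # UUID, hostname/IP and connection state of each peer
gluster peer status                        # same info; any extra probed addresses show up under "Other names"
gluster volume info myvol | grep -i brick  # which hostname/IP each brick is currently bound to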

Which version of glusterfs do you have running now? With glusterfs >= 3.9, there's a reset-brick command that can help you do this.

At the moment my oVirt nodes have the gluster packages provided by the 4.1.2 repos, so:

glusterfs-3.8.13-1.el7.x86_64
glusterfs-api-3.8.13-1.el7.x86_64
glusterfs-cli-3.8.13-1.el7.x86_64
glusterfs-client-xlators-3.8.13-1.el7.x86_64
glusterfs-fuse-3.8.13-1.el7.x86_64
glusterfs-geo-replication-3.8.13-1.el7.x86_64
glusterfs-libs-3.8.13-1.el7.x86_64
glusterfs-server-3.8.13-1.el7.x86_64
vdsm-gluster-4.19.15-1.el7.centos.noarch

Is version 3.9 of the Gluster packages provided when updating to the upcoming 4.1.3, perhaps?

No, it's not. One option is to update glusterfs packages to 3.10.
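If you go that route on CentOS 7, the 3.10 packages would normally come from the CentOS Storage SIG; a rough sketch (repo and package names to be verified for your setup, and upgrade one node at a time, with heals completed in between):

yum install centos-release-gluster310    # enable the Storage SIG repo for gluster 3.10 (package name assumed)
yum update 'glusterfs*' vdsm-gluster     # pull in the 3.10 packages on this node
systemctl restart glusterd               # restart the management daemon; follow the upstream rolling-upgrade notes for brick/self-heal restarts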
 

 

It's possible to move to the new interface for gluster.

The procedure would be:

1. Create a network with "gluster" network role.
2. On each host, use "Setup networks" to associate the gluster network with the desired interface. (This ensures that the engine will peer probe this interface's IP address as well, so that it can be used to identify the host in the brick definition.)
3. For each of the volume's bricks, change the definition of the brick so that the new IP address is used. Ensure that there are no pending heals (i.e. "gluster volume heal VOLNAME info" should list 0 entries) before you start this (see https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/ - "Introducing reset-brick command"):
gluster volume reset-brick VOLNAME <original-hostname/ip>:BRICKPATH start
gluster volume reset-brick VOLNAME <original-hostname/ip>:BRICKPATH <new-gluster-interface-ip>:BRICKPATH commit force
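As a purely illustrative example, for a volume named "data" with a brick on host ovirt01 at /gluster/data/brick1, and 10.10.10.1 as that host's new gluster-network IP (volume name, hostname, path and IP are all placeholders):

gluster volume heal data info            # should list 0 entries before starting
gluster volume reset-brick data ovirt01:/gluster/data/brick1 start
gluster volume reset-brick data ovirt01:/gluster/data/brick1 10.10.10.1:/gluster/data/brick1 commit force
gluster volume heal data info            # check the brick is back online and heals cleanly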




So do you think I can use any other commands with oVirt 4.1.2 and gluster 3.8?
Can I safely proceed with steps 1 and 2? When I set up a gluster network and associate it with one host, what exactly are the implications? Will I disrupt anything, or is it only seen as an option for routing gluster traffic...?

Steps 1 & 2 will ensure that the IP address associated with the gluster network is peer probed. They do not ensure that brick communication happens over that interface; that only happens once the brick is also identified using that IP (step 3).
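A rough way to check the effect of steps 1 & 2 ("myvol" is a placeholder, exact output varies by version):

gluster peer status                        # the gluster-network IP should now appear for the peer, e.g. under "Other names"
gluster volume info myvol | grep -i brick  # bricks will keep showing the old address until step 3 is done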
 

BTW: how would I complete the webadmin GUI part of step 3? I don't see an "edit brick" functionality; I only see "Add" and "Replace Brick"...

There's an RFE open to add this to the GUI. For now, it has to be done from the command line on one of the gluster nodes.
 
 
Thanks,
Gianluca