On Sun, Jul 2, 2017 at 2:08 AM, Mike DePaulo <mikedep333@gmail.com> wrote:
Hi everyone,

I have oVirt 4.1.1/4.1.2 running on 3 hosts with a Gluster-backed hosted engine.

I was working on setting up a separate network for Gluster storage and
migration traffic. Its addresses will be 10.0.20.x, rather than the
192.168.1.x of the management network. However, for now I switched
Gluster storage and migration back over to the management network.

I updated and rebooted one of my hosts (death-star, 10.0.20.52), and on
reboot the glusterd service would start, but didn't seem to work.
The engine web GUI reported that its bricks were down, and commands
like these would fail:

[root@death-star glusterfs]# gluster pool list
pool list: failed
[root@death-star glusterfs]# gluster peer status
peer status: failed

Upon further investigation, I found under /var/lib/glusterd/peers/ the
2 existing UUID-named peer files, plus a new 3rd one:
[root@death-star peers]# cat 10.0.20.53
uuid=00000000-0000-0000-0000-000000000000
state=0
hostname1=10.0.20.53
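
For reference, a quick way to spot such a stale entry (assuming, as
here, that the bogus file carries the all-zeros UUID) is:

[root@death-star peers]# grep -l 'uuid=00000000-0000-0000-0000-000000000000' /var/lib/glusterd/peers/*
/var/lib/glusterd/peers/10.0.20.53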

I moved that file out of there, restarted glusterd, and now gluster is
working again.
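
Concretely, the recovery amounted to roughly this (assuming systemd):

[root@death-star peers]# mv /var/lib/glusterd/peers/10.0.20.53 /root/
[root@death-star peers]# systemctl restart glusterd
[root@death-star peers]# gluster pool list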

I am guessing that this is a bug. Let me know if I should attach other
log files; I am not sure which ones.

And yes, 10.0.20.53 is the IP of one of the other hosts.

-Mike

Hello,
I'm trying to accomplish the same thing.
See also the comments in my answer from today:
http://lists.ovirt.org/pipermail/users/2017-July/082990.html

So in the end you rolled back?

Can you list in detail the modifications and operating steps you performed on the hosts before trying to restart with the new network config?
Did you try to set the gluster role on the new network in oVirt?

I'm using 4 volumes at the moment: data, engine, iso, and export. Based on some analysis that I'm doing right now, one should modify at least these files for each vol_name under /var/lib/glusterd on the 3 hosts:

./vols/vol_name/info
./vols/vol_name/bricks/ovirt01.localdomain.local:-gluster-brick1-engine
./vols/vol_name/bricks/ovirt02.localdomain.local:-gluster-brick1-engine
./vols/vol_name/bricks/ovirt03.localdomain.local:-gluster-brick1-engine
./vols/vol_name/trusted-engine.tcp-fuse.vol
./vols/vol_name/engine.tcp-fuse.vol
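
A quick way to enumerate every file that still references the old hostname (a sketch, untested: run on each host, after stopping glusterd and backing up /var/lib/glusterd):

[root@ovirt01 ~]# systemctl stop glusterd
[root@ovirt01 ~]# cp -a /var/lib/glusterd /var/lib/glusterd.bak
[root@ovirt01 ~]# grep -rl 'ovirt01.localdomain.local' /var/lib/glusterd/vols/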

In addition, 3 of these files themselves have to be renamed. Suppose the hostnames are ovirtN.localdomain.local and you decide to assign the hostnames glovirtN.localdomain.local to the interfaces on the new gluster network; the brick files should then become:

./vols/vol_name/bricks/glovirt01.localdomain.local:-gluster-brick1-engine
./vols/vol_name/bricks/glovirt02.localdomain.local:-gluster-brick1-engine
./vols/vol_name/bricks/glovirt03.localdomain.local:-gluster-brick1-engine
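
Something like this bash loop should handle the rename itself (an untested sketch; it assumes the volume names themselves don't contain "ovirt", so the first match is in the file name):

for f in /var/lib/glusterd/vols/*/bricks/ovirt*; do
    mv "$f" "${f/ovirt/glovirt}"
done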


One also has to change these files on each node (each node holds the UUID-named peer files of the other two nodes):

./peers/ec81a04c-a19c-4d31-9d82-7543cefe79f3
./peers/e9717281-a356-42aa-a579-a4647a29a0bc
./glustershd/glustershd-server.vol
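
After the edit, the peer file for a given node (say ovirt02, if ec81a04c-a19c-4d31-9d82-7543cefe79f3 is its UUID; state=3 meaning "Peer in Cluster") would look something like this on the other two nodes:

uuid=ec81a04c-a19c-4d31-9d82-7543cefe79f3
state=3
hostname1=glovirt02.localdomain.local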

I see no problems with the migration network change, though. I did it by changing the role checkbox under Cluster --> Default --> Logical Networks subpane --> Manage Networks.
You then have to assign an IP to the interface on every host in Hosts --> Network Interfaces --> Setup Host Networks.

Gianluca