On Sat, Jul 1, 2017 at 8:51 PM, Doug Ingham <dougti(a)gmail.com> wrote:
> Only problem I would like to manage is that I have gluster network shared
> with ovirtmgmt one.
> Can I move it now with these updated packages?
>
> Are the gluster peers configured with the same hostnames/IPs as your hosts
> within oVirt?
Yes.
From gluster point of view:
[root@ovirt01 ~]# gluster peer status
Number of Peers: 2
Hostname: ovirt03.localdomain.local
Uuid: ec81a04c-a19c-4d31-9d82-7543cefe79f3
State: Peer in Cluster (Connected)
Hostname: ovirt02.localdomain.local
Uuid: b89311fe-257f-4e44-8e15-9bff6245d689
State: Peer in Cluster (Connected)
[root@ovirt01 ~]#
[root@ovirt02 ~]# gluster peer status
Number of Peers: 2
Hostname: ovirt03.localdomain.local
Uuid: ec81a04c-a19c-4d31-9d82-7543cefe79f3
State: Peer in Cluster (Connected)
Hostname: ovirt01.localdomain.local
Uuid: e9717281-a356-42aa-a579-a4647a29a0bc
State: Peer in Cluster (Connected)
In oVirt, the hosts are defined with Hostname/IP field as
ovirt01.localdomain.local
192.168.150.103
ovirt03.localdomain.local
I don't remember why for the second host I used its IP instead of its
hostname... possibly I used the IP as a test when adding it, because I
wanted to cross-check the "Name" and "Hostname/IP" columns of the Hosts tab
(the host from which I executed gdeploy was added with "Name"
hosted_engine_1; I see that field is editable... but obviously not the
"Hostname/IP" one).
The node from which I initially executed the gdeploy job in 4.0.5 was
ovirt01.localdomain.local
BTW: during the upgrade of the hosts there was an error with ansible1.9:
Error: ansible1.9 conflicts with ansible-2.3.1.0-1.el7.noarch
So the solution for updating the nodes after enabling the oVirt 4.1 repo was:
yum remove ansible gdeploy
yum install ansible gdeploy
yum update
Probably the ansible and gdeploy packages are not needed any more after the
initial deploy, though they can come in handy in case of maintenance of the
config files.
> Once they're configured on the same network, separating them might be a
> bit difficult. Also, the last time I looked, oVirt still doesn't support
> managing HCI oVirt/Gluster nodes running each service on a different
> interface (see below).
> In theory, the procedure would involve stopping all of the Gluster
> processes on all of the peers, updating the peer addresses in the gluster
> configs on all of the nodes, then restarting glusterd & the bricks. I've
> not tested this however, and it's not a "supported" procedure. I've no
> idea how oVirt would deal with these changes either.
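If I were to try it, I guess the rename step would look roughly like this
(my own untested sketch, not a supported procedure: the state directory
/var/lib/glusterd and the storage-side hostname s2.localdomain.local are
assumptions, and glusterd would have to be stopped on all peers first).
Here it runs against a scratch copy so nothing real is touched:

```shell
# Sketch only: on a real node the target would be /var/lib/glusterd,
# with glusterd stopped on ALL peers and backups taken first.
OLD_NAME="ovirt02.localdomain.local"
NEW_NAME="s2.localdomain.local"   # hypothetical storage-side hostname

# Build a throwaway copy of a peer state file to demonstrate on:
DEMO_DIR=$(mktemp -d)
mkdir -p "$DEMO_DIR/peers"
cat > "$DEMO_DIR/peers/b89311fe-257f-4e44-8e15-9bff6245d689" <<EOF
uuid=b89311fe-257f-4e44-8e15-9bff6245d689
state=3
hostname1=$OLD_NAME
EOF

# Rewrite every occurrence of the old name in the state files:
grep -rl "$OLD_NAME" "$DEMO_DIR" | while read -r f; do
    sed -i "s/$OLD_NAME/$NEW_NAME/g" "$f"
done

grep hostname1 "$DEMO_DIR/peers/"*
```

After that, glusterd and the bricks would be restarted on every peer and
`gluster peer status` checked again.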
I could try, creating a snapshot first, as the oVirt hosts are themselves
vSphere VMs.
But what about this new network configuration inside oVirt? Should I
configure it as a gluster network within the cluster's "logical networks"
tab, with an IP for each host configured within oVirt, or should I leave
the network unmanaged by oVirt entirely?
> To properly separate my own storage & management networks from the
> beginning, I configured each host with 2 IPs on different subnets and a
> different hostname corresponding to each IP. For example, "v0" points to
> the management interface of the first node, and "s0" points to the storage
> interface.
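If I understand correctly, on each node that scheme would look something
like this in /etc/hosts (the addresses are illustrative assumptions on my
part, reusing your "v0"/"s0" names):

```
192.168.1.10    v0   # management interface, used by oVirt (ovirtmgmt)
10.10.10.10     s0   # storage interface, used as the Gluster peer name
```

So the peers would be probed with the s* names, and oVirt would only ever
see the v* names.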
Do you have the gluster-dedicated network configured as a "gluster network"
in oVirt?
Gianluca