[ovirt-users] Upgrading HC from 4.0 to 4.1
Doug Ingham
dougti at gmail.com
Sat Jul 1 18:51:25 UTC 2017
> The only problem I'd like to manage is that my gluster network is shared
> with the ovirtmgmt one.
> Can I move it now with these updated packages?
>
Are the gluster peers configured with the same hostnames/IPs as your hosts
within oVirt?
Once they're configured on the same network, separating them might be a bit
difficult. Also, the last time I looked, oVirt still doesn't support
managing HCI oVirt/Gluster nodes running each service on a different
interface (see below).
In theory, the procedure would involve stopping all of the Gluster
processes on all of the peers, updating the peer addresses in the gluster
configs on all of the nodes, then restarting glusterd & the bricks. I've
not tested this however, and it's not a "supported" procedure. I've no idea
how oVirt would deal with these changes either.
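For what it's worth, the rough shape of it would be something like the
following on each peer (untested, from memory, and the exact file layout can
vary between Gluster versions, so treat it as a sketch rather than a recipe;
the v1/s1 names follow my own naming scheme below):

    # with all volumes stopped and no VMs running
    systemctl stop glusterd
    pkill glusterfsd                  # any remaining brick processes
    # peer definitions live under /var/lib/glusterd/peers/<UUID>, e.g.
    #   uuid=...
    #   state=3
    #   hostname1=v1
    # point hostname1 at the storage-side name (e.g. s1) on every node,
    # and update the brick host references under /var/lib/glusterd/vols/
    systemctl start glusterd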
To properly separate my own storage & management networks from the
beginning, I configured each host with 2 IPs on different subnets and a
different hostname corresponding to each IP. For example, "v0" points to
the management interface of the first node, and "s0" points to the storage
interface.
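That can be as simple as a few /etc/hosts entries on every node (or the DNS
equivalent); the addresses below are only examples:

    # management network
    10.10.0.10   v0
    10.10.0.11   v1
    10.10.0.12   v2
    # storage network
    10.10.1.10   s0
    10.10.1.11   s1
    10.10.1.12   s2

The hosts get added to oVirt by their "v" names, and the Gluster peers get
probed by their "s" names (gluster peer probe s1, and so on), so all of the
brick traffic stays on the storage subnet.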
oVirt's problem is that, whilst it can see the pre-configured bricks and
volumes on each host, it can't create any new bricks or volumes because it
wants to use the same hostnames it uses to manage the hosts. It also means
that it can't fence the hosts correctly, as it doesn't understand that "v0"
& "s0" are the same host.
This isn't a problem for me though, as I don't need to manage my Gluster
instances via the GUI, and automatic fencing can be done via the IPMI
interfaces.
Last I read, this is a recognised problem, but a fix isn't expected to arrive
any time soon.
--
Doug