[ovirt-users] Node network setup

spfma.tech at e.mail.fr
Tue Jan 30 19:07:26 UTC 2018


Hi,

I am trying to set up a cluster of two nodes with a self-hosted Engine. Things went fine for the first machine, but it has been rather messy with the second one. I would like to have load balancing and failover for both the management network and the storage (NFS repository).

So what exactly should I do to get a working network stack that is recognized when I try to add this host to the cluster?

I have tried configuring the bonds and bridges using Cockpit and using manual "ifcfg" files, but every time the bridges and bonds show up as not linked in the Engine interface, so the new host cannot be enrolled. If I try to link "ovirtmgmt" to the associated bond, I lose connectivity because it is the management device, and I have to restart the network services. And since the management configuration is not OK, I cannot set up the storage connection.

If I instead just try to activate the host, it installs and configures things and then complains about the missing "ovirtmgmt" and "nfs" networks, which both exist and work at the CentOS level.

The interface, bond and bridge names are copied verbatim from the first server; the ifcfg files I used and the current state of the bridge and bond are shown below.
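Here is a rough sketch of the manual "ifcfg" files (the IP addresses are examples, and the storage-side names bond1/em2/em4 are placeholders; the management-side names match the output further down):

# /etc/sysconfig/network-scripts/ifcfg-em1
# (ifcfg-em3 is identical apart from DEVICE)
DEVICE=em1
SLAVE=yes
MASTER=bond0
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
# 802.3ad gives load balancing plus failover, but needs LACP on the switch;
# mode=active-backup would give failover only
BONDING_OPTS="mode=802.3ad miimon=100"
BRIDGE=ovirtmgmt
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.12        # example address
NETMASK=255.255.255.0
GATEWAY=192.168.1.1        # example gateway
DELAY=0
ONBOOT=yes
NM_CONTROLLED=no

The "nfs" bridge is built the same way on top of a second bond (bond1 over em2 and em4 in this sketch), with its own address on the storage subnet and no GATEWAY entry.

And this is the current state on the second host: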
# brctl show ovirtmgmt
bridge name     bridge id               STP enabled     interfaces
ovirtmgmt       8000.44a842394200       no              bond0

# ip addr show bond0
33: bond0: <BROADCAST,MULTICAST,MASTER,SLAVE,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovirtmgmt state UP qlen 1000
 link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff
 inet6 fe80::46a8:42ff:fe39:4200/64 scope link 
 valid_lft forever preferred_lft forever
# ip addr show em1
2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
 link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff
# ip addr show em3
4: em3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
 link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff

By the way, is it mandatory to stop and disable NetworkManager, or not?

Thanks for any kind of help :-)
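P.S. In case more detail is useful, the bond state can also be inspected with:

# cat /proc/net/bonding/bond0
# ip -d link show bond0

(the first shows the bonding mode, MII status and the list of slaves; the second shows the kernel's detailed link state for the bond)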