
Hi,

I am trying to set up a cluster of two nodes, with a self-hosted Engine. Things went fine for the first machine, but it was rather messy for the second one. I would like to have load balancing and failover for both the management network and storage (NFS repository).

So what exactly should I do to get a working network stack which can be recognized when I try to add this host to the cluster?

I have tried configuring bonds and bridges using Cockpit and using manual "ifcfg" files, but every time the bridges and bonds appear unlinked in the Engine interface, so the new host cannot be enrolled. If I try to link "ovirtmgmt" to the associated bond, I lose connectivity because it is the management device, and I have to restart the network services. As the management configuration is not OK, I can't set up the storage connection.

And if I just try to activate the host, it will install and configure things and then complain about missing "ovirtmgmt" and "nfs" networks, which both exist and work at the CentOS level.

The interface, bond and bridge names are copied from the first server.

# brctl show ovirtmgmt
bridge name    bridge id           STP enabled    interfaces
ovirtmgmt      8000.44a842394200   no             bond0

# ip addr show bond0
33: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovirtmgmt state UP qlen 1000
    link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::46a8:42ff:fe39:4200/64 scope link
       valid_lft forever preferred_lft forever

# ip addr show em1
2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff

# ip addr show em3
4: em3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff

By the way, is it mandatory to stop and disable NetworkManager, or not?

Thanks for any kind of help :-)

It is not clear to me what you are attempting to do exactly, but networking settings should be handled through the Setup Networks window on the Engine (Network -> Hosts -> [specific host] -> Interfaces tab -> Setup Networks). You can then define bonds by dragging one NIC over the other. Thanks, Edy. On Tue, Jan 30, 2018 at 9:07 PM, <spfma.tech@e.mail.fr> wrote:
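For scripted setups, the same Setup Networks operation can also be driven through the oVirt Ansible collection instead of the UI. A rough sketch only (the host name is a placeholder, the bond mode is an example, and the exact parameters should be checked against the ovirt.ovirt.ovirt_host_network module documentation):

```yaml
# Hypothetical sketch: let the Engine attach ovirtmgmt to a bond on the
# new host, rather than pre-creating bridges on the host itself.
- name: Configure host networking through the Engine
  ovirt.ovirt.ovirt_host_network:
    auth: "{{ ovirt_auth }}"      # from a preceding ovirt_auth login task
    name: host2.example.com       # placeholder host name
    bond:
      name: bond0
      mode: 1                     # example: active-backup; adapt to your switch
      interfaces:
        - em1
        - em3
    networks:
      - name: ovirtmgmt
        boot_protocol: dhcp
```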
Hi, I am trying to set up a cluster of two nodes, with a self-hosted Engine. Things went fine for the first machine, but it was rather messy for the second one. I would like to have load balancing and failover for both the management network and storage (NFS repository).
So what exactly should I do to get a working network stack which can be recognized when I try to add this host to the cluster?
I have tried configuring bonds and bridges using Cockpit and using manual "ifcfg" files, but every time the bridges and bonds appear unlinked in the Engine interface, so the new host cannot be enrolled. If I try to link "ovirtmgmt" to the associated bond, I lose connectivity because it is the management device, and I have to restart the network services. As the management configuration is not OK, I can't set up the storage connection.
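For reference, a common approach on EL7 hosts is to define only the bond and its slave interfaces in ifcfg files and to let the Engine create the ovirtmgmt bridge itself when the host is added through Setup Networks. A minimal sketch, with device names taken from this thread and the bonding options and addressing as placeholders to adapt:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0  (sketch; values are examples)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
ONBOOT=yes
BOOTPROTO=dhcp

# /etc/sysconfig/network-scripts/ifcfg-em1  (repeat likewise for em3)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

The point of this layout is that no bridge is declared on the host; the ovirtmgmt bridge is attached to bond0 by the Engine during host enrollment.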
And if I just try to activate the host, it will install and configure things and then complain about missing "ovirtmgmt" and "nfs" networks, which both exist and work at the CentOS level.
The interface, bond and bridge names are copied from the first server.
# brctl show ovirtmgmt
bridge name    bridge id           STP enabled    interfaces
ovirtmgmt      8000.44a842394200   no             bond0

# ip addr show bond0
33: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovirtmgmt state UP qlen 1000
    link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::46a8:42ff:fe39:4200/64 scope link
       valid_lft forever preferred_lft forever

# ip addr show em1
2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff

# ip addr show em3
4: em3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff
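When the Engine reports the bond as unlinked, it can help to confirm on the host that the kernel actually assembled the bond and which mode it is running. A small diagnostic sketch (bond0 is the device name from this thread; the helper name is made up here):

```shell
# bond_mode NAME: print the bonding mode of bond device NAME as reported
# by the kernel under /proc/net/bonding, or "absent" if no such device.
bond_mode() {
  if [ -r "/proc/net/bonding/$1" ]; then
    awk -F': ' '/^Bonding Mode/ {print $2}' "/proc/net/bonding/$1"
  else
    echo "absent"
  fi
}

bond_mode bond0
```

On a correctly assembled bond this prints the configured mode (for example an active-backup or 802.3ad string); "absent" means the ifcfg files were never applied and the Engine has nothing to link.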
By the way, is it mandatory to stop and disable NetworkManager, or not?
Thanks for any kind of help :-)
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
participants (2)
-
Edward Haas
-
spfma.tech@e.mail.fr