I should add that I do one thing that may be considered unusual. I have a bunch of systems
with two 1Gb links on them; I build them on a single link, then manually convert the
networking to a bond of both links before configuring them as ovirt host nodes. Since I
have no other dedicated interfaces, all of my networking depends on the bonded interface
for connectivity.
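For illustration, the ifcfg files I end up with after the conversion look roughly like
this (interface names, bond mode, and options are examples rather than my exact config):

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bond itself
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=802.3ad miimon=100"
    BOOTPROTO=none
    ONBOOT=yes
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- one of the two 1Gb slaves
    # (ifcfg-eth1 is identical apart from DEVICE)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes
    NM_CONTROLLED=no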
On Mar 24, 2015, at 11:40 AM, Darrell Budic
<budic(a)onholyground.com> wrote:
> On Mar 23, 2015, at 12:35 PM, Dan Kenigsberg <danken(a)redhat.com> wrote:
>
> On Fri, Mar 20, 2015 at 02:01:25PM -0500, Darrell Budic wrote:
>> I’ve encounter these issues on systems new and upgraded with bonding
>> connections. The new system seems especially bad with bonds, and I’ve
>> taken to immediately switching my hosts to the ifcfg persistence
>> methods. Centos 6 and 7 hosts.
>
> There have been multiple issues regarding net config upgrade. We might
> have nailed an important one regarding ovirt-node.
>
> However, I'd like to learn more about your report regarding new systems.
> Your report sounds similar to
>
> Bug 1203422 - vdsm should restore networks much earlier, to let
> net-dependent services start
Caveat: I don’t have systems available to recreate at this time, so this is from memory
of what I go through on a new host setup.
I haven’t filed bugs because I’ve seen several that look like mine, and until recently I
couldn’t be sure my problems weren’t being caused by upgrades from older systems. Whenever
I experience issues, it’s after installing a new host system and creating the bonds either
in or outside of ovirt: the next time I reboot that host, the bonds do not get created, so
none of the networks come up and I need to get on a console to fix things.
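For what it’s worth, “fixing things” from the console just means bringing the stack back
up by hand in dependency order, roughly like this (device names are examples):

    ifup bond0        # the bond has to come up first
    ifup bond0.100    # then any VLANs riding on it
    ifup ovirtmgmt    # then the bridges, so vdsmd, gluster, etc. can start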
>> If it matters, I’m good with setting up my own network config, and
>> sometimes I REALLY DO NOT WANT ovirt to change them, especially with
>> vlans and gluster co-existance. I can see the goal, but it seems
>> pretty far from it right now, so I’m very happy that there’s a way to
>> switch back to “system” control of those things.
>
> Besides Vdsm slowness to start the network, what are the reasons for
> your not wanting ovirt to touch your ifcfg? BTW, even today ovirt
> overwrites ifcfg files, but only at network definition time, not on every boot.
I don’t actually notice the slowness, but my mgmt, access, and gluster storage networks
depend on the bonded network config to function. I’d like to have them up at boot and not
wait for vdsmd to bring them up. Similar to Bug 1203422, but my problem is that the bonds
don’t get created at boot, so no other networks that depend on them can come up.
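To make the dependency concrete, here’s a rough sketch of the layers that sit on top of
the bond, a VLAN and then a bridge (the VLAN ID, bridge name, and addresses are just
examples):

    # /etc/sysconfig/network-scripts/ifcfg-bond0.100 -- VLAN riding on the bond
    DEVICE=bond0.100
    VLAN=yes
    BRIDGE=ovirtmgmt
    BOOTPROTO=none
    ONBOOT=yes
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt -- bridge on top of the VLAN
    DEVICE=ovirtmgmt
    TYPE=Bridge
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    BOOTPROTO=none
    ONBOOT=yes
    DELAY=0
    NM_CONTROLLED=no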
Also, I have set up my gluster backend to use specific interfaces and IP addresses, and
I’d like it if ovirt didn’t mess with them.
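The usual way to pin gluster to a particular interface is to probe the peers and define
the bricks by the storage network’s addresses instead of the management hostnames;
something along these lines (addresses and volume name are just examples):

    # probe peers over the dedicated storage addresses so traffic stays on that network
    gluster peer probe 10.10.10.12
    gluster peer probe 10.10.10.13

    # bricks are then addressed over the same network
    gluster volume create gv0 replica 3 \
        10.10.10.11:/bricks/gv0 10.10.10.12:/bricks/gv0 10.10.10.13:/bricks/gv0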
These are all things I can work around with ifcfg files, so I prefer them. I’ve taken to
saving my ifcfg-* files so I can easily replace them if ovirt does things to them I don’t
like (such as setting ONBOOT=no). I did catch that it only alters them when defining a
network, which does mean I can easily adjust things as needed.
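Concretely, the saving and restoring is nothing fancy (the paths are the stock EL6/EL7
ones; the vdsm.conf setting is from memory, so double-check it):

    # from memory: switch vdsm to ifcfg persistence in /etc/vdsm/vdsm.conf
    #   [vars]
    #   net_persistence = ifcfg

    # keep a copy of the known-good configs
    mkdir -p /root/ifcfg-backup
    cp -a /etc/sysconfig/network-scripts/ifcfg-* /root/ifcfg-backup/

    # put them back if something flips ONBOOT=no or otherwise rewrites them
    cp -a /root/ifcfg-backup/ifcfg-* /etc/sysconfig/network-scripts/
    service network restart    # systemctl restart network on EL7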