feature suggestion: initial generation of management network

Alon Bar-Lev alonbl at redhat.com
Mon May 20 11:58:05 UTC 2013


Hi,

Now another issue... ovirt-node.

In ovirt-node, the node automatically defines a bridge named br<INTERFACE> (e.g. breth0 on top of eth0).
The IP address of ovirt-node is assigned to that bridge, so we always have a bridge on ovirt-node.
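For reference, the naming convention can be illustrated like this (a minimal sketch; bridge_to_nic is a hypothetical helper, assuming the node's 'br' + NIC naming, which is also what the code below relies on):

```python
def bridge_to_nic(bridge):
    """Recover the underlying NIC from an ovirt-node bridge name.

    Assumes the node's convention of naming the bridge 'br' + NIC,
    e.g. 'breth0' -> 'eth0'. Illustrative only.
    """
    if not bridge.startswith('br'):
        raise ValueError('not an ovirt-node bridge name: %s' % bridge)
    return bridge[len('br'):]
```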

I have the following, now mostly useless, legacy code... the question:
Can this also be automated by the new code at the engine side?
It should be, or things will break...

Thanks,
Alon

---

        if (
            self.environment[odeploycons.VdsmEnv.OVIRT_NODE] and
            self._interfaceIsBridge(name=interface)
        ):
            # ovirt-node names its bridges 'br' + NIC (e.g. breth0);
            # strip the leading 'br' to recover the underlying NIC.
            nic = interface.replace('br', '', 1)
            self._removeBridge(
                name=interface,
                interface=nic,
            )
            interface = nic


    def _removeBridge(self, name, interface):
        interface, vlanid = self._getVlanMasterDevice(name=interface)
        self.execute(
            (
                os.path.join(
                    odeploycons.FileLocations.VDSM_DATA_DIR,
                    'delNetwork',
                ),
                name,
                vlanid if vlanid is not None else '',
                '',     # bonding is not supported
                interface if interface is not None else '',
            ),
        )

        #
        # The vdsm interface does not handle ovirt-node properly;
        # delete the ifcfg file manually to avoid ending up with
        # a duplicate bridge.
        #
        if self.environment[odeploycons.VdsmEnv.OVIRT_NODE]:
            ifcfg = '/etc/sysconfig/network-scripts/ifcfg-%s' % (
                name
            )
            if os.path.exists(ifcfg):
                from ovirtnode import ovirtfunctions
                ovirtfunctions.ovirt_safe_delete_config(ifcfg)
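(_getVlanMasterDevice is not shown above; a standalone sketch of what it presumably does, splitting a VLAN device name such as 'bond4.3000' into master device and vlan id. This is an assumption for illustration, not the actual implementation:)

```python
def get_vlan_master_device(name):
    """Split a VLAN device name into (master, vlanid).

    'bond4.3000' -> ('bond4', '3000'); a plain device such as 'eth0'
    is returned unchanged with vlanid None. Illustrative sketch only;
    the real _getVlanMasterDevice may differ.
    """
    if '.' in name:
        master, vlanid = name.rsplit('.', 1)
        return master, vlanid
    return name, None
```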

----- Original Message -----
> From: "Livnat Peer" <lpeer at redhat.com>
> To: "Dan Kenigsberg" <danken at redhat.com>
> Cc: "arch" <arch at ovirt.org>, alonbl at redhat.com, "Simon Grinberg" <sgrinber at redhat.com>, "Andrew Cathrow"
> <acathrow at redhat.com>, "Moti Asayag" <masayag at redhat.com>, "Barak Azulay" <bazulay at redhat.com>
> Sent: Monday, May 20, 2013 2:49:18 PM
> Subject: Re: feature suggestion: initial generation of management network
> 
> This is a summary of the thread so far (and the action items) -
> 
> - There is an agreement we do not need machine boot in the installation
> sequence.
> 
> - The current default behavior is to reboot after host installation (in
> Virt)
> 
> ** We are going to change current behavior in 3.3 and remove the reboot
> from the host installation flow **
> 
> - Today we have a flag in the REST API to avoid host reboot; we'll
> deprecate this flag, since this is going to be the default behavior after
> the change (and rebooting after installation won't be available).
> 
> - Since host reboot is not needed in the host install flow, we avoid
> adding a VDSM verb for reboot at this point. The discussion of whether to
> implement such a verb via ssh or VDSM can be held in the context where the
> verb is going to be used.
> 
> 
> Thanks, Livnat
> 
> 
> 
> 
> On 12/25/2012 02:27 PM, Dan Kenigsberg wrote:
> > Current condition:
> > ==================
> > The management network, named ovirtmgmt, is created during host
> > bootstrap. It consists of a bridge device, connected to the network
> > device that was used to communicate with Engine (nic, bonding or vlan).
> > It inherits its ip settings from the latter device.
> > 
> > Why Is the Management Network Needed?
> > =====================================
> > Understandably, some may ask why we need to have a management
> > network - why a host with IPv4 configured on it is not enough.
> > The answer is twofold:
> > 1. In oVirt, a network is an abstraction of the resources required for
> >    connectivity of a host for a specific usage. This is true for the
> >    management network just as it is for VM network or a display network.
> >    The network entity is the key for adding/changing nics and IP
> >    address.
> > 2. On many occasions (such as small setups) the management network is
> >    used as a VM/display network as well.
> > 
> > Problems in current connectivity:
> > ================================
> > According to alonbl of ovirt-host-deploy fame, and with no conflict to
> > my own experience, creating the management network is the most fragile,
> > error-prone step of bootstrap.
> > 
> > Currently it always creates a bridged network (even if the DC requires a
> > non-bridged ovirtmgmt), it knows nothing about the defined MTU for
> > ovirtmgmt, it uses ping to guess on top of which device to build (and
> > thus requires Vdsm-to-Engine reverse connectivity), and is the sole
> > remaining user of the addNetwork/vdsm-store-net-conf scripts.
> > 
> > Suggested feature:
> > ==================
> > Bootstrap would avoid creating a management network. Instead, after
> > bootstrapping a host, Engine would send a getVdsCaps probe to the
> > installed host, receiving a complete picture of the network
> > configuration on the host. Among this picture is the device that holds
> > the host's management IP address.
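(The device selection Dan describes could look roughly like this on the Engine side - a sketch over a simplified, hypothetical getVdsCaps-like dict; the real getVdsCaps output is much richer:)

```python
def find_management_device(caps, mgmt_ip):
    """Return the name of the device holding the management IP.

    'caps' maps device name -> {'addr': ip, ...}, a simplified
    stand-in for getVdsCaps output; returns None if no device
    matches. Illustrative only.
    """
    for name, info in caps.items():
        if info.get('addr') == mgmt_ip:
            return name
    return None
```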
> > 
> > Engine would send setupNetwork command to generate ovirtmgmt with
> > details devised from this picture, and according to the DC definition of
> > ovirtmgmt.  For example, if Vdsm reports:
> > 
> > - vlan bond4.3000 has the host's IP, configured to use dhcp.
> > - bond4 comprises eth2 and eth3
> > - ovirtmgmt is defined as a VM network with MTU 9000
> > 
> > then Engine sends the likes of:
> >   setupNetworks(ovirtmgmt: {bridged=True, vlan=3000, iface=bond4,
> >                 bonding=bond4: {eth2,eth3}, MTU=9000})
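(Putting the example together, the Engine-side derivation might be sketched as follows - hypothetical, simplified structures throughout; the real setupNetworks parameters differ:)

```python
def devise_ovirtmgmt(mgmt_device, vlans, bonds, dc_definition):
    """Build setupNetworks-style parameters for ovirtmgmt.

    mgmt_device: device holding the host's IP, e.g. 'bond4.3000'
    vlans: vlan device -> (master, vlanid), as reported by the host
    bonds: bond name -> slave NICs, as reported by the host
    dc_definition: DC-level ovirtmgmt properties (bridged, mtu)
    All structures are simplified stand-ins for illustration.
    """
    net = dict(dc_definition)  # bridged-ness, MTU come from the DC
    if mgmt_device in vlans:
        master, vlanid = vlans[mgmt_device]
        net['vlan'] = vlanid
        net['iface'] = master
    else:
        net['iface'] = mgmt_device
    if net['iface'] in bonds:
        net['bonding'] = {net['iface']: bonds[net['iface']]}
    return {'ovirtmgmt': net}
```

With the picture from the example (bond4.3000 holding the IP, bond4 = eth2+eth3, MTU 9000 from the DC), this yields vlan=3000 on iface bond4 with its bonding definition.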
> > 
> > A call to setSafeNetConfig would wrap the network configuration up.
> > 
> > Currently, the host undergoes a reboot as the last step of bootstrap.
> > This allows us to verify immediately if the host would be accessible
> > post-boot using its new network configuration. If we want to maintain
> > this, Engine would need to send a fenceNode request.
> > 
> > Benefits:
> > =========
> > - Simplified bootstrapping
> > - Simplified ovirt-node registration (similar ovirtmgmt-generation logic
> >   lies there).
> > - Host installation ends with an ovirtmgmt network that matches DC
> >   definition (bridged-ness, mtu, vlan).
> > - vdsm-to-engine connectivity is not required.
> > 
> > Drawbacks:
> > ==========
> > - We need to implement new Engine logic for devising the ovirtmgmt
> >   definition out of getVdsCaps output.
> > - ... your input is welcome here
> > 
> > Missing:
> > ========
> > A wiki feature page for this new behavior.
> > 
> > 
> 
> 


