[node-devel] Needed: Node and Engine cooperation
Dan Kenigsberg
danken at redhat.com
Mon Oct 28 00:39:48 UTC 2013
On Sun, Oct 27, 2013 at 01:06:06PM +0100, Fabian Deutsch wrote:
> Am Sonntag, den 27.10.2013, 06:58 -0400 schrieb Moti Asayag:
> >
> > ----- Original Message -----
> > > From: "Fabian Deutsch" <fabiand at redhat.com>
> > > To: "arch" <arch at ovirt.org>, "node-devel" <node-devel at ovirt.org>
> > > Sent: Monday, October 21, 2013 8:45:41 PM
> > > Subject: [node-devel] Needed: Node and Engine cooperation
> > >
> > > Hey,
> > >
> > > with the extraction of the oVirt Engine / VDSM specific bits from Node
> > > in its 3.0 release, oVirt Node became unaware of whether it is being
> > > managed.
> > > Pre-3.0 Node (its TUI) had specific knowledge about what configuration
> > > files existed when it was registered to Engine. This is not the case in
> > > Node 3.0 anymore, and this leads to problems, e.g. a user removing
> > > Engine's network layout.
> > >
> > > A new way is needed to pass information between the management instance
> > > and Node's core. This information is needed e.g. to prevent the user
> > > from accidentally destroying Engine's network layout on a Node.
> >
> > How is it different from an admin connecting to a non-ovirt-node host and
> > manually de-configuring its network?
>
> You are right that there is not really a difference between those two
> scenarios.
> If vdsm can cope with this, then this shouldn't be a problem.
> My assumption was that vdsm had problems when the network configuration
> got changed in a different way than through vdsm.
> If vdsm is fine with this - the network configuration being changed by the
> user - then this is fine and we don't have a problem.
Vdsm is not "fine" with arbitrary changes to network configuration done
under its feet. If you're configuring an oVirt node, we strongly
recommend doing it via Engine. Anything else is likely to break
something or to be overridden by Engine, not to mention triggering evil
races within initscripts or Vdsm.
For plain (non-ovirt-node) hosts, we trust admins to know what they are
doing. The premise of ovirt-node is a bit different: it's all about
being hard to tweak and hard to break.
As much as I personally hate when my admin hands are tied by an
application, I think it is sensible for the TUI to report which Engine
controls it, and to lock the network configuration page when the node is
remote-controlled.
However, the TUI should allow explicit unlocking of this
"remote-controlled" state.
>
> > I'm not sure we need to prevent the administrator from performing any manual
> > changes on the host. Perhaps the TUI could reflect the network names by querying
> > vdsm/libvirt in the same way the engine does, so the user will be aware which
> > interfaces carry logical networks.
>
> The problem here is that the TUI is not aware of vdsm. That's why I
> suggest that VDSM publishes this information through e.g. the
> mechanism which is mentioned in [0], or maybe also through
> http://wiki.ovirt.org/Features/Node/FeaturePublishing
>
> Greetings
> fabian
>
> > >
> > > I've opened a bug [0] to suggest a way of sharing this kind of
> > > information.
> > >
> > > The idea is that Node and the management instance - Engine - share a set
> > > of common configuration keys in /etc/default/ovirt to pass the relevant
> > > bits to Node.
> > > For now I thought about these three keys:
> > >
> > >
> > > OVIRT_MANAGED_BY=<vendor>
> > > This key is used to (a) signal that the Node is being managed and (b)
> > > signal who is managing this node.
"vendor" is less interesting than the managing app, and the location of
its access point.
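For illustration, the entries could then look something like this;
OVIRT_MANAGED_BY_URL is just a suggested name, not part of the proposal:

    OVIRT_MANAGED_BY=ovirt-engine
    OVIRT_MANAGED_BY_URL=https://engine.example.com/ovirt-engine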
> > >
> > > OVIRT_MANAGED_IFNAMES=<ifname>[,<ifname>,...]
> > > This key is used to specify a comma-separated list of ifnames
> > > which are managed and for which the TUI shall display some information
> > > (IP, ...).
> > > This can also be used by the TUI to decide not to offer NIC
> > > configuration to the user.
I do not see the benefit of this. All (non-wifi) nics of a host are
reported by Vdsm to Engine and thus manageable by the latter.
> > >
> > > OVIRT_MANAGED_LOCKED_PAGES=<pagename>[,<pagename>,...]
> > > (Future) A list of pages which shall be locked, e.g. because the
> > > management instance is configuring that aspect (e.g. networking or
> > > logging).
> > >
> > >
> > > The third one (OVIRT_MANAGED_LOCKED_PAGES) needs a tighter integration
> > > and might be relevant in the future, but the first two should really be
> > > implemented quickly for the reasons given above.
.. but that's the only thing we need...
> > >
> > > It is quite late in the development process, but probably worth thinking
> > > about getting this into 3.3.1, to prevent all sorts of (accidental)
> > > user-driven collisions between Node and Engine.
Please do not delay the 3.3.1 beta for this. I prefer a release note:
"do not attempt to configure node networking when registered to Engine,
unless you really know what you are doing."