[node-devel] Rethinking implementation for oVirt Node Plugins

Perry Myers pmyers at redhat.com
Tue Jan 31 16:40:56 UTC 2012


On 01/30/2012 12:36 PM, Joey Boggs wrote:
> On 01/30/2012 11:39 AM, Geert Jansen wrote:
>> Thanks Perry. I had a look at the Wiki and I think it is mostly very
>> reasonable. Some comments/questions:
>>
>>  - Do I understand it correctly that the plugin <-> OS interface you
>> propose is "model based" rather than "file based"? All plugins' files
>> are under a separate root, and they only become enabled once a logical
>> setting in an XML description file enables it (e.g. services, kernel
>> modules, etc). I think that is probably the right approach. It will
>> make it cleaner for sure to manage plugins this way. It also allows
>> much better reporting of what a plugin does in the GUI.
> 
> The goal is to have it all separated with minimal overlap, as
> suggested earlier, with a registry of the plugins installed on the
> node that is visible within the web UI.
>>
>> The alternative would be a "file based" interface where the plugin
>> just puts files directly in various system directories. I think the
>> main requirement to ensure is that with a model based approach our
>> initial model is rich enough.

Actually I think this is a hybrid of the two models you describe above.

On the one hand, what is being installed are just 'normal RPMs', so
from that perspective the vdsm RPM used on oVirt Node should be no
different from the one used on Fedora.  It should put its files where
they normally go, in directories like /usr/bin, etc.

But on the other hand, being a plugin implies a higher-level model.  So
the 'top level' RPM in a given plugin (like vdsm) should include some
metadata about itself, so that we can do things like 'report which
plugins are installed' back to oVirt Engine.

That being said...  RPM has a rich metadata structure, so can't we use
some of that metadata to identify a particular package (vdsm,
cim-plugin, etc.) as such?

That way, the process of finding out 'which plugins are installed'
could simply be rpm queries looking for which RPMs are classified as
plugins.
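
One way this could look (the tag names here are invented; nothing is
settled) is a virtual Provides in each plugin's spec file:

```spec
# Hypothetical convention: each top-level plugin package declares a
# virtual provide so it can be found with a plain rpm query.
Name:     vdsm
Provides: ovirt-node-plugin
Provides: ovirt-node-plugin(vdsm) = %{version}
```

Listing installed plugins would then reduce to something like
`rpm -q --whatprovides ovirt-node-plugin`.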

There is other metadata needed, like:
* firewall ports
* services that need to start

But this is normally handled in RPM %post anyhow.
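
For illustration, the %post of a hypothetical plugin package (port
number and service name made up) might look like:

```spec
# Illustrative %post: open the plugin's port and enable its service.
# On oVirt Node the lokkit call would need to be emulated.
%post
lokkit --port=54321:tcp --update || :
chkconfig vdsmd on || :
```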

>>  - One important thing (actually #1 request so far) is to allow VDSM
>> hooks to be installed. So the XML descriptor would need to support that.
> 
> Don't see that being an issue; it should be added as a requirement now
> rather than later.

ack

Do we have an example of a vdsm hook RPM upstream that we can use as a
test case?
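
For reference, such a test case would be small.  A sketch (package and
file names are invented; the hooks directory layout follows vdsm's
existing one-subdirectory-per-hook-point convention) might be:

```spec
# Minimal hypothetical vdsm hook package for testing plugin install.
Name:     vdsm-hook-example
Requires: vdsm

%files
/usr/libexec/vdsm/hooks/before_vm_start/50_example
```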

>>  - The signature infrastructure is something that requires a business
>> process at Red Hat to manage the requests of ISVs, signing keys, etc.
>> I'm skeptical that we need this for v1, to be honest. Feedback I have
>> from vendors so far is that they'd prefer, at least initially, to keep
>> it open. We would still maintain a list of plugins that are supported,
>> but initially the overhead of basically running a PKI for ISVs is
>> something I wonder whether we want to take on on a short time scale. I
>> think the risk of abuse, certainly initially, is very low.

Right.  I think even for upstream we should do signature verification.
Here's a model that should work:

1. Upstream developer wants to create new oVirt Node plugin foo (and
   foo is not part of Fedora; if it is, we just leverage normal Fedora
   package signatures)

2. Upstream developer has to create a keypair, and they sign their
   package foo with their private key.  Meanwhile, they need to
   publicly distribute their public key

3. The plugin tool should be able to import a new public key into the
   oVirt Node image

4. Once this public key is imported, then a subsequent install of foo
   should work since the signature will be able to be verified
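
Concretely, steps 2-4 could look something like the following
transcript (key name and package file name are made up for
illustration; rpmsign picks up %_gpg_name from ~/.rpmmacros):

```shell
# 2. developer creates a keypair and signs the package
gpg --gen-key
rpmsign --addsign foo-plugin-1.0-1.noarch.rpm

# 3. the plugin tool imports the matching public key into the node image
gpg --export --armor 'Foo Developer' > RPM-GPG-KEY-foo
rpm --import RPM-GPG-KEY-foo

# 4. a subsequent install can now verify the signature
rpm -K foo-plugin-1.0-1.noarch.rpm
```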

From a product perspective, perhaps the ability to do step 3) is
restricted to the company distributing the oVirt Node derivative.  But
that's not necessarily a topic for upstream to decide :)

>>  - I think the offline merging of plugins with the hypervisor ISO is
>> good enough for v1, given that a re-install of RHEV-H via RHEV-M is
>> easy. So doing a live install only in v2 should be fine; however:

Agreed.  The other thing to consider is that with live installs,
because oVirt Node is stateless, you'd have to completely reinstall all
plugins on every boot, which may end up being a source of additional
complexity/errors.

So while live plugin installation is theoretically possible, I want to
avoid it, at least for the time being.

>>  - We should also allow plugins to be installed on RHEL... Otherwise
>> ISVs have to maintain two packages for their extensions, one for RHEL
>> and one for RHEV-H.
> 
> As long as vdsm looks in the same predefined places, this will not be
> an issue and should be fine for both scenarios.

Actually... if plugins are simply 'the normal Fedora/RHEL package with
some additional RPM metadata, and possibly some other metadata', then
this becomes trivial.  So I completely agree.

So moving things in a more RPM/yum-oriented direction buys us a lot
here, I think, in terms of not reinventing the wheel.

As mburns has pointed out to me, the hard work here is going to be:

* determining what metadata we need to insert into the spec file (or
  another metadata file)

* determining how oVirt Node should handle that metadata.  Example:
  + package foo says it needs port X opened.  It can either do this in
    %post via something like lokkit, or it can just specify this in
    the metadata
  + If done via lokkit, we need oVirt Node to be able to emulate the
    lokkit command so that we 'do the right thing' in the firewall
  + If done via metadata, we need the plugin tool to process that
    metadata so that it makes the appropriate change to the firewall

* Doing the work associated with tracking filesystem/config changes so
  that we can audit each change a plugin makes to the system
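
To make the metadata route concrete, here is a rough sketch (the file
name, key names, and the key=value format are all invented for
illustration) of how the plugin tool might turn a plugin's declared
firewall metadata into firewall actions:

```shell
#!/bin/sh
# Invented metadata format: a flat key=value file shipped by the plugin.
cat > /tmp/foo-plugin.metadata <<'EOF'
name=foo
firewall_tcp_ports=54321,16514
EOF

# Pull the declared tcp ports out of the metadata file, one per line.
parse_tcp_ports() {
    sed -n 's/^firewall_tcp_ports=//p' "$1" | tr ',' '\n'
}

# The real tool would call into the firewall (a lokkit emulation or
# direct iptables rules); here we just print the intended action.
for port in $(parse_tcp_ports /tmp/foo-plugin.metadata); do
    echo "open tcp port $port"
done
```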

Perry


