[node-devel] Rethinking implementation for oVirt Node Plugins
Perry Myers
pmyers at redhat.com
Sun Jan 29 15:29:01 UTC 2012
Geert, Itamar mentioned to me that you might be interested in this
topic and, from your experience, might have some thoughts on
requirements here. Take a look at the email below and the wiki page it
mentions, and let us know if you have any thoughts. Thanks!
(For others following this thread, some additional points from a
discussion with Itamar appear below the original message.)
On 01/26/2012 10:47 AM, Perry Myers wrote:
> The current thinking/design around doing oVirt Node Plugins is here:
> http://ovirt.org/wiki/Node_plugins
>
> And is based mostly on the premise that:
> * Plugins are self-contained blobs of RPMs that are internally
>   dependency complete
> * Plugins are installed via something like rpm -Uvh of a set of RPMs
>   contained inside the blob (likely a tarball)
>
> As I was thinking about some additional use cases for plugins (like
> including CIM/tog-pegasus and making vdsm a plugin), it struck me as
> redundant to pull packages out of the Fedora repos and stick them in
> a tarball when there are perfectly good yum mirrors that already
> carry those packages.
>
> It's also a lot of overhead for plugin creators to do dual updates:
> update the RPM in Fedora and simultaneously update and publish a new
> plugin.
>
> The core problem is: remote retrieval of packages and dependency
> resolution... wait, doesn't yum solve that set of problems?
>
> But there's no yum on oVirt Node... The original reasons for
> excluding yum were:
> * No python on the node (but vdsm pulled python in, so that's moot
>   now)
> * We don't want folks running yum on a live oVirt Node image (we can
>   address that by making yum impossible to run when the image is
>   booted vs. offline; one possible mechanism is sketched below)
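> One way to do that (a sketch only; no mechanism has been decided
> here) is a boot-time step on the live image that makes the yum
> binary unrunnable, while offline image-editing tools can still use
> it against the unbooted image:
>
>   # hypothetical fragment of a boot script on the live image:
>   # strip all permissions from yum so it can't run on a booted node
>   chmod 0000 /usr/bin/yum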
>
> So I'd like to rethink the plugins concept by first putting yum back
> on oVirt Node and leveraging it for what it is good at.
>
> If we put yum on the node, then plugin installation could be as
> simple as:
>
> mount ISO
> cp foo.repo /etc/yum.repos.d/
> yum install foo --enablerepo=foo
>
> If offline installation is desired, then the plugin is basically a
> repo inside a tarball and you do:
>
> mount ISO
> cp foo.repo /etc/yum.repos.d/
> yum localinstall foo/repo/foo.rpm
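> For reference, the repo-in-a-tarball can be assembled with the stock
> createrepo tool (a minimal sketch; file and directory names are
> illustrative):
>
>   # collect the plugin's RPMs, generate yum repo metadata over them,
>   # and pack the result into the plugin blob
>   mkdir -p foo/repo
>   cp foo-*.rpm foo/repo/
>   createrepo foo/repo
>   tar czf foo-plugin.tar.gz foo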
>
> In either case, we could enforce offline vs. online plugin
> integration by always shipping repo files disabled, so that they
> have to be enabled explicitly with --enablerepo when the user is
> doing an online plugin install.
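> As a sketch, the foo.repo shipped with the plugin would then look
> something like this (repo id and URL are illustrative):
>
>   [foo]
>   name=Foo plugin packages
>   baseurl=http://repos.example.com/foo/
>   # disabled by default; an online install has to opt in with
>   # --enablerepo=foo
>   enabled=0
>   gpgcheck=1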
>
> So a plugin could simply be:
> * a repo file (with one or more repo definitions that are not
>   included in the base distro)
> * an rpm list
> * blacklisting info
> * optionally, a collection of RPMs with repo metadata
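> Concretely, the offline variant might unpack to a layout like this
> (a sketch only; names and the metadata format are not settled):
>
>   foo/
>     foo.repo              repo definition(s), shipped disabled
>     manifest              rpm list and blacklisting info
>     repo/
>       repodata/           createrepo metadata
>       foo-1.0-1.noarch.rpm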
>
> This means that we can let normal yum dependency resolution do its
> job: plugins essentially become dependent on things like 'what
> version of ovirt-node is installed' or 'what version of the kernel
> is installed', and if those dependencies aren't met, the plugin
> installation fails gracefully.
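> In RPM terms those are just ordinary versioned Requires in the
> plugin package's spec file (a sketch; package names and versions are
> illustrative):
>
>   # from a hypothetical foo-plugin.spec
>   Requires: ovirt-node >= 2.2.1
>   Requires: kernel >= 3.1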
>
> We can prevent _core_ packages (like ovirt-node, kernel, etc.) from
> being upgraded by adding explicit excludepkg directives, so that if
> a plugin tries to bring in a version of a package that is already
> core to oVirt Node, it fails and reports "dude, you need a newer ISO
> already".
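> With stock yum that is spelled as an exclude line in the node's
> yum.conf (a sketch; the exact package list is illustrative):
>
>   # /etc/yum.conf on the node
>   [main]
>   exclude=kernel* ovirt-node*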
>
> Thoughts? This should make the plugin concept easier to implement,
> and it should also make it much easier to support plugins that pull
> packages from remote repositories.
Will the RPMs survive a node upgrade?
-------------------------------------
Only if the image you are upgrading with has also had the appropriate
plugins merged into it. The proper procedure would be:
* Get ISOv1
* Run plugin tool to merge in Plugin1, 2, 3
* Deploy
Later:
* ISOv2 comes out
* Get ISOv2
* Run plugin tool to merge in Plugin1, 2, 3
* Deploy
If you merge in the plugins you want onto every ISO, you're fine. But
if you decide that you don't like Plugin3, you would do:
* Get ISOv2
* Run plugin tool to merge in Plugin1, 2
* Deploy
And in this case, the reinstalled/updated node would only have Plugin1
and Plugin2, not Plugin3.
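In shell terms the workflow would be something like the following
sketch ('node-plugin-merge' is a hypothetical name for the plugin
tool; no such tool exists yet):

  # merge the desired plugins into each new base ISO before deploying
  node-plugin-merge --iso ovirt-node-image-v2.iso \
                    --plugin plugin1 --plugin plugin2 \
                    --output ovirt-node-image-v2-site.iso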
As far as I understand, this is the desired behavior, especially since
the long-term goal is to move to a completely stateless Node where
nothing is persisted to disk aside from the swap partition.
How will oVirt Engine know what plugins a Node has installed?
-------------------------------------------------------------
Since plugins are just normal RPMs, there won't be any way to figure
out 'which plugins are installed' from rpm -qa output; but since each
plugin is a separate entity with a metadata file, we'll maintain a
registry of which plugins are installed and what version each is at.
Something like:
/etc/ovirt.plugins.d/cim
/etc/ovirt.plugins.d/vdsm
/etc/ovirt.plugins.d/isv-module-foo
And vdsm can look at this to determine what to report back to oVirt
Engine for display to the user.
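A minimal sketch of that lookup, assuming each registry file is a flat
key=value file with a 'version' key (the actual format isn't specified
here):

  # enumerate the plugin registry and print name/version pairs
  for plugin in /etc/ovirt.plugins.d/*; do
      name=$(basename "$plugin")
      version=$(awk -F= '$1 == "version" { print $2 }' "$plugin")
      echo "$name $version"
  done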