[node-devel] Rethinking implementation for oVirt Node Plugins

kmestery kmestery at cisco.com
Thu Jan 26 15:54:22 UTC 2012


On Jan 26, 2012, at 9:47 AM, Perry Myers wrote:
> The current thinking/design around doing oVirt Node Plugins is here:
> http://ovirt.org/wiki/Node_plugins
> 
> And is based mostly on the premise that:
> * Plugins are self contained blobs of RPMs that are internally
>  dependency complete
> * Plugins are installed via something like rpm -Uvh of a set of RPMs
>  contained inside the blob (tarball likely)
> 
> As I was thinking about some additional use cases for plugins (like
> including CIM/tog-pegasus and making vdsm a plugin), it seems like a lot
> of redundancy to pull packages out of Fedora repos, and stick them in a
> tarball when there are perfectly good yum mirrors that have those packages.
> 
> It's also a lot of overhead on the part of the plugin creators to be
> doing dual updates: Update RPM in Fedora and simultaneously update and
> publish a new plugin.
> 
> The core problem is: remote retrieval of packages and dependency
> resolution... wait, doesn't yum solve that set of problems?
> 
> But there's no yum on oVirt Node...  The original reasons for excluding
> yum were:
> * No python on the node (but vdsm pulled python in, so that's moot now)
> * Don't want folks running yum on a live oVirt Node image (we can
>  address that by making yum impossible to run when the image is booted
>  vs. offline)
> 
> So I'd like to rethink the plugins concept by first starting with
> putting yum back on oVirt Node, and leveraging that for what it is good at.
> 
> If we put yum on the node, then plugin installation could be as simple as:
> 
> mount ISO
> cp foo.repo /etc/yum.repos.d/
> yum install foo --enablerepo=foo
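> 
> For reference, foo.repo here would just be a standard yum repo file,
> shipped disabled as described below (the repo id, URL and gpg details
> are illustrative, not something we've settled on):
> 
>   [foo]
>   name=Foo plugin packages
>   baseurl=http://repos.example.com/foo/$releasever/$basearch/
>   enabled=0
>   gpgcheck=1
>   gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-foo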
> 
> If offline is desired, then the plugin is basically a repo inside a
>  tarball and you do:
> 
> mount ISO
> cp foo.repo /etc/yum.repos.d/
> yum localinstall foo/repo/foo.rpm
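> 
> (Here the repo file would point at the mounted media itself; the mount
> point and repo id below are again just illustrative:)
> 
>   [foo-local]
>   name=Foo plugin (local media)
>   baseurl=file:///mnt/foo-plugin/repo/
>   enabled=0
>   gpgcheck=0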
> 
> In either case, we could enforce offline vs. online plugin integration
> by always setting all repo files to disabled, and requiring the user
> to enable them explicitly with --enablerepo=* when doing an online
> plugin install
> 
> So a plugin could simply be (a possible layout is sketched after this
> list):
> * repo file (with one or more repo definitions that are not included in
>  the base distro)
> * rpm list
> * blacklisting info
> * optionally a collection of RPMs with repo metadata
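> 
> As a concrete (and purely hypothetical) example, an offline plugin
> tarball might unpack to something like:
> 
>   foo-plugin/
>     foo.repo        # repo definition(s), shipped disabled
>     manifest        # list of RPMs to install
>     blacklist       # blacklisting info
>     repo/           # optional local repo: RPMs plus createrepo metadata
>       repodata/
>       foo-1.0-1.fc16.noarch.rpm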
> 
> This means that we can let normal yum dep resolution work and plugins
> essentially become dependent on things like 'what version of ovirt-node
> is installed' or 'what version of the kernel is installed' and if
> dependencies aren't met, the plugin installation should fail gracefully
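> 
> (In the plugin's spec file that's just ordinary RPM dependency syntax;
> the version numbers here are made up:)
> 
>   Requires: ovirt-node >= 2.2.0
>   Requires: kernel >= 3.1.0
> 
> If the installed image doesn't satisfy those, yum bails out with its
> usual dependency error instead of half-installing the plugin.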
> 
> We can prevent _core_ files from being upgraded (like ovirt-node,
> kernel, etc) by adding explicit excludepkg directives so that if a
> plugin tries to bring in a version of a package already core to oVirt
> Node, it fails and reports "dude, you need a newer ISO already"
> 
> Thoughts?  This should make the plugin concept easier to implement,
> and make it much easier to support plugins that pull packages from
> remote repositories.
> 
I like this design a lot, because it leverages the things yum is good at and makes the job of plugin writers much easier. My only question is: what is the main difference between offline and online if all repo files are always set to disabled? In either case, wouldn't you need to do a --enablerepo=*?

Thanks,
Kyle

> Perry
> _______________________________________________
> node-devel mailing list
> node-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/node-devel
