[node-devel] Rethinking implementation for oVirt Node Plugins
Joey Boggs
jboggs at redhat.com
Mon Jan 30 17:36:11 UTC 2012
On 01/30/2012 11:39 AM, Geert Jansen wrote:
> Thanks Perry. I had a look at the Wiki and I think it is mostly very
> reasonable. Some comments/questions:
>
> - Do I understand it correctly that the plugin <-> OS interface you
> propose is "model based" rather than "file based"? All plugins' files
> are under a separate root, and they only become enabled once a logical
> setting in an XML description file enables them (e.g. services, kernel
> modules, etc.). I think that is probably the right approach. It will
> certainly make it cleaner to manage plugins this way. It also allows
> much better reporting in the GUI of what a plugin does.
The goal is to have everything separated with minimal overlap, as
suggested earlier, with a place to register the plugins installed on
the node so they are visible within the web UI.
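For illustration only, a model-based descriptor along those lines might
look something like the sketch below; the element names are made up
here, not taken from the wiki proposal:

  <!-- hypothetical plugin descriptor, names are illustrative only -->
  <plugin name="isv-module-foo" version="1.0">
    <services>
      <service name="foo-agent" enable="true"/>
    </services>
    <kernel-modules>
      <module name="foo_net"/>
    </kernel-modules>
  </plugin>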
>
> The alternative would be a "file based" interface where the plugin
> just puts files directly in various system directories. With a model
> based approach, I think the main thing to ensure is that our initial
> model is rich enough.
>
> - One important thing (actually #1 request so far) is to allow VDSM
> hooks to be installed. So the XML descriptor would need to support that.
I don't see that being an issue; it should be added as a requirement
now rather than later.
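As a rough sketch (again with made-up element names), the same
descriptor could declare any vdsm hooks the plugin ships; vdsm runs
executables placed under /usr/libexec/vdsm/hooks/<hook_point>/, so the
descriptor only needs to say which hook points are used:

  <!-- hypothetical descriptor fragment for a vdsm hook -->
  <vdsm-hooks>
    <hook point="before_vm_start" script="50_foo_hook"/>
  </vdsm-hooks>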
>
> - The signature infrastructure is something that requires a business
> process at Red Hat to manage the requests of ISVs, signing keys, etc.
> I'm skeptical that we need this for v1, to be honest. Feedback I have
> from vendors so far is that they'd prefer, at least initially, to keep
> it open. We would still maintain a list of plugins that are supported,
> but the overhead of essentially running a PKI for ISVs is something I
> wonder whether we want to take on in the short term. I think the risk
> of abuse, certainly initially, is very low.
>
> - I think the offline merging of plugins with the hypervisor ISO is
> good enough for v1, given that a re-install of RHEV-H via RHEV-M is
> easy. So deferring live installation to v2 should be fine; however:
>
> - We should also allow plugins to be installed on RHEL... Otherwise
> ISVs have to maintain two packages for their extensions, one for RHEL
> and one for RHEV-H.
As long as vdsm looks in the same predefined places, this will not be
an issue and should work for both scenarios.
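To sketch what "the same predefined places" could mean in practice (the
paths below are illustrative), the same plugin RPM would drop identical
files on RHEL and on RHEV-H, only the delivery mechanism differs:

  /usr/libexec/vdsm/hooks/before_vm_start/50_foo_hook   # vdsm hook script
  /etc/ovirt.plugins.d/isv-module-foo                   # plugin registry entry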
>
> Regards,
> Geert
>
>
> On 01/29/2012 04:29 PM, Perry Myers wrote:
>> Geert, Itamar mentioned to me that you might have an interest in this
>> topic and, from your experience, possibly some thoughts on requirements
>> here. Take a look at the email below and also the wiki page mentioned,
>> and let us know if you have any thoughts. Thanks!
>>
>> (For others following this thread, there are some other points made
>> below the original message from discussion w/ Itamar)
>>
>> On 01/26/2012 10:47 AM, Perry Myers wrote:
>>> The current thinking/design around doing oVirt Node Plugins is here:
>>> http://ovirt.org/wiki/Node_plugins
>>>
>>> And is based mostly on the premise that:
>>> * Plugins are self contained blobs of RPMs that are internally
>>>   dependency complete
>>> * Plugins are installed via something like rpm -Uvh of a set of RPMs
>>>   contained inside the blob (likely a tarball)
>>>
>>> As I was thinking about some additional use cases for plugins (like
>>> including CIM/tog-pegasus and making vdsm a plugin), it seems
>>> redundant to pull packages out of Fedora repos and stick them in a
>>> tarball when there are perfectly good yum mirrors that already host
>>> those packages.
>>>
>>> It's also a lot of overhead for plugin creators to do dual updates:
>>> update the RPM in Fedora and simultaneously update and publish a new
>>> plugin.
>>>
>>> The core problem is: remote retrieval of packages and dependency
>>> resolution... wait, doesn't yum solve that set of problems?
>>>
>>> But there's no yum on oVirt Node... The original reasons for
>>> excluding yum were:
>>> * No python on the node (but vdsm pulled python in, so that's moot now)
>>> * Don't want folks running yum on a live oVirt Node image (we can
>>>   address that by making yum impossible to run when the image is
>>>   booted vs. offline)
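>>> One purely illustrative way to enforce that last point would be to
>>> drop yum's execute bit when the image is built, e.g. in the kickstart
>>> %post, so it cannot be run on the booted node while offline edits
>>> (done from a separate host) are unaffected:
>>>
>>>   # illustrative only: block yum inside the booted image
>>>   chmod a-x /usr/bin/yum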
>>>
>>> So I'd like to rethink the plugins concept by first starting with
>>> putting yum back on oVirt Node, and leveraging that for what it is
>>> good at.
>>>
>>> If we put yum on the node, then plugin installation could be as
>>> simple as:
>>>
>>>   mount ISO
>>>   cp foo.repo /etc/yum.conf.d/
>>>   yum install foo --enablerepo=foo
>>>
>>> If offline is desired, then the plugin is basically a repo inside a
>>> tarball and you do
>>>
>>>   mount ISO
>>>   cp foo.repo /etc/yum.conf.d/
>>>   yum localinstall foo/repo/foo.rpm
>>>
>>> In either case, we could enforce offline vs. online plugin
>>> integration by always shipping repo files disabled and requiring the
>>> user to enable them manually with --enablerepo=* when doing an online
>>> plugin install.
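>>> For illustration, a shipped-disabled repo file might look something
>>> like this (repo id and URL are made up):
>>>
>>>   [foo]
>>>   name=Foo plugin packages
>>>   baseurl=http://plugins.example.com/foo/
>>>   enabled=0
>>>   gpgcheck=1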
>>>
>>> So a plugin could simply be:
>>> * a repo file (with one or more repo definitions that are not
>>>   included in the base distro)
>>> * an rpm list
>>> * blacklisting info
>>> * optionally a collection of RPMs with repo metadata
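>>> Laid out on disk, such a plugin tarball might look roughly like this
>>> (file and directory names are illustrative):
>>>
>>>   foo-plugin.tar.gz
>>>       foo.repo      # repo definition(s), shipped disabled
>>>       rpmlist       # packages the plugin wants installed
>>>       blacklist     # blacklisting info
>>>       repo/         # optional: embedded RPMs plus repo metadata
>>>           repodata/
>>>           foo-1.0-1.noarch.rpm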
>>>
>>> This means that we can let normal yum dependency resolution work,
>>> and plugins essentially become dependent on things like 'what version
>>> of ovirt-node is installed' or 'what version of the kernel is
>>> installed'. If dependencies aren't met, the plugin installation
>>> should fail gracefully.
>>>
>>> We can prevent _core_ packages from being upgraded (like ovirt-node,
>>> kernel, etc.) by adding explicit exclude directives, so that if a
>>> plugin tries to bring in a version of a package already core to
>>> oVirt Node, it fails and reports "dude, you need a newer ISO
>>> already".
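>>> As a sketch, that protection could be as simple as an exclude line
>>> in the node's yum configuration (the package globs here are
>>> illustrative):
>>>
>>>   # e.g. in /etc/yum.conf on the node
>>>   [main]
>>>   exclude=kernel* ovirt-node*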
>>>
>>> Thoughts? This should make the plugin concept easier to implement
>>> and also make it much easier to support plugins that pull packages
>>> from remote repositories.
>>
>> Will the RPMs survive a node upgrade?
>> -------------------------------------
>> Only if the image you are upgrading with has also had the appropriate
>> plugins merged into it. The proper procedure would be:
>>
>> * Get ISOv1
>> * Run plugin tool to merge in Plugin1, 2, 3
>> * Deploy
>>
>> Later:
>> * ISOv2 comes out
>> * Get ISOv2
>> * Run plugin tool to merge in Plugin1, 2, 3
>> * Deploy
>>
>> If you merge in the plugins you want onto every ISO, you're fine. But
>> if you decide that you don't like Plugin3, you would do:
>>
>> * Get ISOv2
>> * Run plugin tool to merge in Plugin1, 2
>> * Deploy
>>
>> And in this case, the reinstalled/updated node would only have
>> Plugin1 and Plugin2, and not Plugin3.
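>> In shell terms the workflow could look roughly like this;
>> 'plugin-merge-tool' is a placeholder name, not an actual command:
>>
>>   # hypothetical commands illustrating the flow
>>   plugin-merge-tool ovirt-node-v2.iso plugin1.tar plugin2.tar \
>>       -o custom-node-v2.iso
>>   # deploy custom-node-v2.iso as usual; Plugin3 is simply left out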
>>
>> As far as I understand, this is the desired behavior, especially
>> since the long-term plan is to move to a completely stateless Node
>> where nothing is persisted to disk aside from the swap partition.
>>
>> How will oVirt Engine know what plugins a Node has installed?
>> -------------------------------------------------------------
>> Since plugins are just normal RPMs, there won't be any way to figure
>> out from an 'rpm -qa' command which plugins are installed; but since
>> each plugin is a separate entity with a metadata file, we'll maintain
>> a registry of which plugins are installed and what version each is at.
>> Something like:
>>
>> /etc/ovirt.plugins.d/cim
>> /etc/ovirt.plugins.d/vdsm
>> /etc/ovirt.plugins.d/isv-module-foo
>>
>> And vdsm can look at this to determine what to report back to oVirt
>> Engine for display to the user.
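>> A minimal sketch, assuming each registry file is a simple key=value
>> snippet (the format here is invented):
>>
>>   # hypothetical contents of /etc/ovirt.plugins.d/isv-module-foo
>>   name=isv-module-foo
>>   version=1.0-1
>>
>>   # vdsm could then gather everything under the registry with e.g.:
>>   grep -H . /etc/ovirt.plugins.d/*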
>