From: "Michael Kublin" <mkublin(a)redhat.com>
To: "Wei D Chen" <wei.d.chen(a)intel.com>
Cc: engine-devel(a)ovirt.org
Sent: Tuesday, April 2, 2013 2:49:02 PM
Subject: Re: [Engine-devel] Open Attestation integration with oVirt engine proposal has
submitted patchset5 for your
review
----- Original Message -----
> From: "Wei D Chen" <wei.d.chen(a)intel.com>
> To: "Doron Fediuck" <dfediuck(a)redhat.com>, "Ofri Masad"
<omasad(a)redhat.com>
> Cc: engine-devel(a)ovirt.org
> Sent: Friday, March 29, 2013 5:00:55 AM
> Subject: Re: [Engine-devel] Open Attestation integration with oVirt engine
> proposal has submitted patchset5 for your
> review
>
> Thanks Doron & Ofri,
>
> As to the question of the full cache refresh, we had already considered it
> and written our thoughts on the design page. I have no doubt that your
> suggestion is more reasonable; we had simply assumed that the expiration
> time is much longer than the time needed to poll all of the hosts, so this
> is indeed a potential issue we had overlooked. We will make an estimate
> first and give it a try if our schedule allows.
>
> Doron, we have reserved some effort to research the cluster-level policy.
> As oVirt is completely new to our engineers, could we finish the features
> currently in our pipeline (such as OVF and the REST API) first? Once these
> basic features are ready, if we still have some buffer, we will work on
> this improvement. Is this acceptable?
>
> Thanks again to Doron and Ofri.
>
>
> Best Regards,
> Dave Chen
>
>
> -----Original Message-----
> From: Doron Fediuck [mailto:dfediuck@redhat.com]
> Sent: Thursday, March 28, 2013 10:43 PM
> To: Ofri Masad
> Cc: engine-devel(a)ovirt.org; Chen, Wei D
> Subject: Re: [Engine-devel] Open Attestation integration with oVirt engine
> proposal has submitted patchset5 for your review
>
> ----- Original Message -----
> > From: "Ofri Masad" <omasad(a)redhat.com>
> > To: "Wei D Chen" <wei.d.chen(a)intel.com>
> > Cc: engine-devel(a)ovirt.org
> > Sent: Thursday, March 28, 2013 12:05:02 PM
> > Subject: Re: [Engine-devel] Open Attestation integration with oVirt
> > engine proposal has submitted patchset5 for your review
> >
> > Hi Dave,
> >
> > I would like to raise again the question of the full cache refresh for
> > each stale cache entry found.
> > This method can cause two unwanted situations:
> > 1. Choosing an untrusted host: let's say, for example, that you have
> > 1000 hosts in your pool. You look at the first host in the cache and
> > find that its attestation has expired. You refresh the entire pool
> > (there are 1000 hosts, so that must take some time). By the time the
> > last host in the pool has been refreshed, the first host may already
> > have expired again. But since you already checked it, you keep going
> > with your flow and select that host, even though it has expired and
> > may well be untrusted.
> >
> > 2. Infinite loop: let's say we try to fix what I described in 1.
> > Then we need to check again whether the host has expired before we
> > select it. If it has, the entire refresh process starts again. This
> > could potentially go on forever (unless I'm missing something, and
> > the expiration is much longer than the full re-cache process).
> >
> > Instead of refreshing the full cache we can do as follows (see the
> > sketch after this list):
> > - hold the cache entries sorted by expiration (if the expiration
> > time is the same for all hosts, a simple queue is enough);
> > - each time we need a new trusted host, select from the unexpired
> > hosts and refresh all expired hosts (in one query);
> > - if all hosts are expired, we can wait for the first host to be
> > declared trusted by the attestation server and select that host.
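> >
> > A rough sketch of that idea in Java (class and method names here are
> > only illustrative, not the real engine code):
> >
> > import java.util.ArrayList;
> > import java.util.Comparator;
> > import java.util.Date;
> > import java.util.List;
> > import java.util.PriorityQueue;
> >
> > public class TrustedHostCache {
> >
> >     static class HostEntry {
> >         final String hostId;
> >         Date expiresAt;
> >         HostEntry(String hostId, Date expiresAt) {
> >             this.hostId = hostId;
> >             this.expiresAt = expiresAt;
> >         }
> >     }
> >
> >     // entries ordered by expiration time; the head expires soonest
> >     private final PriorityQueue<HostEntry> cache =
> >             new PriorityQueue<HostEntry>(11, new Comparator<HostEntry>() {
> >                 public int compare(HostEntry a, HostEntry b) {
> >                     return a.expiresAt.compareTo(b.expiresAt);
> >                 }
> >             });
> >
> >     // Pick a trusted host without refreshing the whole pool.
> >     public HostEntry pickTrustedHost(Date now) {
> >         List<HostEntry> expired = new ArrayList<HostEntry>();
> >         while (!cache.isEmpty() && cache.peek().expiresAt.before(now)) {
> >             expired.add(cache.poll());   // collect only the stale entries
> >         }
> >         refreshAll(expired);             // one attestation query for all of them
> >         // If any host was still valid it is selected right away; otherwise
> >         // the freshly refreshed entries are already back in the queue.
> >         return cache.peek();
> >     }
> >
> >     // Placeholder for a single bulk call to the attestation server.
> >     private void refreshAll(List<HostEntry> hosts) {
> >         Date newExpiry = new Date(System.currentTimeMillis() + 10 * 60 * 1000L);
> >         for (HostEntry h : hosts) {
> >             h.expiresAt = newExpiry;
> >             cache.add(h);
> >         }
> >     }
> > }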
> >
> > Ofri
> >
> >
>
> Dave, adding another suggestion on top of Ofri's;
>
> Generally speaking, a cluster of hosts defines a set of shared capabilities
> (such as CPU level), which means that within the same cluster we expect to
> be able to freely migrate a VM from one host to another.
>
> The current trust-pools design breaks this concept, as it introduces a
> state in which a VM cannot migrate from a 'safe' host to an 'unsafe' host.
>
> This leads me to the suggestion of having attestation as a cluster policy
> rather than a VM-level property. It means that all hosts in the cluster are
> constantly monitored to verify that they are safe. If a host is declared
> unsafe by the Attestation server, it will become non-operational in the
> engine. This will simplify the implementation, since everything is ready
> for you once you have a 'safe' cluster and there is no need for any
> VM-level changes.
>
> In this way you keep the current concepts while simplifying the
> implementation, with very little worry about performance issues.
>
> Can you please share your thoughts on this suggestion?
>
> > ----- Original Message -----
> > > From: "Wei D Chen" <wei.d.chen(a)intel.com>
> > > To: engine-devel(a)ovirt.org
> > > Sent: Friday, March 22, 2013 11:34:55 AM
> > > Subject: [Engine-devel] Open Attestation integration with oVirt
> > > engine proposal has submitted patchset5 for your review
> > >
> > > Hi all,
> > >
> > > Before submitting this patch set, we updated our design page, and a new
> > > feature for VM templates has been added to the patch set. A lot of
> > > frontend changes have also been introduced in it.
> > > Welcome to review our patch set, and thanks in advance for your
> > > suggestions.
> > >
> > >
> > > Detailed description: http://wiki.ovirt.org/Trusted_compute_pools
> > >
> > > In this patch set, the following changes have been introduced:
> > >
> > > 1. GUI changes to support creating a trusted VM on a trusted
> > > physical host.
> > > 2. View/Edit VM changes to enable the end user to switch between the
> > > three "Run On" options.
> > > 3. Template-related changes to let the end user create a trusted VM
> > > template and later create trusted VMs based on that template.
> > > 4. Bug fixes and code cleanup.
> > > 5. Wiki design page update.
> > >
> > >
> > >
> > > Best Regards,
> > > Dave Chen
> > >
> > >
Hi, I read your design and I have a proposal about the implementation of the
cache.
Currently in the project we have a couple of places with custom-made cache
implementations; my advice is to start using something that is already
written and well tested. I think the cache from the Guava project
(http://code.google.com/p/guava-libraries/) is a good choice.
This cache has all the required functionality, plus some features that can
help your implementation, such as removal listeners (sync/async) and cache
loaders; it is also easy to configure and test.
The code for the cache can be put inside utils.
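
For example, a trust cache could look roughly like this (just a sketch; the
key type and the AttestationService interface are placeholders, not existing
engine code):

import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class AttestationCache {

    // Placeholder for whatever client talks to the attestation server.
    public interface AttestationService {
        boolean isHostTrusted(String hostName);
    }

    private final LoadingCache<String, Boolean> trustCache;

    public AttestationCache(final AttestationService attestationService) {
        trustCache = CacheBuilder.newBuilder()
                .expireAfterWrite(10, TimeUnit.MINUTES)   // attestation TTL
                .removalListener(new RemovalListener<String, Boolean>() {
                    @Override
                    public void onRemoval(RemovalNotification<String, Boolean> n) {
                        // react to expiration/eviction here; wrap with
                        // RemovalListeners.asynchronous(...) for an async listener
                    }
                })
                .build(new CacheLoader<String, Boolean>() {
                    @Override
                    public Boolean load(String hostName) {
                        // called automatically on a miss or after expiration
                        return attestationService.isHostTrusted(hostName);
                    }
                });
    }

    public boolean isTrusted(String hostName) {
        // returns the cached value or transparently reloads an expired entry
        return trustCache.getUnchecked(hostName);
    }
}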
Also, not sure whether this is relevant, but the following operations can
influence your feature: removing a host, adding a new host, moving a host to
maintenance and afterwards to another cluster/pool, and host restarts.
Thanks, Michael!
I suggest you read my mail about moving it to be a cluster-level policy,
which will make caching redundant. In this way a host with a trust level
that does not meet the policy will become non-operational. The process of
validating the hosts' trust will run in the background, similar to the load
balancing we are doing today.
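
To illustrate, such a background check could look roughly like this (only a
sketch with invented names; the real engine would use its existing
monitoring and scheduling infrastructure):

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClusterTrustPolicyMonitor {

    // Placeholders for the attestation client and the engine's host backend.
    public interface AttestationClient {
        boolean isTrusted(String hostName);
    }

    public interface HostBackend {
        List<String> getHostsInTrustedClusters();
        void setNonOperational(String hostName, String reason);
    }

    private final AttestationClient attestation;
    private final HostBackend backend;

    public ClusterTrustPolicyMonitor(AttestationClient attestation, HostBackend backend) {
        this.attestation = attestation;
        this.backend = backend;
    }

    // Periodic background pass over all hosts in trusted clusters,
    // similar in spirit to the existing load-balancing jobs.
    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                for (String host : backend.getHostsInTrustedClusters()) {
                    if (!attestation.isTrusted(host)) {
                        // host no longer meets the cluster trust policy
                        backend.setNonOperational(host, "host failed attestation");
                    }
                }
            }
        }, 0, 5, TimeUnit.MINUTES);
    }
}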