[Engine-devel] Open Attestation integration with oVirt engine proposal has submitted patchset5 for your review

Doron Fediuck dfediuck at redhat.com
Tue Apr 2 07:08:07 UTC 2013



----- Original Message -----
> From: "Wei D Chen" <wei.d.chen at intel.com>
> To: "Doron Fediuck" <dfediuck at redhat.com>, "Ofri Masad" <omasad at redhat.com>
> Cc: engine-devel at ovirt.org
> Sent: Friday, March 29, 2013 5:00:55 AM
> Subject: Re: [Engine-devel] Open Attestation integration with oVirt engine proposal has submitted patchset5 for your
> review
> 
> Thanks Doron & Ofri,
> 
> As to the question of the cache refresh, we already considered this and
> wrote our thoughts on the design page. I have no doubt that your suggestion
> is more reasonable; we had simply assumed that the expiration time is much
> longer than the time needed to poll all of the hosts, so this is a potential
> issue we overlooked. Let us make an estimate first; we will try it if our
> schedule allows.
> 
> Doron, we have reserved some effort to research the cluster-level policy.
> As oVirt is completely new to our engineers, could we finish the features
> currently in the pipeline (such as OVF and the REST API) first? Once these
> basic features are ready, and if we still have some buffer, we will make
> further improvements. Is this acceptable?
> 
> Thanks again to Doron and Ofri.
> 
> 
> Best Regards,
> Dave Chen
> 
> 

Hi Dave,
Thanks for your response.

While looking for additional references I also read [1], which gave me some
insight into your design. There is a difference between the two architectures
which has a big impact on the implementation, so I think it is worth explaining:

OpenStack currently has no clear definition of a cluster as a migration domain
the way oVirt has. This means the OpenStack design cannot exploit this benefit
of oVirt, which provides a domain in which all VMs should be able to migrate
freely from any host to any other host.

In OpenStack you may have more than one scheduling service, which means you
can scale to very large numbers of scheduling requests. In oVirt's current
implementation all scheduling is handled by a single process, so every
scheduling delay must be carefully considered to avoid a performance hit.

With my suggestion of a cluster-level policy, you will get the trust level
you need without changing the oVirt scheduler at all(!), so you keep the
current concepts while avoiding any performance issues you might introduce
by adding trust to the scheduling logic. As you can see, this should give
you a much cleaner, simpler and safer implementation.

Working at the cluster level will require a different implementation on the
engine side to make sure the cluster complies with the trust level you want.
The attestation broker and caching are still needed, but the whole scheduling
part should be removed. Obviously this will also give you a completely
different API, UI and potentially OVF handling, which will make the current
implementation redundant. This is not an incremental improvement but a
different design, so I strongly suggest you re-think the process to better
evaluate both options.
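As a rough illustration of the cluster-level idea, the monitoring loop the
engine would run might look like the sketch below. The names here
(`Host`, `enforce_cluster_trust`, `attest`, the status strings) are all
hypothetical and stand in for the real engine internals; the point is only
that trust enforcement happens outside the scheduler:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    status: str = "up"  # "up" or "non_operational" (illustrative states)

def enforce_cluster_trust(hosts, attest):
    """Run once per monitoring cycle for a trusted cluster.

    attest(host_names) -> {name: trusted?} is a single batch call to the
    attestation broker. Any host reported untrusted is moved to
    non-operational, so the scheduler never needs trust-aware logic.
    """
    verdicts = attest([h.name for h in hosts if h.status == "up"])
    for h in hosts:
        if h.status == "up" and not verdicts.get(h.name, False):
            h.status = "non_operational"  # engine stops scheduling VMs here
    return [h for h in hosts if h.status == "up"]
```

With this shape, a VM scheduled anywhere in the cluster lands on a trusted
host by construction, and migration within the cluster stays unrestricted.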

Doron

[1] https://wiki.openstack.org/wiki/TrustedComputingPools

> -----Original Message-----
> From: Doron Fediuck [mailto:dfediuck at redhat.com]
> Sent: Thursday, March 28, 2013 10:43 PM
> To: Ofri Masad
> Cc: engine-devel at ovirt.org; Chen, Wei D
> Subject: Re: [Engine-devel] Open Attestation integration with oVirt engine
> proposal has submitted patchset5 for your review
> 
> ----- Original Message -----
> > From: "Ofri Masad" <omasad at redhat.com>
> > To: "Wei D Chen" <wei.d.chen at intel.com>
> > Cc: engine-devel at ovirt.org
> > Sent: Thursday, March 28, 2013 12:05:02 PM
> > Subject: Re: [Engine-devel] Open Attestation integration with oVirt
> > engine proposal has submitted patchset5 for your review
> > 
> > Hi Dave,
> > 
> > I would like to raise again the question of the full cache refresh for
> > each stale cache entry found.
> > This method can cause two unwanted situations:
> >  1. Choosing an untrusted host: let's say, for example, that you have
> > 1000 hosts in your pool. You look at the first host in the cache and
> > find that its attestation has expired, so you refresh the entire pool
> > (there are 1000 hosts; that must take some time). By the time the last
> > host in the pool has been refreshed, the first host may already have
> > expired again. But since you already checked it, you continue with your
> > flow and select that host, even though it has expired and may well be
> > untrusted.
> > 
> >  2. Infinite loop: let's say we try to fix what I described in 1.
> > Then we need to check again whether the host has expired before we
> > select it. If it has, the entire refresh process starts again. This
> > could potentially go on forever (unless I'm missing something and the
> > expiration time is much longer than the full re-cache process).
> > 
> > Instead of refreshing the full cache we can do as follows:
> >  - hold the cache entries sorted by expiration (if the expiration
> > time is the same for all hosts, a simple queue is enough).
> >  - each time we need a new trusted host, select from the unexpired
> > hosts and refresh all expired hosts (in one query).
> >  - if all hosts are expired, we can wait for the first host to be
> > declared trusted by the attestation server and select that host.
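The three steps above can be sketched as follows. This is a sketch only:
`query_attestation_server` stands in for a batch call to the attestation
broker and is not a real oVirt or OpenAttestation API, and the heap plays
the role of the expiration-sorted queue:

```python
import heapq
import time

class AttestationCache:
    """Expiration-sorted trust cache: pick only unexpired hosts and
    re-attest all expired hosts in a single batch query."""

    def __init__(self, ttl_seconds, query_attestation_server):
        self.ttl = ttl_seconds
        # batch query: list of hosts -> {host: trusted?} (assumed interface)
        self.query = query_attestation_server
        self.heap = []  # min-heap of (expiry_time, host)

    def add_hosts(self, hosts):
        now = time.time()
        trust = self.query(hosts)  # one query for the whole batch
        for h in hosts:
            if trust.get(h):
                heapq.heappush(self.heap, (now + self.ttl, h))

    def select_trusted_host(self):
        now = time.time()
        expired = []
        # Drain expired entries; they will be re-attested in one batch.
        while self.heap and self.heap[0][0] <= now:
            expired.append(heapq.heappop(self.heap)[1])
        chosen = self.heap[0][1] if self.heap else None
        if expired:
            self.add_hosts(expired)  # single refresh query for all stale hosts
        if chosen is None and self.heap:
            # All hosts were expired: take the first re-attested one.
            chosen = self.heap[0][1]
        return chosen
```

Because expired entries are refreshed in one query and selection never
waits on hosts that are still valid, neither the stale-selection race nor
the refresh loop from points 1 and 2 can occur.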
> > 
> > Ofri
> >     
> > 
> 
> Dave, adding another suggestion on top of Ofri's;
> 
> Generally speaking, a cluster of hosts defines many joint features (such as
> CPU level), which means that in the same cluster we would expect to be able
> to freely migrate a VM from one host to another.
> 
> The current trusted-pools design breaks this concept, as it introduces a
> state where a VM cannot migrate from a 'safe' host to an 'unsafe' host.
> 
> This leads me to the suggestion of having attestation as a cluster policy
> rather than a VM-level property. It means that all hosts in the cluster are
> constantly monitored for safety. If a host is declared unsafe by the
> attestation server, it becomes non-operational in the engine. This will
> simplify the implementation, since everything is ready for you once you
> have a 'safe' cluster, with no need for any VM-level changes.
> 
> This way you keep the current concepts while simplifying the implementation,
> with very little concern about performance issues.
> 
> Can you please share your thoughts on this suggestion?
> 
> > ----- Original Message -----
> > > From: "Wei D Chen" <wei.d.chen at intel.com>
> > > To: engine-devel at ovirt.org
> > > Sent: Friday, March 22, 2013 11:34:55 AM
> > > Subject: [Engine-devel] Open Attestation integration with oVirt
> > > engine proposal has submitted patchset5 for your review
> > > 
> > > Hi all,
> > > 
> > > Before submitting this patch set we updated our design page, and a
> > > new feature for VM templates has been added to this patch set, which
> > > also imports a lot of frontend changes.
> > > You are welcome to review our patch set, and thanks in advance for
> > > your suggestions.
> > > 
> > > 
> > > Detailed description: http://wiki.ovirt.org/Trusted_compute_pools
> > > 
> > > In this patch set, the following changes have been introduced:
> > > 
> > > 1. GUI changes to support creating a trusted VM on a trusted
> > > physical host.
> > > 2. View/Edit VM changes to enable the end user to switch between the
> > > three 'run on' options.
> > > 3. Template-related changes to let the end user create a trusted VM
> > > template and then create trusted VMs based on that template.
> > > 4. Bug fixes and code cleanup.
> > > 5. Wiki design page updates.
> > > 
> > > 
> > > 
> > > Best Regards,
> > > Dave Chen
> > > 
> > > 
> _______________________________________________
> Engine-devel mailing list
> Engine-devel at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/engine-devel
> 


