----- Original Message -----
From: "Dan Kenigsberg" <danken(a)redhat.com>
To: "Simon Grinberg" <simon(a)redhat.com>
Cc: users(a)ovirt.org, "Tom Brown" <tom(a)ng23.net>
Sent: Thursday, January 10, 2013 2:09:28 PM
Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
On Wed, Jan 09, 2013 at 11:34:56AM -0500, Simon Grinberg wrote:
>
>
> ----- Original Message -----
> > From: "Dan Kenigsberg" <danken(a)redhat.com>
> > To: "Simon Grinberg" <simon(a)redhat.com>
> > Cc: users(a)ovirt.org, "Tom Brown" <tom(a)ng23.net>
> > Sent: Wednesday, January 9, 2013 6:20:02 PM
> > Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> >
> > On Wed, Jan 09, 2013 at 09:05:37AM -0500, Simon Grinberg wrote:
> > >
> > >
> > > ----- Original Message -----
> > > > From: "Dan Kenigsberg" <danken(a)redhat.com>
> > > > To: "Tom Brown" <tom(a)ng23.net>
> > > > Cc: "Simon Grinberg" <sgrinber(a)redhat.com>,
users(a)ovirt.org
> > > > Sent: Wednesday, January 9, 2013 2:11:14 PM
> > > > Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
> > > >
> > > > On Wed, Jan 09, 2013 at 10:06:12AM +0000, Tom Brown wrote:
> > > > >
> > > > >
> > > > > >> libvirtError: internal error Process exited while
> > > > > >> reading console log output
> > > > > > could this be related to selinux? can you try disabling
> > > > > > it and see if migration succeeds?
> > > > >
> > > > > It was indeed the case! My src node was set to disabled and
> > > > > my destination node was enforcing; this was due to the
> > > > > destination being the first HV built and therefore
> > > > > provisioned slightly differently (my kickstart server is a
> > > > > VM in the pool).
> > > > >
> > > > > It's interesting that a VM can be provisioned onto a node
> > > > > that is set to enforcing and yet cannot be migrated to it.
> > > >
> > > > I have (only a vague) memory of discussing this already...
> > > > Shouldn't oVirt-Engine be aware of selinux enforcement? If a
> > > > cluster has disabled hosts, an enforcing host should not be
> > > > operational (or at least warn the admin about that).
> > >
> > >
> > > I recall something like that, but I don't recall that we ever
> > > converged, and I can't find the thread.
> >
> > What is your opinion on the subject?
> >
> > I think that at the least, the scheduler must be aware of selinux
> > enforcement when it chooses migration destination.
> >
>
> Either all or none in the same cluster - that is the default.
>
> In a mixed environment, the non-enforcing hosts should be moved to
> non-operational, but VMs should not be migrated off because of this;
> we don't want them moved to protected hosts without the admin's
> awareness.
>
> As an exception to the above, have a config parameter that, in a
> mixed environment, allows migrating VMs from an insecure host onto a
> secure one, but never the other way around. This is to support the
> transition from a non-enabled system to an enabled one.
Please see Tom's report above:
> > > > > It was indeed the case! My src node was set to disabled and
> > > > > my destination node was enforcing ...
We apparently cannot migrate an insecure guest into an enforcing
system.
Well, you've asked for my opinion, not the current implementation :)
I'm not sure anything was implemented for the selinux requirements; I need to check.
The error I see in this thread is a runtime failure due to an improper setting, which is
to be expected when migrating from a non-labelled zone into a labelled one.
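
To make the idea concrete, here is a rough sketch (not engine or VDSM code, and the
helper names are made up): selinux_mode() reports a host's mode by reading
/sys/fs/selinux/enforce (absent when selinux is disabled, assuming the modern selinuxfs
mount point), and migration_allowed() expresses the "insecure onto secure, never the
other way around" rule I suggested, gated by a config knob:

    # Rough sketch only -- hypothetical helpers, not oVirt code.
    import os

    def selinux_mode(selinuxfs="/sys/fs/selinux"):
        # When SELinux is disabled, selinuxfs is not mounted, so the file is absent.
        enforce = os.path.join(selinuxfs, "enforce")
        if not os.path.exists(enforce):
            return "disabled"
        with open(enforce) as f:
            return "enforcing" if f.read().strip() == "1" else "permissive"

    def migration_allowed(src_mode, dst_mode, allow_upgrade=False):
        # Same mode on both ends is always fine; with the config knob set,
        # also allow moving to a more restrictive destination, never the
        # other way around.
        order = {"disabled": 0, "permissive": 1, "enforcing": 2}
        if src_mode == dst_mode:
            return True
        return allow_upgrade and order[dst_mode] > order[src_mode]

As Tom's report shows, the disabled -> enforcing direction can still fail at run time
because of labelling, so a scheduler check like this only covers the policy side, not
whether libvirt will accept the domain on the destination.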
>
> I think this is the closest I can get to the agreement (or at least
> concerns) raised in that old thread I can't find.