On 04/09/2014 06:41 AM, Paul Jansen wrote:
>
>
> ----- Original Message -----
>> From: "Paul Jansen"
>> To: "Itamar Heim" "Fabian Deutsch"
>> Cc: "users"
>> Sent: Monday, April 7, 2014 3:25:19 PM
>> Subject: Re: [Users] node spin including qemu-kvm-rhev?
>>
>> On 04/07/2014 11:46 AM, Fabian Deutsch wrote:
>>
>>> Hey Paul,
>>>
>>> Am Montag, den 07.04.2014, 01:28 -0700 schrieb Paul Jansen:
>>>> I'm going to try top posting this time to see if it ends up looking a
>>>> bit better on the list.
>>>
>>> you could try sending text-only emails :)
>>>
>>>> By the 'ovirt hypervisor packages' I meant installing the OS first of
>>>> all and then making it into an ovirt 'node' by installing the required
>>>> packages, rather than installing from a clean slate with the ovirt
>>>> node iso. Sorry if that was a bit unclear.
>>>
>>> Okay - thanks for the explanation.
>>> In general I would discourage installing the ovirt-node package on a
>>> normal host.
>>> If you still want to try it, be aware that the ovirt-node pkg might mess
>>> with your system.
>>
>>
>> I'm pretty sure we are on the same page here. I just checked the ovirt
>> 'quickstart' page and it calls the various hypervisor nodes 'hosts',
>> i.e. Fedora host, EL host, ovirt node host.
>> If the ovirt node included the qemu-kvm-rhev package - or an updated
>> qemu-kvm - it would mean that both ovirt node hosts and Fedora hosts
>> could support live storage migration. It would only be EL hosts that
>> do not support that feature at this stage. We could perhaps have a
>> caveat in the documentation for this.
>> Fabian, were you thinking that if not all 'hosts' supported live
>> migration, the cluster could disable that feature, based on
>> capabilities that the hosts expose to the ovirt server? This would be
>> another way of avoiding the confusion.
>>
>> Thanks guys for the great work you are doing with ovirt.
>>
>
> Paul,
> this is something that vdsm needs to report to the engine, so the
> engine will know what is / isn't supported. It's a bigger request, as
> today we're mostly based on the cluster compatibility level.
>
> Additionally, it is possible to mix EL hosts with nodes, or with old
> (non-rhev) nodes. Each of these cases will break live storage
> migration.
>
> How do you suggest we mitigate it?
>
Well, when you choose to migrate a VM under VMware's vCenter you can
choose to migrate either the host or the datastore. For whichever one
you choose there is a validation step to check that you are able to
perform the migration (i.e. the capabilities of the host). I can see in
ovirt that we are showing the KVM version on hosts. This matches the
package version of the qemu-kvm package (or the qemu-kvm-rhev package,
if installed?). Could we have some sort of cutoff point where we know
which versions of KVM (i.e. qemu-kvm or qemu-kvm-rhev) support the
storage migration feature, and if a version is found that doesn't meet
the required heuristics we simply indicate that the validation process
for the migration has failed?
We could provide some small output indicating why it has failed.
Does this sound like a reasonable approach?
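To make the idea concrete, here is a minimal sketch of such a pre-migration check. All names and the version cutoff are hypothetical - this is not the actual vdsm/engine interface, and the real cutoff version would have to be confirmed:

```python
# Hypothetical sketch of the validation idea above: decide whether a host's
# QEMU build supports live storage migration before allowing the operation.
# Package names are real, but the cutoff value is illustrative only.

from dataclasses import dataclass


@dataclass
class HostCaps:
    qemu_package: str     # e.g. "qemu-kvm" or "qemu-kvm-rhev"
    qemu_version: tuple   # e.g. (0, 12, 1)


def supports_live_storage_migration(caps: HostCaps) -> tuple:
    """Return (ok, reason) so the UI can show why validation failed."""
    version_str = ".".join(map(str, caps.qemu_version))
    # Assumption: any qemu-kvm-rhev build carries the live block-copy support.
    if caps.qemu_package == "qemu-kvm-rhev":
        return True, "qemu-kvm-rhev detected"
    # Assumption: plain qemu-kvm gained the feature at some cutoff version
    # (illustrative value only; the real cutoff would need to be confirmed).
    cutoff = (1, 3, 0)
    if caps.qemu_version >= cutoff:
        return True, "qemu-kvm %s meets the cutoff" % version_str
    return False, ("live storage migration unsupported: %s %s"
                   % (caps.qemu_package, version_str))


ok, reason = supports_live_storage_migration(HostCaps("qemu-kvm", (0, 12, 1)))
print(ok, reason)
# → False live storage migration unsupported: qemu-kvm 0.12.1
```

The (ok, reason) pair matches the suggestion above: the migration dialog could run this per candidate host and surface the reason string when validation fails.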
Is there any news on discussions with the CentOS people as to where a
qemu-kvm-rhev package could be hosted (that we could then take advantage
of)?
If the hosting of an updated qemu-kvm (or qemu-kvm-rhev) is not sorted
out in the short term: I did some quick calculations last night, and it
seems, based on previous EL point releases (this is hardly scientific,
I know), that we are not likely to see an EL 6.6 for another few
months. We may see an EL 7.0 before that timeframe.
Ovirt can obviously jump on new EL releases to use as hosts in a new
ovirt version, but it seems this option is still some time away.
Paul - just to clarify: you are mentioning "versions" all the time, but
qemu-kvm doesn't have these features regardless of version. You need
the qemu-kvm-rhev package, which we hope to get into the CentOS Cloud
or Virt SIG, and which for now is available via jenkins.ovirt.org,
built from the CentOS SRPM.