[Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
gleb at redhat.com
Sun Mar 11 09:27:55 EDT 2012
On Sat, Mar 10, 2012 at 12:24:47PM -0600, Anthony Liguori wrote:
> Let's step back here.
> Why are you writing these patches? It's probably not because you
> have a desire to say -cpu Westmere when you run QEMU on your laptop.
> I'd wager to say that no human has ever done that or that if they
> had, they did so by accident because they read documentation and
> thought they had to.
I'd be glad if QEMU chose -cpu Westmere for me as a default when it
detects a Westmere host CPU.
> Humans probably do one of two things: 1) no cpu option or 2) -cpu host.
And neither is optimal. Actually, both are bad: the first because the
default cpu is very conservative, and the second because there is no
guarantee that a guest will continue to work after a qemu or kernel
upgrade. Let me elaborate on the latter. Suppose the host CPU has a
kill_guest feature that was not yet implemented by kvm at the time a
guest was installed. Since kvm did not implement it, it was not present
in the vcpu during installation, so the guest didn't install the
"workaround kill_guest" module. Now an unsuspecting user upgrades the
kernel, tries to restart the guest, and fails. He writes an angry letter
to qemu-devel and is asked to reinstall his guest and move along.
> So then why are you introducing -cpu Westmere? Because ovirt-engine
> has a concept of datacenters and the entire datacenter has to use a
> compatible CPU model to allow migration compatibility. Today, the
> interface that ovirt-engine exposes is based on CPU codenames.
> Presumably ovirt-engine wants to add a Westmere CPU group and as
> such have levied a requirement down the stack to QEMU.
First of all, this is not only about live migration. The guest-visible
vcpu should not change after a guest reboot (or hibernate/resume)
either. Second, this concern exists even with just your laptop and a
single guest on it. There are three inputs into a "CPU model module":
1) host cpu, 2) qemu capabilities, 3) kvm capabilities. In the
datacenter scenario all three can change; on your laptop only the last
two can change (the first can change too when you get a new laptop).
The net result is that the guest-visible cpuid can change, and it
shouldn't. This is the goal of introducing -cpu Westmere: to prevent
that from happening.
> But there's no intrinsic reason why it uses CPU model names. VMware
> doesn't do this. It has a concept of compatibility groups.
As Andrew noted, not any more. There is no intrinsic reason, but people
are more familiar with Intel terminology than with random hypervisor
terminology.
> oVirt could just as well define compatibility groups like GroupA,
> GroupB, GroupC, etc. and then the -cpu option we would be discussing
> would be -cpu GroupA.
It could, but I can't see why that would be any less confusing.
> This is why it's a configuration option and not builtin to QEMU.
> It's a user interface and as such should be defined at a higher
> level.
This is not the only configuration that is built into QEMU. As it
stands now, QEMU does not even allow configuring cpuid enough to define
those compatibility groups outside of QEMU. And after the work is done
to allow enough configurability, there is not much left to do to
provide compatibility groups in QEMU itself.
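For reference, QEMU already ships its built-in model definitions as
[cpudef] sections in a config file (target-x86_64.conf); a
compatibility group would in principle just be one more such entry. The
fragment below is trimmed and its values are illustrative, not a real
group definition:

```
[cpudef]
   name = "GroupA"
   level = "11"
   vendor = "GenuineIntel"
   family = "6"
   model = "44"
   stepping = "1"
   feature_edx = "sse2 sse fxsr mmx pat cmov pge mtrr apic cx8 mce pae msr tsc pse de fpu"
   feature_ecx = "popcnt sse4.2 sse4.1 cx16 ssse3 sse3"
   extfeature_edx = "i64 syscall xd"
   extfeature_ecx = "lahf_lm"
   xlevel = "0x8000000A"
   model_id = "Illustrative compatibility group"
```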
> Perhaps it really should be VDSM that is providing the model info to
> libvirt? Then they can add whatever groups they want whenever they
> want as long as we have the appropriate feature bits.
> P.S. I spent 30 minutes the other day helping a user who was
> attempting to figure out whether his processor was a Conroe, Penryn,
> etc. Making this determination is fairly difficult and it makes me
> wonder whether having CPU code names is even the best interface for
>  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
> Anthony Liguori
> >(Also, there are additional low-level bits that really have to be
> >maintained somewhere, just to have sane defaults. Currently many CPUID
> >leafs are exposed to the guest without letting the user control them,
> >and worse: without keeping stability of guest-visible bits when
> >upgrading Qemu or the host kernel. And that's what machine-types are
> >for: to have sane defaults to be used as base.)
> >Let me give you a practical example: I had a bug report about improper
> >CPU topology information. After investigating it, I have found out
> >that the "level" cpudef field is too low; CPU core topology information
> >is provided on CPUID leaf 4, and most of the Intel CPU models on Qemu
> >have level=2 today (I don't know why). So, Qemu is responsible for
> >exposing CPU topology information set using '-smp' to the guest OS, but
> >libvirt would have to be responsible for choosing a proper "level" value
> >that makes that information visible to the guest. We can _allow_ libvirt
> >to fiddle with these low-level bits, of course, but requiring every
> >management layer to build this low-level information from scratch is
> >just a recipe to waste developer time.
> >(And I really hope that there's no plan to require all those low-level
> >bits to appear as-is on the libvirt XML definitions. Because that would
> >require users to read the Intel 64 and IA-32 Architectures Software
> >Developer's Manual, or the AMD64 Architecture Programmer's Manual and
> >BIOS and Kernel Developer's Guides, just to understand why something is
> >not working on his Virtual Machine.)
> > https://bugzilla.redhat.com/show_bug.cgi?id=689665