[Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
Gleb Natapov
gleb at redhat.com
Mon Mar 12 13:15:32 UTC 2012
On Mon, Mar 12, 2012 at 01:04:19PM +0000, Daniel P. Berrange wrote:
> On Mon, Mar 12, 2012 at 09:52:27AM -0300, Eduardo Habkost wrote:
> > On Sun, Mar 11, 2012 at 09:12:58AM -0500, Anthony Liguori wrote:
> > > On 03/11/2012 08:27 AM, Gleb Natapov wrote:
> > > >On Sat, Mar 10, 2012 at 12:24:47PM -0600, Anthony Liguori wrote:
> > > >>Let's step back here.
> > > >>
> > > >>Why are you writing these patches? It's probably not because you
> > > >>have a desire to say -cpu Westmere when you run QEMU on your laptop.
> > > >>I'd wager to say that no human has ever done that or that if they
> > > >>had, they did so by accident because they read documentation and
> > > >>thought they had to.
> >
> > No, it's because libvirt doesn't handle all the tiny details
> > involved in specifying a CPU. All libvirt knows about is a set of CPU
> > flag bits, but it knows nothing about 'level', 'family', and 'xlevel',
> > but we would like to allow it to expose a Westmere-like CPU to the
> > guest.
>
> This is easily fixable in libvirt - so for the purposes of this
> discussion, IMHO, we can assume libvirt will support level, family,
> xlevel, etc.
>
And fill in all cpuid leaves by querying /dev/kvm when needed or, if TCG
is used, by replicating QEMU's logic? And since QEMU should be usable
without libvirt, the same logic has to be implemented in QEMU anyway.
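To make it concrete, here is a rough, illustrative sketch (not QEMU or
libvirt code) of what "querying /dev/kvm" means: enumerating every CPUID
leaf the host KVM is willing to expose, via the KVM_GET_SUPPORTED_CPUID
system ioctl. Anything that wants to build a complete CPU definition
from scratch has to walk and interpret all of these entries itself:

/* Sketch only: dump the CPUID leaves supported by KVM. */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    unsigned nent = 64;
    struct kvm_cpuid2 *cpuid;

    /* KVM returns E2BIG if the entry buffer is too small: grow and retry. */
    for (;;) {
        cpuid = calloc(1, sizeof(*cpuid) +
                          nent * sizeof(struct kvm_cpuid_entry2));
        if (!cpuid)
            return 1;
        cpuid->nent = nent;
        if (ioctl(kvm, KVM_GET_SUPPORTED_CPUID, cpuid) == 0)
            break;
        free(cpuid);
        if (errno != E2BIG) {
            perror("KVM_GET_SUPPORTED_CPUID");
            return 1;
        }
        nent *= 2;
    }

    for (unsigned i = 0; i < cpuid->nent; i++) {
        struct kvm_cpuid_entry2 *e = &cpuid->entries[i];
        printf("leaf 0x%08x idx %u: eax=%08x ebx=%08x ecx=%08x edx=%08x\n",
               e->function, e->index, e->eax, e->ebx, e->ecx, e->edx);
    }

    free(cpuid);
    close(kvm);
    return 0;
}

And TCG has no equivalent kernel interface at all, so there the same
defaults would have to be reproduced from QEMU's built-in CPU model
tables - which is exactly the duplication in question.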
--
Gleb.