[Engine-devel] CPU Overcommit Feature
Doron Fediuck
dfediuck at redhat.com
Tue Dec 18 18:33:32 UTC 2012
----- Original Message -----
> From: "Dennis Jacobfeuerborn" <dennisml at conversis.de>
> To: "Andrew Cathrow" <acathrow at redhat.com>
> Cc: engine-devel at ovirt.org
> Sent: Tuesday, December 18, 2012 7:59:26 PM
> Subject: Re: [Engine-devel] CPU Overcommit Feature
>
> On 12/18/2012 06:33 PM, Andrew Cathrow wrote:
> >
> >
> > ----- Original Message -----
> >> From: "Dennis Jacobfeuerborn" <dennisml at conversis.de>
> >> To: engine-devel at ovirt.org
> >> Sent: Tuesday, December 18, 2012 12:30:34 PM
> >> Subject: Re: [Engine-devel] CPU Overcommit Feature
> >>
> >> On 12/17/2012 07:13 PM, Simon Grinberg wrote:
> >>>
> >>>
> >>> ----- Original Message -----
> >>>> From: "Greg Padgett" <gpadgett at redhat.com>
> >>>> To: "engine-devel" <engine-devel at ovirt.org>
> >>>> Sent: Monday, December 17, 2012 4:37:57 PM
> >>>> Subject: [Engine-devel] CPU Overcommit Feature
> >>>>
> >>>> Hi,
> >>>>
> >>>> I've been working on a feature to allow CPU overcommitment of hosts
> >>>> in a cluster. This first stage allows the engine to consider host
> >>>> CPU threads as cores for the purposes of VM resource allocation.
> >>>>
> >>>> This wiki page has further details, your comments are welcome!
> >>>> http://www.ovirt.org/Features/cpu_overcommit
> >>>
> >>> Basically looking good.
> >>> Hyperthreading, though, is vendor specific:
> >>> for AMD it's Clustered Multi-Thread, while for Intel it's
> >>> Hyper-Thread. The official name is simultaneous multithreading
> >>> (SMT), but no one outside of academia will recognize that.
> >>>
> >>> In libvirt, if I read it right, it's <attribute
> >>> name='thread_siblings'>.
> >>>
> >>> So why not just call it threads?
> >>> We'll have cpuSockets, cpuCores, and cpuThreads, which should be
> >>> clear in a CPU context.
> >>>
> >>> In the GUI just change hyperthreads to CPU threads, while in the
> >>> tooltip explain that it's either AMD Clustered Multi-Thread or
> >>> Intel Hyper-Thread.
> >>
> >> Does this affect only the number of potential vCPUs for the guests,
> >> or does this also have an impact on the actual scheduling?
> >> So far I always disabled HT out of fear that a 2-vCPU guest might
> >> actually be scheduled to run on 2 threads of the same core, but now
> >> I'm not so sure anymore. In the HT case, does KVM know that two
> >> threads belong to the same core, and will it only schedule its vCPUs
> >> on distinct cores? Is there some documentation about this somewhere?
> >
> > This is about the maximum number of vCPUs we can give to a VM.
> > If the machine has 32 physical cores that are hyperthreaded, then do
> > we say the max number of vCPUs for a single VM is 32 or 64?
>
> If the actual scheduling of vCPUs cannot distinguish between threads
> and cores, then why would you even want to limit yourself to 32 in
> your example? In that case the scheduling might end up being
> inefficient no matter how many vCPUs you assign to a guest, so why
> restrict the user? (You might of course want to limit the user for
> policy reasons, but that has nothing to do with the thread/core
> topic.)
>
> On the other hand, if KVM only schedules the vCPUs on distinct cores,
> and extending the count from 32 to 64 implies that this distinction is
> to be disabled, then this will have a performance impact on the guest.
> In that case I might want to limit the guests to just the 32 physical
> cores so two vCPUs of a single guest don't get scheduled on two
> threads of the same core.
>
> I've never really looked that closely into the scheduling issue, but
> it might play a role here, so I asked if someone had any pointers to
> info about how exactly KVM makes its scheduling decisions.
>
> Regards,
> Dennis
>
Dennis,
first of all, every virtual CPU is basically a qemu thread which can
run on any CPU thread. The scheduling is done by the kernel scheduler,
though we may control it using CPU pinning, i.e. you may ask for a
specific vCPU to run on a specific thread, which from the OS point of
view is just another core.
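As a sketch of the bookkeeping behind such pinning (the helper function and the hard-coded topology below are mine, purely illustrative, not engine code): the sibling information comes from the host, e.g. /sys/devices/system/cpu/cpuN/topology/thread_siblings_list or the libvirt capabilities XML, and lets you pick pin targets on distinct physical cores.

```python
# Illustrative only: pick one host CPU per physical core, so no two vCPUs
# of a guest end up pinned to sibling threads of the same core. The
# siblings map mimics what thread_siblings_list reports on a small
# 2-core/4-thread host; a real script would parse /sys or
# `virsh capabilities` instead of hard-coding it.

def distinct_core_cpus(thread_siblings):
    """Return one representative CPU id per physical core."""
    seen_cores = set()
    picks = []
    for cpu, siblings in sorted(thread_siblings.items()):
        core = frozenset(siblings)   # the sibling set identifies the core
        if core not in seen_cores:
            seen_cores.add(core)
            picks.append(cpu)
    return picks

# 2 cores, 2 threads each: cpus 0/2 share a core, as do 1/3.
siblings = {0: [0, 2], 1: [1, 3], 2: [0, 2], 3: [1, 3]}
print(distinct_core_cpus(siblings))  # -> [0, 1], one pin target per core
```

Whether you then pin to distinct cores or to siblings is exactly the workload-dependent choice discussed below.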
Indeed there are cases where this is not recommended, but there are
other cases where it will actually give you a performance boost, as
the L1 cache is shared by the sibling threads. So it's really up to
you to test your workload and decide if you wish to utilize it or not.
We're giving you powerful tools, and you can decide if and how to use
them.
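For illustration, the accounting change the feature makes, per Andrew's 32-vs-64 example, can be sketched like this (the function name and parameters are mine, not the engine's):

```python
# Rough sketch of the overcommit accounting: with the feature enabled,
# the engine counts each hardware thread as a core when computing the
# vCPU ceiling; otherwise only physical cores count.

def max_vcpus(sockets, cores_per_socket, threads_per_core,
              count_threads_as_cores):
    per_socket = cores_per_socket * (
        threads_per_core if count_threads_as_cores else 1)
    return sockets * per_socket

# Andrew's example host: 32 hyperthreaded physical cores
# (here assumed as 2 sockets x 16 cores x 2 threads).
print(max_vcpus(2, 16, 2, False))  # -> 32, threads ignored
print(max_vcpus(2, 16, 2, True))   # -> 64, threads counted as cores
```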
Doron