[Engine-devel] CPU topology in the API

Hi list,

The current status is that, though they look the same, CPU topology for hosts and VMs differs. In both you have <topology cores="N" sockets="Y"/>:
- for hosts: N = total cores on the host, Y = number of sockets
- for VMs: N = cores per socket, Y = number of sockets

This means that for a host that has 4 sockets with 8 cores per socket, the topology will be presented as <topology cores="32" sockets="4"/>, while a VM with the same requested topology will show <topology cores="8" sockets="4"/>.

I think they should not be different, to avoid confusion, but:
* The information we display for the host can't count on the fact that cores are distributed evenly across sockets, because theoretically a host could contain several different CPUs, so it can't be displayed as a multiplication.
* On the other hand, changing the VM topology will break the API, though it will make it aligned both with hosts and with the 'New VM' dialogue in the UI.

For oVirt 3.x it may be that nothing can be changed; however, for oVirt it is allowed in theory to break the API (a bit :)), so the options as I see them are:
1. Don't touch; leave the confusion.
2. Align hosts to VMs, with the risk that on some hosts this may be a bit misleading - should be rare.
3. Make host topology look like the VM's, but allow multiple CPU topologies in the CPUs sub-collection of the host. (This also requires a change in the VDSM API.)
4. Align VMs to hosts.

I would go for 4 or 2. The current CPU topology for hosts is a new commit, thus it may be allowed to change it now since no one is using it yet. This works in favour of 2. In any case, only 3 discloses all the information in all possible cases.

Thoughts?

Thanks,
Simon.
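The divergence described above can be illustrated with a short sketch. The element names and values come from the examples in the message; the helper function itself is hypothetical, just to make the two semantics concrete:

```python
import xml.etree.ElementTree as ET

def total_cores(topology_xml: str, is_host: bool) -> int:
    """Total core count implied by a <topology/> element under the
    current (diverging) semantics: for hosts, 'cores' is already the
    host-wide total; for VMs, 'cores' is per socket."""
    elem = ET.fromstring(topology_xml)
    cores = int(elem.get("cores"))
    sockets = int(elem.get("sockets"))
    return cores if is_host else cores * sockets

# The same physical/requested topology, reported differently:
host_total = total_cores('<topology cores="32" sockets="4"/>', is_host=True)  # 32
vm_total = total_cores('<topology cores="8" sockets="4"/>', is_host=False)    # 32
```

Both calls yield 32 cores, but only because the caller knows whether the element came from a host or a VM, which is exactly the confusion at issue.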

On 04/30/2012 02:37 PM, Simon Grinberg wrote:
Hi list,
The current status is that though they look the same, CPU topology for hosts and VMs differ.
In both you have <topology cores="N" sockets="Y"/>
for hosts: N = total cores on the host, Y = number of sockets; for VMs: N = cores per socket, Y = number of sockets
This means that for a host that has 4 sockets with 8 cores per socket, the topology will be presented as <topology cores="32" sockets="4"/>, while a VM with the same requested topology will show <topology cores="8" sockets="4"/>.
I think they should not be different to avoid confusion but:
* The information we displayed for the host can't count on the fact that cores are distributed evenly across sockets, because theoretically a host could contain several different CPUs, so it can't be displayed as a multiplication.
Isn't that rare enough to be practically nonexistent in the standard case?
* On the other hand changing the VM topology will break the API though it will make it aligned both with hosts and with the 'New VM' dialogue in the UI.
For oVirt 3.x it may be that nothing can be changed however for oVirt it is allowed in theory to break the API (a bit :)) so the options as I see it are:
1. Don't touch; leave the confusion.
2. Align hosts to VMs, with the risk that on some hosts this may be a bit misleading - should be rare.
3. Make host topology look like the VM's, but allow multiple CPU topologies in the CPUs sub-collection of the host. (This also requires a change in the VDSM API.)
4. Align VMs to hosts.
I would go for 4 or 2. The current CPU topology for hosts is a new commit, thus it may be allowed to change it now since no one is using it yet. This works in favour of 2. In any case, only 3 discloses all the information in all possible cases.
Thoughts?
If you touch any of this, please be prepared for the future:
- Use hyperthreads (exists in KVM today)
- Add NUMA topology for guests and hosts
- Add cache info (for VMs too)
- Add the number of DIMMs, especially for VMs, for the upcoming memory hot-plug feature
- Be ready for CPU/memory hot plug for the hosts and the guests
Thanks, Simon.
_______________________________________________
Engine-devel mailing list
Engine-devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/engine-devel

On 04/30/2012 02:43 PM, Dor Laor wrote:
I would go for 4 or 2 Current CPU topology for the hosts is a new commit, thus it may be allowed to change it now since no one is using it yet. This works in favour of 2. In any case only 3 discloses all the information in all possible cases.
Thoughts?
If you touch any of this, please be prepared for the future: - Use hyperthreads (exists in kvm today) - Add numa topology for guest and the hosts - Add Cache info (for VMs too) - Add number of DIMMs, especially for VMs, for the upcoming memory hot plug feature. - BE ready for cpu/cpu/memory hotplug for the hosts and the guests.
Maybe there will be a time when we need this, but for now we just need the total number of cores. This is important for capacity planning: these tools simply add up all the fractional core usages for all VMs in a cluster, and that sum has to be smaller than the total number of cores in the cluster. So it's important to know the total number of cores per host.

For a VM, the number of cores is sockets*cores. For a host, it is just "cores". If you used sockets*cores for a host, you'd get too many. This isn't obvious, and in my view it's a bug.

Simon, I don't really see this as an API breaker, by the way. The number of cores for a host was always cores*sockets even in 3.0, just that sockets was always equal to 1 (implicitly).

Regards,
Geert
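Geert's capacity-planning check can be sketched as follows. The function name and the sample numbers are made up for illustration; the point is only that the check needs the host-wide total core count, regardless of how the topology element encodes it:

```python
def cluster_has_capacity(vm_core_usages, host_total_cores):
    """Sum the fractional per-VM core usage across the cluster and
    compare it with the total number of physical cores on all hosts."""
    used = sum(vm_core_usages)        # e.g. fractions of a core per VM
    total = sum(host_total_cores)     # host-wide totals, one per host
    return used <= total

# Two 32-core hosts, three VMs using 10.5, 12.0 and 8.25 cores:
print(cluster_has_capacity([10.5, 12.0, 8.25], [32, 32]))  # True
```

If a tool naively multiplied cores*sockets for the hosts under the current host semantics, `total` would be four times too large and the check would pass when it shouldn't.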

----- Original Message -----
From: "Geert Jansen" <gjansen@redhat.com>
To: dlaor@redhat.com, "Simon Grinberg" <simon@redhat.com>
Cc: engine-devel@ovirt.org
Sent: Monday, April 30, 2012 5:19:41 PM
Subject: Re: [Engine-devel] CPU topology in the API
On 04/30/2012 02:43 PM, Dor Laor wrote:
I would go for 4 or 2 Current CPU topology for the hosts is a new commit, thus it may be allowed to change it now since no one is using it yet. This works in favour of 2. In any case only 3 discloses all the information in all possible cases.
Thoughts?
If you touch any of this, please be prepared for the future: - Use hyperthreads (exists in kvm today) - Add numa topology for guest and the hosts - Add Cache info (for VMs too) - Add number of DIMMs, especially for VMs, for the upcoming memory hot plug feature. - BE ready for cpu/cpu/memory hotplug for the hosts and the guests.
Maybe there will be a time where we need this, but for now we just need the total number of cores. This is important for capacity planning. For CPU, these capacity planning tools simply add up all the fractional core usages for all VMs in a cluster, and then it has to be smaller than the number of total cores in the cluster. So it's important to know the total number of cores per host.
For a VM, the number of cores is sockets*cores. For a host, it is just "cores". If you would use socket*cores for a host, you'll get too many. This isn't obvious and in my view it's a bug.
Simon, I don't really see this as an API breaker by the way. The number of cores for a host was always cores*sockets even on 3.0, just that sockets was always equal to 1 (implicitly).
Yes. However, before the fix the API only reported cores (while sockets, as you mentioned, were implicitly 1), so there was no confusion. Now it reports both cores and sockets, but the meaning differs from the VM topology even though it looks the same. So for VMs the tools will have to calculate cores*sockets, while for hosts they'll have to ignore the sockets parameter.
Regards, Geert

On 04/30/2012 05:19 PM, Geert Jansen wrote:
If you touch any of this, please be prepared for the future: - Use hyperthreads (exists in kvm today) - Add numa topology for guest and the hosts - Add Cache info (for VMs too) - Add number of DIMMs, especially for VMs, for the upcoming memory hot plug feature. - BE ready for cpu/cpu/memory hotplug for the hosts and the guests.
Maybe there will be a time where we need this, but for now we just need the total number of cores. This is important for capacity planning. For CPU, these capacity planning tools simply add up all the fractional core usages for all VMs in a cluster, and then it has to be smaller than the number of total cores in the cluster. So it's important to know the total number of cores per host.
For a VM, the number of cores is sockets*cores. For a host, it is just "cores". If you would use socket*cores for a host, you'll get too many. This isn't obvious and in my view it's a bug.
Simon, I don't really see this as an API breaker by the way. The number of cores for a host was always cores*sockets even on 3.0, just that sockets was always equal to 1 (implicitly).
Apologies for replying to my own email, but based on this, my vote would be for Simon's option #2 (make <host> look like <vm>). I am not so concerned about API breakage: in 3.0, a properly written client would, in my view, get the "sockets" attribute of the CPU topology and set it to "1" if it is not present. If ISVs currently do not do this, it's an easy fix for them. Regarding CPUs with different core counts, I think this is very rare and not something we should worry about initially. Regards, Geert
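The backward-compatible client behaviour Geert describes (treating a missing sockets attribute as 1) might look like the sketch below. This assumes option #2 is adopted, i.e. host topology reads like VM topology:

```python
import xml.etree.ElementTree as ET

def host_total_cores(topology_xml: str) -> int:
    """Under option #2, host topology means cores-per-socket * sockets;
    a 3.0-era document with no sockets attribute implies one socket."""
    elem = ET.fromstring(topology_xml)
    cores = int(elem.get("cores"))
    sockets = int(elem.get("sockets", "1"))  # absent in 3.0 -> assume 1
    return cores * sockets

print(host_total_cores('<topology cores="8"/>'))              # 8 (3.0 style)
print(host_total_cores('<topology cores="8" sockets="4"/>'))  # 32
```

A client written this way keeps working against both a 3.0 engine and an engine with the proposed change.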

On 04/30/2012 05:19 PM, Geert Jansen wrote:
On 04/30/2012 02:43 PM, Dor Laor wrote:
I would go for 4 or 2 Current CPU topology for the hosts is a new commit, thus it may be allowed to change it now since no one is using it yet. This works in favour of 2. In any case only 3 discloses all the information in all possible cases.
Thoughts?
If you touch any of this, please be prepared for the future: - Use hyperthreads (exists in kvm today) - Add numa topology for guest and the hosts - Add Cache info (for VMs too) - Add number of DIMMs, especially for VMs, for the upcoming memory hot plug feature. - BE ready for cpu/cpu/memory hotplug for the hosts and the guests.
Maybe there will be a time when we need this, but for now we just need the total number of cores. This is important for capacity planning.
Other products and KVM (with the exception of memory hot plug, which is not yet upstream) do support all of the above. If someone changes the API, it's time to add the missing features, or at least add a placeholder for them.
For CPU, these capacity planning tools simply add up all the fractional core usages for all VMs in a cluster, and that sum has to be smaller than the total number of cores in the cluster. So it's important to know the total number of cores per host.
For a VM, the number of cores is sockets*cores. For a host, it is just "cores". If you would use socket*cores for a host, you'll get too many. This isn't obvious and in my view it's a bug.
Simon, I don't really see this as an API breaker by the way. The number of cores for a host was always cores*sockets even on 3.0, just that sockets was always equal to 1 (implicitly).
Regards, Geert
participants (3)
- Dor Laor
- Geert Jansen
- Simon Grinberg