<html><head><meta http-equiv="Content-Type" content="text/html charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><br class=""><div><blockquote type="cite" class=""><div class="">On 10 Dec 2015, at 16:36, Yaniv Kaul <<a href="mailto:ykaul@redhat.com" class="">ykaul@redhat.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class=""><div class="gmail_extra"><div class="gmail_quote">On Thu, Dec 10, 2015 at 5:07 PM, Martin Polednik <span dir="ltr" class=""><<a href="mailto:mpolednik@redhat.com" target="_blank" class="">mpolednik@redhat.com</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello developers,<br class="">
<br class="">
tl;dr version:<br class="">
* deprecate report_host_threads_as_cores<br class="">
* remove cpuSockets, use len(numaNodes.keys())<br class="">
* report threadsPerCore for ppc64le / report total number of threads<br class="">
for ppc64le<br class="">
* work on our naming issues<br class="">
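To make the cpuSockets point concrete, here is a minimal sketch assuming the numaNodes report is a dict keyed by cell id (the dict shape and key names are illustrative, not the exact VDSM structure):

```python
# Illustrative shape of a NUMA report: cell id -> cell description.
# This is a sketch, not the actual VDSM data structure.
numaNodes = {
    0: {"cpus": [0, 1, 2, 3]},
    1: {"cpus": [4, 5, 6, 7]},
}

# The socket count is simply the number of NUMA cells reported,
# i.e. the number of keys, so a separate cpuSockets key is redundant.
cpuSockets = len(numaNodes.keys())
print(cpuSockets)  # 2
```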
<br class="">
I've been going over our capabilities reporting code in VDSM due to<br class="">
specific threading requirements on the ppc64le platform and noticed a few<br class="">
issues. Before trying to fix something that "works", I'm sending this<br class="">
mail to start a discussion regarding current and future state of the<br class="">
code.<br class="">
<br class="">
First, the terminology. What we call cpu sockets, cores and threads are in fact NUMA cells, the sum of cores present across NUMA<br class="">
nodes, and likewise for threads. I'd like to see the code moving in a<br class="">
direction that is correct in this sense.<br class=""></blockquote><div class=""><br class=""></div><div class="">Note that I think users are more familiar with sockets-cores-threads than NUMA cells, terminology-wise.</div></div></div></div></div></blockquote><div><br class=""></div></div><div>we do report numa separately today, and we should keep doing that. I consider it another level of detail/complexity which many users do not care about. </div><div>So we should keep both</div><div><br class=""><blockquote type="cite" class=""><div class=""><div dir="ltr" class=""><div class="gmail_extra"><div class="gmail_quote"><div class=""> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br class="">
More important are the actual calculations. I believe we should draw<br class="">
an uncrossable line between cores and threads and not interfere with<br class="">
it, at least on VDSM's side. That would mean deprecating the<br class="">
report_host_threads_as_cores option. The algorithm currently used<br class="">
does calculate the NUMA cores and NUMA threads correctly, provided<br class="">
there are no offline CPUs - which is most likely fine. We don't<br class="">
have to report the actual number of sockets though, as it is already<br class="">
reported in the numa* keys.<br class=""></blockquote><div class=""><br class=""></div><div class="">There is a reason for the report_host_threads_as_cores option. I don't remember it right now, but it had to do with some limitation of some OS, or licensing, or something.</div><div class="">I don't think we should deprecate it.</div></div></div></div></div></blockquote><div><br class=""></div><div>the idea was to remove that option from VDSM conf (as it’s cumbersome to use), and rather report all relevant information so the engine can decide later on whether to count it one way or another</div><div>Today it’s used as a simple “core multiplier”: if your workload runs “good enough” in parallel on 2 threads within one core, we just consider each thread an additional available “cpu”. For some workloads where this assumption does not hold, and also for licensing or any other reason, you can disable it and see “half” of the cpus on x86 despite having HT enabled in the BIOS.</div><div><br class=""></div>On PPC this is more tricky, as Martin says below (threads are not able to run multiple VMs simultaneously) - so we need to push that decision from vdsm up the chain.</div><div><br class=""><blockquote type="cite" class=""><div class=""><div dir="ltr" class=""><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
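Pushing that decision up the chain could look roughly like this sketch; the caps keys and the helper are made up for illustration and are not the real VDSM/engine API:

```python
# The host reports raw topology only; whether a hardware thread counts
# as a schedulable "cpu" becomes an engine-side policy decision.
# The key names below are hypothetical, not the actual VDSM report.
caps = {"cores": 8, "threadsPerCore": 2}

def effective_cpus(caps, threads_as_cores):
    """Hypothetical engine-side helper: count every hardware thread
    as a CPU, or only whole cores (e.g. for licensing reasons)."""
    if threads_as_cores:
        return caps["cores"] * caps["threadsPerCore"]
    return caps["cores"]

print(effective_cpus(caps, True))   # 16
print(effective_cpus(caps, False))  # 8
```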
<br class="">
It does fail to provide us with information that can be used in a<br class="">
ppc64le environment, where for POWER8 we want to run the host without<br class="">
SMT while VMs would have multiple CPUs assigned. There are various<br class="">
configurations of so-called subcores in POWER8, where each CPU core<br class="">
can contain 1, 2 or 4 subcores. This configuration must be taken into<br class="">
consideration: given e.g. 160 threads overall, it is possible to run<br class="">
either 20 VMs in smt8 mode, 40 VMs in smt4 mode or 80 VMs in smt2<br class="">
mode. We have to report either the total number of threads OR just the<br class="">
threadsPerCore setting, so the users know how many "CPUs" should be<br class="">
assigned to machines for optimal performance.<br class=""></blockquote></div></div></div></div></blockquote><div><br class=""></div>x per y sounds best to me</div><div>but I think it’s even more complicated: if we consider offline CPUs (which we don’t do today), then the default picture on POWER8 currently looks like 20 cores in 4 numa cells, 8 threads per core. SMT is disabled altogether, so CPUs 1-7,9-15,… are offline. So should we report them or not? On x86 I would not do that, as they are administratively disabled and can’t be used; however on ppc since RHEL 7.2 they are dynamically enabled on demand (if the guest topology uses threads as well), so they should be reported as available (or “sort-of-available”:)</div><div><br class=""></div><div>still, I think we should go with simple “sockets, cores/socket, threads/core” numbers,</div><div>the rest needs to be computed or chosen from, based on an additional detailed report of NUMA topology and online/offline CPU status</div><div>perhaps with different behavior/capabilities on x86 and on power</div><div><br class=""><blockquote type="cite" class=""><div class=""><div dir="ltr" class=""><div class="gmail_extra"><div class="gmail_quote"><div class=""><br class=""></div><div class="">YAY... do we have a comparison of what libvirt knows / looks at (or does it ignore it altogether?)</div><div class="">Y.</div><div class=""> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
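The POWER8 SMT arithmetic discussed above works out as follows; this is just a worked sketch of the figures from the mail (160 threads, one core's worth of threads per VM):

```python
# 160 hardware threads overall; each VM gets one (sub)core's worth of
# threads, so the VM count per SMT mode is total / threads_per_vm.
total_threads = 160

for smt in (8, 4, 2):
    print("smt%d: %d VMs" % (smt, total_threads // smt))
# smt8: 20 VMs, smt4: 40 VMs, smt2: 80 VMs
```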
<br class="">
As always, I welcome any opinions regarding the proposed ideas. Also<br class="">
note that all of the changes can be done via deprecation to be fully<br class="">
backwards compatible - except for the ppc part.<br class="">
<br class="">
Regards,<br class="">
mpolednik<br class="">
_______________________________________________<br class="">
Devel mailing list<br class="">
<a href="mailto:Devel@ovirt.org" target="_blank" class="">Devel@ovirt.org</a><br class="">
<a href="http://lists.ovirt.org/mailman/listinfo/devel" rel="noreferrer" target="_blank" class="">http://lists.ovirt.org/mailman/listinfo/devel</a><br class="">
</blockquote></div><br class=""></div></div>
</div></blockquote></div><br class=""></body></html>