On Sun, Nov 29, 2015 at 5:37 PM, Nir Soffer <nsoffer@redhat.com> wrote:
> On Sun, Nov 29, 2015 at 10:37 AM, Yaniv Kaul <ykaul@redhat.com> wrote:
>>
>> On Fri, Nov 27, 2015 at 6:55 PM, Francesco Romani <fromani@redhat.com> wrote:
>>>
>>> Using taskset, the ip command now takes a little longer to complete.
>>
>> Since we always use the same set of CPUs, I assume using a mask (for CPUs
>> 0 & 1, just use 0x3, as the man page suggests) might be a tiny fraction
>> faster to execute taskset with, instead of needing to translate the
>> numeric CPU list.
>
> Creating the string "0-<last cpu index>" is one line in vdsm. The code
> handling this in taskset is written in C, so the parsing time is
> practically zero. Even if it were non-zero, this code runs once per child
> process we spawn, so the cost is insignificant.

I think it's easier to just have it as a mask in a config item somewhere,
without needing to create or parse it anywhere. For us and for the user.
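For reference, taskset accepts either spelling: "taskset 0x3 ip link" and
"taskset -c 0,1 ip link" restrict the child to the same two CPUs. A minimal
sketch of the wrapping being discussed, in Python; taskset_wrap is a
hypothetical name, not vdsm's actual helper:

    import os
    import subprocess

    def taskset_wrap(cmd, cpu_list):
        # cpu_list is a "taskset -c" style string, e.g. "0,1" or "0-3".
        return ["taskset", "--cpu-list", cpu_list] + cmd

    # The "0-<last cpu index>" string Nir mentions is indeed one line:
    all_cpus = "0-%d" % (os.sysconf("SC_NPROCESSORS_ONLN") - 1)

    # Run the child on all online CPUs, regardless of the parent's affinity.
    subprocess.check_call(taskset_wrap(["ip", "link", "show"], all_cpus))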

>
>> However, the real concern is making sure CPUs 0 & 1 are not really too
>> busy with stuff (including interrupt handling, etc.)
>
> This code is used when we run a child process, to allow the child process
> to run on all cpus (in this case, cpu 0 and cpu 1). So I think there is no
> concern here.
>
> Vdsm itself is running by default on cpu 1, which should be less busy
> than cpu 0.

I assume those are cores, which on a multi-socket machine will probably all
be on the first socket. There's a good chance that the FC and/or network
cards will also bind their interrupts to core 0 & core 1 (check
/proc/interrupts) on the same socket. From my poor laptop (1s, 4c):

 42:    1487104   9329   4042   3598  IR-PCI-MSI 512000-edge   0000:00:1f.2
(my SATA controller)

 43:   14664923     34     18     13  IR-PCI-MSI 327680-edge   xhci_hcd
(my dock station connector)

 45:    6754579   4437   2501   2419  IR-PCI-MSI 32768-edge    i915
(GPU)

 47:     187409  11627   1235   1259  IR-PCI-MSI 2097152-edge  iwlwifi
(NIC, wifi)

Y.
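Totalling the per-CPU columns makes the cpu0 skew obvious at a glance; a
rough sketch, assuming the stock Linux /proc/interrupts layout:

    # Sum per-CPU interrupt counts; rows like ERR:/MIS: that lack
    # per-CPU columns are skipped by the isdigit() guard.
    with open("/proc/interrupts") as f:
        cpus = f.readline().split()        # header: CPU0 CPU1 ...
        totals = [0] * len(cpus)
        for line in f:
            for i, v in enumerate(line.split()[1:1 + len(cpus)]):
                if v.isdigit():
                    totals[i] += int(v)

    for name, total in zip(cpus, totals):
        print("%s: %d" % (name, total))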

> The user can modify this configuration on the host; I guess we need to
> expose this on the engine side (cluster setting?).
>
> Also, if vdsm is pinned to a certain cpu, should the user get a warning
> when trying to pin a vm to that cpu?
>
> Michal, what do you think?
>
> Nir
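For the warning Nir asks about, the host-side check could be a simple
intersection of the VM's requested pinning with vdsm's own affinity. A
hypothetical sketch: os.sched_getaffinity is real (Python 3.3+, Linux), but
the surrounding names are mine, not engine or vdsm code:

    import os

    def pinning_conflicts(vm_cpus, vdsm_pid):
        # vm_cpus: set of CPU indices the user wants to pin the VM to.
        # Returns the subset that collides with vdsm's own affinity.
        return vm_cpus & os.sched_getaffinity(vdsm_pid)

    # E.g. if vdsm is pinned to cpu 1, pinning_conflicts({1, 2}, pid)
    # returns {1}, and a warning could be shown before applying the pinning.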