<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">Sorry for late reply, I had not time to give feedback until now</div><div class="gmail_quote"> </div><div class="gmail_quote">On Mon, Sep 18, 2017 at 10:33 PM, Arik Hadas <span dir="ltr"><<a href="mailto:ahadas@redhat.com" target="_blank">ahadas@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="gmail-">On Mon, Sep 18, 2017 at 10:50 PM, Martin Polednik <span dir="ltr"><<a href="mailto:mpoledni@redhat.com" target="_blank">mpoledni@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left:1px solid rgb(204,204,204)">The hugepages are no longer a hook, but part of the code base. They</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left:1px solid rgb(204,204,204)">
>> can be configured via the engine property `hugepages`, where the value
>> of the property is the size of the pages in KiB (1048576 = 1G, 2048 = 2M).
>
> Note that the question is about 4.1 and it doesn't seem like this change
> was backported to the 4.1 branch, right?

And in fact it seems I don't have this in the 4.1.5 engine:

# engine-config -l | grep -i huge
#

If it is ok for the upcoming 4.2/master, how am I supposed to use it? I would like to use hugepages at the VM level, not at the engine level.
Or do you mean that in 4.2, if I set the engine parameter named "hugepages" to 2M, a custom property will then automatically appear inside the VM configuration section, or where?
Any screenshot of this?

In the meantime I'm using the "old" style with the hooks I found here:
http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/noarch/
vdsm-hook-qemucmdline-4.19.31-1.el7.centos.noarch.rpm
and
vdsm-hook-hugepages-4.19.31-1.el7.centos.noarch.rpm

It works, but it seems not to be correctly integrated with what the host sees. An example:
on the hypervisor I set 90000 huge pages.

In 3 VMs I want to configure 34Gb of huge pages out of a total memory of 64Gb each, so I set 17408 in their "hugepages" custom property.
Before starting any VM, on the hypervisor I see:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   90000
HugePages_Free:    90000
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

When I start the first VM there is the first anomaly; it becomes:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:  107408
HugePages_Free:    74640
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

So apparently it allocates 17408 further huge pages, without using any of the 90000 it already has free.
But perhaps this is actually a bug in what /proc shows and not real usage (see below)?
Also, it seems it has allocated 64Gb, the entire size of the VM memory, and not only the 34Gb part...
I don't know if this is correct and expected... because eventually I could choose to increase the number of huge pages of the VM.
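Just to double-check my math with plain shell arithmetic on the numbers above (nothing here is oVirt-specific, only conversions at 2048 KiB per page):

# 34Gb per VM, expressed in 2048 KiB pages:
$ echo $(( 34 * 1024 * 1024 / 2048 ))
17408
# drop in HugePages_Free after starting vm1 (90000 free before, plus the 17408 added to the total):
$ echo $(( 90000 + 17408 - 74640 ))
32768
# those 32768 pages converted back to Gb:
$ echo $(( 32768 * 2048 / 1024 / 1024 ))
64

So the consumption really matches the full 64Gb of VM memory, not just the 34Gb part.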
Inside the VM vm1 itself the view seems correct:

[root@vm1 ~]# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   17408
HugePages_Free:    17408
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Note that if I run again on the host:

# sysctl -p /etc/sysctl.d/10-huge-pages.conf

it seems it adjusts itself, decreasing the total number of huge pages, which in theory should not be possible...?

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   90000
HugePages_Free:    57232
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Again it seems it has allocated 32768 huge pages, so 64Gb, which is the total memory of the VM.
I now start the second VM, vm2. At the hypervisor level I then have:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:  107408
HugePages_Free:    41872
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

So again an increment of 17408 huge pages in the total line and a new allocation of 64Gb of huge pages (total huge pages allocated: 32768 + 32768).

BTW, the free output on the host now shows:

# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   233105820    29194036      190460     1716580    29747272
Swap:       4194300           0     4194300

with "only" 29Gb free, and if I try to run the third VM, vm3, I in fact get the error message:

"
Error while executing action:

vm3:
- Cannot run VM. There is no host that satisfies current scheduling constraints. See below for details:
- The host ovirt1 did not satisfy internal filter Memory because its available memory is too low (33948 MB) to run the VM.
"
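For reference, my /etc/sysctl.d/10-huge-pages.conf only contains the two keys that sysctl -p prints further below; I'm reconstructing its exact contents from that output, so take the comments as my own reading:

# /etc/sysctl.d/10-huge-pages.conf
# let the kvm group (gid 36 on CentOS 7) use hugetlb shared memory
vm.hugetlb_shm_group = 36
# reserve 90000 x 2048 KiB pages (about 176Gb)
vm.nr_hugepages = 90000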
Again I run on the host:

# sysctl -p /etc/sysctl.d/10-huge-pages.conf

The memory situation on the host becomes:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   90000
HugePages_Free:    24464
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   197454740    64844616      190460     1717080    65398696
Swap:       4194300           0     4194300
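The numbers are at least self-consistent here: 90000 - 24464 = 65536 = 2 x 32768 pages, i.e. exactly two VMs fully backed with 64Gb of hugepages each.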
And I can now boot the third VM, vm3, with the memory output on the host becoming:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:  107408
HugePages_Free:     9104
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   233484996    28810096      190564     1721344    29365052
Swap:       4194300           0     4194300

If I "adjust" again with:

# sysctl -p /etc/sysctl.d/10-huge-pages.conf
vm.hugetlb_shm_group = 36
vm.nr_hugepages = 90000

I now get:

# cat /proc/meminfo | grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   98304
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:     8304
Hugepagesize:       2048 kB

# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   214838508    47453540      190564     1724388    48010936
Swap:       4194300           0     4194300

with a total of 98304 pages (the 90000 configured plus the 8304 in the "HugePages_Surp:" line, for which I don't know the exact meaning)...

Any comment?
Thanks,
Gianluca
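P.S. In case it helps anyone reproducing the hook-based setup: I exposed the "hugepages" custom property to the engine with something like the following (quoting from memory, so double-check the regex against the vdsm-hook-hugepages docs):

# on the engine host
engine-config -s "UserDefinedVMProperties=hugepages=^[0-9]+$" --cver=4.1
systemctl restart ovirt-engine

The per-VM value is then the number of 2048 KiB pages to reserve (17408 in my case).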