<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Apr 20, 2017 at 10:35 AM, Michal Skrivanek <span dir="ltr"><<a href="mailto:mskrivan@redhat.com" target="_blank">mskrivan@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word"><br><div><span class="gmail-m_5930415786222383046gmail-"><blockquote type="cite"><div>On 19 Apr 2017, at 16:28, Gianluca Cecchi <<a href="mailto:gianluca.cecchi@gmail.com" target="_blank">gianluca.cecchi@gmail.com</a>> wrote:</div><br class="gmail-m_5930415786222383046gmail-m_-6852186268654812432Apple-interchange-newline"><div><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Apr 19, 2017 at 3:44 PM, Martin Polednik <span dir="ltr"><<a href="mailto:mpolednik@redhat.com" target="_blank">mpolednik@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="gmail-m_5930415786222383046gmail-m_-6852186268654812432HOEnZb"><div class="gmail-m_5930415786222383046gmail-m_-6852186268654812432h5"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
</blockquote>
<br></div></div>
If you are using recent CentOS (or I guess Fedora), there isn't any<br>
extra setup required. Just create the custom property:<br></blockquote><div><br></div><div>Both my engine and my hosts are CentOS 7.3 + updates</div></div></div></div></div></blockquote><div><br></div></span>that’s good</div><div><span class="gmail-m_5930415786222383046gmail-"><br><blockquote type="cite"><div><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
On the host where the engine is running:<br>
<br>
$ engine-config -s "UserDefinedVMProperties=hugep<wbr>ages=^.*$"<br>
$ service ovirt-engine restart<br>
<br>
and you should see 'hugepages' when editing a VM under custom properties.<br></blockquote><div><br></div><div>So no vdsm hook at all to install?</div></div></div></div></div></blockquote><div><br></div></span>today you still need the hook.</div><div><span class="gmail-m_5930415786222383046gmail-"><br><blockquote type="cite"><div><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Set the number to (desired memory in KiB / 2048) and you're good to go. The<br>
VM will run with its memory backed by hugepages. </blockquote><div><br></div><div>As in sysctl.conf? So that if I want 4GB of Huge Pages I have to set 2048?</div></div></div></div></div></blockquote><div><br></div></span>yes. there might be some </div><div><span class="gmail-m_5930415786222383046gmail-"><br><blockquote type="cite"><div><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">If you need<br>
hugepages even inside the VM, do whatever you would do on a physical<br>
host.<br>
<br>
mpolednik<div class="gmail-m_5930415786222383046gmail-m_-6852186268654812432HOEnZb"><div class="gmail-m_5930415786222383046gmail-m_-6852186268654812432h5"><br></div></div></blockquote><div><br></div><div>yes, the main subject is to have Huge Pages inside the guest, so that the Oracle RDBMS detects and uses them at startup</div></div></div></div></div></blockquote><div><br></div></span>yes, so if you do that via sysctl.conf on real HW, just do the same here, or modify the kernel cmdline.</div><div><br></div><div>Note that those are two separate things:</div><div>the hook makes the QEMU process use hugepages-backed memory on the host - that improves the performance of any VM</div><div>how it looks inside the guest is then of no concern to oVirt; that is guest-side hugepages. You can enable/set them regardless of the previous step, which may be fine if you just want to expose the capability to some app - e.g. to test that Oracle can work with hugepages in the guest.</div><div>But you probably want Oracle both to see hugepages and to actually use them - then you need to reserve them on the host for the qemu process and then, inside the guest, reserve them for Oracle. I.e. you need to add a “buffer” on the host side to accommodate the non-hugepages parts of the guest: e.g. on a 24GB host you can reserve 20GB of hugepages for VMs to use, then run a VM with 20GB of memory, reserving 16GB of hugepages inside the guest for Oracle to use.</div><div><br></div><div>Thanks,</div><div>michal</div><div><br><blockquote type="cite"><div><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><br></div><div>Gianluca </div></div></div></div></div></blockquote></div><br></div></blockquote>
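<div><br></div><div>OK, so if I read your explanation correctly, the host-side part boils down to something like the sketch below (the 20GB / 10240-page numbers simply follow your 24GB example, and restarting vdsmd after installing the hook is my assumption, not something you stated):</div><div><br></div><div><div># on each host: install the hugepages hook (restarting vdsm afterwards is my assumption)</div><div>yum install -y vdsm-hook-hugepages</div><div>systemctl restart vdsmd</div><div><br></div><div># /etc/sysctl.d/huge-pages.conf - reserve 20GB of 2MB hugepages for VMs,</div><div># keeping the remaining 4GB of the 24GB host as a buffer for the non-hugepages parts of the guests</div><div># 20GB = 20971520 KiB / 2048 KiB per page = 10240 pages</div><div>vm.nr_hugepages = 10240</div><div><br></div><div># apply (a reboot may be needed if host memory is already too fragmented) and verify</div><div>sysctl -p /etc/sysctl.d/huge-pages.conf</div><div>grep -i huge /proc/meminfo</div></div><div><br></div><div>and then, following the same formula, a 20GB VM would get "hugepages=10240" in its custom properties.</div>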
<div><br></div><div>I'm running some tests right now. </div><div>Steps done: </div><div>- configure huge pages on the hypervisor</div><div><br></div><div><div>[root@ractor ~]# cat /etc/sysctl.d/huge-pages.conf </div><div># 20/04/2017 8Gb</div><div>vm.nr_hugepages = 4096</div><div>[root@ractor ~]# </div></div><div><br></div><div>I rebooted the host (in the meantime I also updated it to the latest 4.1.1 packages, with vdsm-4.19.10.1-1.el7.centos.x86_64 and vdsm-hook-hugepages-4.19.10.1-1.el7.centos.noarch)</div><div>I also set the "transparent_hugepage=never" boot parameter because I know transparent huge pages conflict with static Huge Pages</div><div><br></div><div>So the situation is:</div><div><br></div><div><div>[root@ractor ~]# cat /proc/cmdline </div><div>BOOT_IMAGE=/vmlinuz-3.10.0-514.16.1.el7.x86_64 root=/dev/mapper/centos-root ro rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8 transparent_hugepage=never</div><div>[root@ractor ~]# </div></div><div><br></div><div><div>[root@ractor ~]# cat /proc/meminfo | grep -i huge</div><div>AnonHugePages: 0 kB</div><div>HugePages_Total: 4096</div><div>HugePages_Free: 4096</div><div>HugePages_Rsvd: 0</div><div>HugePages_Surp: 0</div><div>Hugepagesize: 2048 kB</div><div>[root@ractor ~]# </div></div><div><br></div><div>I edited a pre-existing CentOS 6 VM, setting 8GB of RAM for it and 2048 pages (4GB) in the hugepages custom property.</div><div><br></div><div>When I power it on I get this addition to the qemu-kvm command line, as expected:</div><div><br></div><div>-mem-path /dev/hugepages/libvirt/qemu<br></div><div><br></div><div>I noticed that on the host I now have...</div><div><br></div><div><div>[root@ractor vdsm]# cat /proc/meminfo | grep -i huge</div><div>AnonHugePages: 0 kB</div><div>HugePages_Total: 6144</div><div>HugePages_Free: 2048</div><div>HugePages_Rsvd: 0</div><div>HugePages_Surp: 0</div><div>Hugepagesize: 2048 kB</div><div>[root@ractor vdsm]# </div></div><div><br></div><div>So apparently it did allocate 2048 new huge pages...</div></div></div><div class="gmail_extra"><div>Does this mean that I actually don't have to pre-allocate huge pages on the host at all, and that it will increase them as needed (but not be able to remove them afterwards, I suppose)?</div><div><br></div><div>Anyway the count doesn't seem correct... because it seems that a total of 4096 pages are in use/locked... 
(HugePages_Total - HugePages_Free + HugePages_Rsvd)</div><div>while they should be 2048...</div><div><br></div><div><div>[root@ractor vdsm]# ll /dev/hugepages/libvirt/qemu/</div><div>total 0</div><div>[root@ractor vdsm]# ll /hugetlbfs/libvirt/qemu/</div><div>total 0</div><div>[root@ractor vdsm]# </div></div><div><br></div><div>If I power off the VM:</div><div><br></div><div><div>[root@ractor vdsm]# cat /proc/meminfo | grep -i huge</div><div>AnonHugePages: 0 kB</div><div>HugePages_Total: 4096</div><div>HugePages_Free: 4096</div><div>HugePages_Rsvd: 0</div><div>HugePages_Surp: 0</div><div>Hugepagesize: 2048 kB</div><div>[root@ractor vdsm]# </div></div><div><br></div><div>Does this mean that in CentOS 7.3 Huge Pages can be reclaimed...?</div><div><br></div><div>Nevertheless, when I configure huge pages inside the guest it seems to work as expected:</div><div><br></div><div><div>[root@dbtest ~]# cat /proc/meminfo | grep -i huge</div><div>AnonHugePages: 0 kB</div><div>HugePages_Total: 2048</div><div>HugePages_Free: 2048</div><div>HugePages_Rsvd: 0</div><div>HugePages_Surp: 0</div><div>Hugepagesize: 2048 kB</div><div><br></div></div><div>Moving on to Oracle DB initialization, after configuring its dedicated memory (SGA) to 2354MB, I get this confirmation in its log file:</div><div><br></div><div><div>Thu Apr 20 17:16:27 2017</div><div> Per process system memlock (soft) limit = 4096M</div><div>Thu Apr 20 17:16:27 2017</div><div> Expected per process system memlock (soft) limit to lock</div><div> SHARED GLOBAL AREA (SGA) into memory: 2354M</div><div>Thu Apr 20 17:16:27 2017</div><div> Available system pagesizes:</div><div> 4K, 2048K </div><div>Thu Apr 20 17:16:27 2017</div><div> Supported system pagesize(s):</div><div>Thu Apr 20 17:16:27 2017</div><div> <b>PAGESIZE</b> AVAILABLE_PAGES EXPECTED_PAGES <b>ALLOCATED_PAGES</b> ERROR(s)</div><div>Thu Apr 20 17:16:27 2017</div><div> 4K Configured 3 3 NONE</div><div>Thu Apr 20 17:16:27 2017</div><div> <b>2048K </b> 2048 1177 <b>1177</b> NONE</div></div><div><br></div><div>Gianluca</div><div><br></div></div></div>
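<div><br></div><div>P.S. for anyone wanting to reproduce the guest side, my setup boils down to roughly the following sketch (the memlock values simply match the 4096M limit shown in the log above; the oracle user name and the limits.d file name are assumptions, adapt them to your own installation):</div><div><br></div><div><div># inside the guest: /etc/sysctl.d/oracle-hugepages.conf</div><div># 2048 pages x 2MB = 4GB, enough to hold the 2354MB SGA</div><div>vm.nr_hugepages = 2048</div><div><br></div><div># inside the guest: /etc/security/limits.d/99-oracle.conf</div><div># let the oracle user lock the SGA in memory (values in KB, 4194304 KB = 4096M)</div><div>oracle soft memlock 4194304</div><div>oracle hard memlock 4194304</div><div><br></div><div># reboot the guest (or apply with sysctl and log in again), then verify before starting the DB</div><div>grep -i huge /proc/meminfo</div><div>su - oracle -c "ulimit -l"</div></div>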