[ovirt-users] Question about Huge Pages
Michal Skrivanek
mskrivan at redhat.com
Fri Apr 21 08:44:04 UTC 2017
> On 20 Apr 2017, at 17:39, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
>
> On Thu, Apr 20, 2017 at 10:35 AM, Michal Skrivanek <mskrivan at redhat.com> wrote:
>
>> On 19 Apr 2017, at 16:28, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
>>
>> On Wed, Apr 19, 2017 at 3:44 PM, Martin Polednik <mpolednik at redhat.com> wrote:
>>
>>
>> If you are using recent CentOS (or I guess Fedora), there isn't any
>> extra setup required. Just create the custom property:
>>
>> Both my engine and my hosts are CentOS 7.3 + updates
>
> that’s good
>
>>
>>
>> On the host where engine is running:
>>
>> $ engine-config -s "UserDefinedVMProperties=hugepages=^.*$"
>> $ service ovirt-engine restart
>>
>> and you should see 'hugepages' when editing a VM under custom properties.
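>>
>> To double-check that the property was registered, something like
>>
>> $ engine-config -g UserDefinedVMProperties
>>
>> should print the hugepages regex back.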
>>
>> So no vdsm hook at all to install?
>
> today you still need the hook.
>
>>
>>
>> Set the number to (desired memory / 2048) and you're good to go. The
>> VM will run with its memory backed by hugepages.
>>
>> As in sysctl.conf? So if I want 4 GB of Huge Pages I have to set 2048?
>
> yes. there might be some
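> (quick arithmetic check: 4 GB = 4096 MB, and 4096 MB / 2 MB per page = 2048 pages)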
>
>>
>>
>> If you need
>> hugepages even inside the VM, do whatever you would do on a physical
>> host.
>>
>> mpolednik
>>
>>
>> yes, the main goal is to have Huge Pages inside the guest, so that the Oracle RDBMS detects and uses them at startup
>
> yes, so if you do that via sysctl.conf on real HW, just do the same here, or modify the kernel cmdline.
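>
> a minimal sketch of the two usual guest-side options (values are examples):
>
> # via sysctl, e.g. in the guest's /etc/sysctl.conf:
> vm.nr_hugepages = 2048
>
> # or on the guest's kernel command line:
> hugepages=2048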
>
> Note that those are two separate things:
> the hook makes the QEMU process use hugepages-backed memory on the host - that improves performance of any VM
> then how it looks inside the guest is of no concern to oVirt; that's guest-side hugepages. You can enable/set them regardless of the previous step, which may be fine if you just want to expose the capability to some app - e.g. for testing that Oracle can work with hugepages in the guest.
> But you probably want Oracle to both see hugepages and actually use them - then you need to reserve them on the host for the qemu process and also, inside the guest, reserve them for Oracle. I.e. you need to add a “buffer” on the host side to accommodate the non-hugepages parts of the guest; e.g. on a 24GB host you can reserve 20GB of hugepages for VMs to use, then run a VM with 20GB of memory and reserve 16GB of hugepages inside that guest for Oracle to use.
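> (in page counts for that example: 20GB on the host = 10240 x 2MB pages, and 16GB inside the guest = 8192 x 2MB pages)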
>
> Thanks,
> michal
>
>>
>> Gianluca
>
>
> I'm running some tests right now.
> Steps done:
> - configure huge pages on hypervisor
>
> [root at ractor ~]# cat /etc/sysctl.d/huge-pages.conf
> # 20/04/2017 8Gb
> vm.nr_hugepages = 4096
> [root at ractor ~]#
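>
> The same value can also be applied at runtime, though on a long-running host the allocation may fail or stay partial if memory is already fragmented (a reboot guarantees the pages):
>
> [root at ractor ~]# sysctl vm.nr_hugepages=4096
> [root at ractor ~]# grep HugePages_Total /proc/meminfo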
>
> I rebooted the host (in the meantime I also updated it to the latest 4.1.1 packages, with vdsm-4.19.10.1-1.el7.centos.x86_64 and vdsm-hook-hugepages-4.19.10.1-1.el7.centos.noarch)
> I also set the "transparent_hugepage=never" boot parameter because I know THP can conflict with static Huge Pages
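>
> To verify THP is really off after the reboot, the brackets should sit on "never":
>
> [root at ractor ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
> always madvise [never]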
>
> So the situation is:
>
> [root at ractor ~]# cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-3.10.0-514.16.1.el7.x86_64 root=/dev/mapper/centos-root ro rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8 transparent_hugepage=never
> [root at ractor ~]#
>
> [root at ractor ~]# cat /proc/meminfo | grep -i huge
> AnonHugePages: 0 kB
> HugePages_Total: 4096
> HugePages_Free: 4096
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> [root at ractor ~]#
>
> I edited a pre-existing CentOS 6 VM, giving it 8 GB of RAM and setting 2048 pages (4 GB) in the hugepages custom property.
>
> When I power it on, I get this addition to the qemu-kvm command line, as expected:
>
> -mem-path /dev/hugepages/libvirt/qemu
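>
> One way to confirm it on the host is to grep the running process, e.g.:
>
> [root at ractor ~]# ps -ef | grep '[q]emu-kvm' | grep -o 'mem-path [^ ]*'
> mem-path /dev/hugepages/libvirt/qemu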
>
> I noticed that now on the host I have...
>
> [root at ractor vdsm]# cat /proc/meminfo | grep -i huge
> AnonHugePages: 0 kB
> HugePages_Total: 6144
> HugePages_Free: 2048
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> [root at ractor vdsm]#
>
> So apparently it allocated 2048 new huge pages...
> Does this mean that I actually don't have to pre-allocate huge pages on the host at all, and that it will increase them on demand (though not remove them afterwards, I suppose)?
>
> Anyway, the count doesn't seem correct... it looks like a total of 4096 pages are in use/locked (HugePages_Total - HugePages_Free + HugePages_Rsvd),
> while it should be 2048...
>
> [root at ractor vdsm]# ll /dev/hugepages/libvirt/qemu/
> total 0
> [root at ractor vdsm]# ll /hugetlbfs/libvirt/qemu/
> total 0
> [root at ractor vdsm]#
>
> If I power off the VM
>
> [root at ractor vdsm]# cat /proc/meminfo | grep -i huge
> AnonHugePages: 0 kB
> HugePages_Total: 4096
> HugePages_Free: 4096
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
> [root at ractor vdsm]#
>
> Does this mean that in CentOS 7.3 Huge Pages can be reclaimed...?
it tries to… well, as I said, the hook is being improved right now, and in 4.2 it will likely be more consumable.
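As far as I can tell, the accounting above does add up: with -mem-path, qemu backs the whole guest RAM from hugetlbfs, so an 8 GB VM / 2 MB per page = 4096 pages in use, which is exactly the 6144 total minus 2048 free you saw. The hook added your 2048-page property value on top of the pre-allocated 4096, but the VM then consumed pages for its full memory size.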
>
> Nevertheless, when I configure huge pages in the guest, it seems to work as expected:
>
> [root at dbtest ~]# cat /proc/meminfo | grep -i huge
> AnonHugePages: 0 kB
> HugePages_Total: 2048
> HugePages_Free: 2048
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
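>
> (For reference, a typical guest-side setup matching the numbers below would be something like:
>
> # guest /etc/sysctl.conf
> vm.nr_hugepages = 2048
>
> # guest /etc/security/limits.conf - allow the oracle user to lock the SGA; 4194304 KB = the 4096M memlock limit shown below
> oracle soft memlock 4194304
> oracle hard memlock 4194304
> )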
>
> Going into the Oracle DB initialization, after configuring its dedicated memory (SGA) at 2354 MB, I get this confirmation in its log file:
Yes, but that would always work even without setting things up on the host. The “only” difference would be the actual performance.
>
> Thu Apr 20 17:16:27 2017
> Per process system memlock (soft) limit = 4096M
> Thu Apr 20 17:16:27 2017
> Expected per process system memlock (soft) limit to lock
> SHARED GLOBAL AREA (SGA) into memory: 2354M
> Thu Apr 20 17:16:27 2017
> Available system pagesizes:
> 4K, 2048K
> Thu Apr 20 17:16:27 2017
> Supported system pagesize(s):
> Thu Apr 20 17:16:27 2017
> PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
> Thu Apr 20 17:16:27 2017
> 4K Configured 3 3 NONE
> Thu Apr 20 17:16:27 2017
> 2048K 2048 1177 1177 NONE
>
> Gianluca
>