[ovirt-users] Huge pages in guest with newer oVirt versions

Gianluca Cecchi gianluca.cecchi at gmail.com
Thu Sep 28 14:18:52 UTC 2017


Sorry for the late reply, I did not have time to give feedback until now.

On Mon, Sep 18, 2017 at 10:33 PM, Arik Hadas <ahadas at redhat.com> wrote:

>
>
> On Mon, Sep 18, 2017 at 10:50 PM, Martin Polednik <mpoledni at redhat.com>
> wrote:
>
>> The hugepages are no longer a hook, but part of the code base. They
>> can be configured via engine property `hugepages`, where the value of
>> property is size of the pages in KiB (1048576 = 1G, 2048 = 2M).
>>
>
> Note that the question is about 4.1 and it doesn't seem like this change
> was backported to the 4.1 branch, right?
>

And in fact it seems I don't have this in my 4.1.5 engine:

# engine-config -l | grep -i huge
#

In case it is OK for the upcoming 4.2/master, how am I supposed to use it? I
would like to use huge pages at the VM level, not at the engine level.
Or do you mean that in 4.2, if I set the engine parameter named "hugepages"
and specify 2M, a custom property will then automatically appear inside the
VM configuration section, or where?
Any screenshot of this?
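Just to check my understanding: do you mean something like this (only my
guess for 4.2, I have not tried it), i.e. in the Edit VM dialog, under
Custom Properties, selecting "hugepages" and putting the page size in KiB
as the value?

  hugepages = 2048       (2M pages, going by the KiB values quoted above)
  hugepages = 1048576    (1G pages)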

In the meantime I'm using the "old" style with the hooks I found here:
http://resources.ovirt.org/pub/ovirt-4.1/rpm/el7/noarch/vdsm-hook-qemucmdline-4.19.31-1.el7.centos.noarch.rpm
and
vdsm-hook-hugepages-4.19.31-1.el7.centos.noarch.rpm
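For reference, this is roughly how I set it up (from memory, the exact
regex/options may differ from what the hook README suggests):

on the host:
# yum install vdsm-hook-qemucmdline vdsm-hook-hugepages

on the engine:
# engine-config -s "UserDefinedVMProperties=hugepages=^[0-9]+$"
# systemctl restart ovirt-engine

and then the number of 2M pages to reserve goes into the "hugepages" custom
property of each VM.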

It works, but it doesn't seem to be correctly integrated with what the host
sees... An example:
On the hypervisor I set 90000 huge pages.

In 3 VMs I want to configure 34 GiB of huge pages out of a total memory of
64 GiB, so I set 17408 in their hugepages custom property.
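(For 2M pages the math should be: 34 GiB = 34 * 1024 / 2 = 17408 pages,
while the full 64 GiB would be 64 * 1024 / 2 = 32768 pages.)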
Before starting any VM, on the hypervisor I see:

# cat /proc/meminfo |grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   90000
HugePages_Free:    90000
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

When I start the first VM there is a first anomaly. The output becomes:
# cat /proc/meminfo |grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   107408
HugePages_Free:    74640
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

So apparently it allocates 17408 further huge pages, instead of using part
of the 90000 it already has free.
But perhaps this is actually a bug in what /proc shows rather than real
usage (see below)?
Also, it seems it has allocated 64 GiB, the entire size of the VM memory,
and not only the 34 GiB part...
I don't know whether this is correct and expected... because I could
eventually choose to increase the number of huge pages of the VM.
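The numbers would be consistent with the hook simply adding the requested
pages on top of the current pool, and qemu then backing the whole 64 GiB
with huge pages:

  90000 + 17408 = 107408 (new HugePages_Total)
  107408 - 74640 = 32768 pages in use = 32768 * 2 MiB = 64 GiB

i.e. something like this pre-start step (only my own sketch of the
behaviour, not the actual hook code, which I have not checked):

  pages=17408                                   # value of the VM hugepages custom property
  cur=$(cat /proc/sys/vm/nr_hugepages)          # current pool size (90000 here)
  echo $((cur + pages)) > /proc/sys/vm/nr_hugepages   # becomes 107408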

Inside the VM vm1 itself the view seems correct:
[root@vm1 ~]# cat /proc/meminfo |grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   17408
HugePages_Free:    17408
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Note that if I run again on the host:
# sysctl -p /etc/sysctl.d/10-huge-pages.conf

it seems to adjust itself, decreasing the total number of huge pages, which
in theory should not be possible...?

# cat /proc/meminfo |grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   90000
HugePages_Free:    57232
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Again it seems it has allocated 32768 huge pages, i.e. 64 GiB, which is the
total memory of the VM.
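(Check: 90000 total - 57232 free = 32768 pages in use * 2 MiB = 64 GiB.)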
Now I start the second VM vm2:

At the hypervisor level I now have:

# cat /proc/meminfo |grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   107408
HugePages_Free:    41872
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

So again there is an increase of 17408 huge pages in the total line and a
new allocation of 64 GiB of huge pages (total huge pages in use:
32768 + 32768).
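Doing the math: 107408 total - 41872 free = 65536 pages in use = 2 * 32768,
so 64 GiB of huge pages for each of the two VMs.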

BTW the free output on the host now shows:
# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   233105820    29194036      190460     1716580    29747272
Swap:       4194300           0     4194300

with "only" 29Gb free and if I try to run the third VM vm3 I get in fact
the error message:

"
Error while executing action:

vm3:

   - Cannot run VM. There is no host that satisfies current scheduling
   constraints. See below for details:
   - The host ovirt1 did not satisfy internal filter Memory because its
   available memory is too low (33948 MB) to run the VM.

"
Again I run on the host:
# sysctl -p /etc/sysctl.d/10-huge-pages.conf

The memory situation on the host becomes:

# cat /proc/meminfo |grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   90000
HugePages_Free:    24464
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   197454740    64844616      190460     1717080    65398696
Swap:       4194300           0     4194300
[root@rhevora1 downloaded_from_upstream]#
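Again the huge page numbers add up: 90000 total - 24464 free = 65536 pages
still in use by vm1 and vm2 (2 * 32768), and free reports about 64 GB of
ordinary memory available again.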

And now I can boot the third VM vm3, with the memory output on the host becoming:

# cat /proc/meminfo |grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   107408
HugePages_Free:     9104
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   233484996    28810096      190564     1721344    29365052
Swap:       4194300           0     4194300
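So with the third VM running: 107408 total - 9104 free = 98304 pages in use
= 3 * 32768, i.e. 64 GiB of huge pages per VM.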


If I "adjust" with"
# sysctl -p /etc/sysctl.d/10-huge-pages.conf
vm.hugetlb_shm_group = 36
vm.nr_hugepages = 90000

I now get:
# cat /proc/meminfo |grep -i huge
AnonHugePages:         0 kB
HugePages_Total:   98304
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:     8304
Hugepagesize:       2048 kB

# free
              total        used        free      shared  buff/cache   available
Mem:      264016436   214838508    47453540      190564     1724388    48010936
Swap:       4194300           0     4194300

so the total is now my requested 90000 plus 8304 pages counted in the
"HugePages_Surp:" line, whose exact meaning I don't really know...
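If I read the kernel documentation correctly (just my understanding), the
surplus pages should be the pages currently in use above the configured
vm.nr_hugepages, to be returned to the kernel once they are freed; the
numbers would match: 3 * 32768 = 98304 pages in use and 98304 - 90000 = 8304
surplus.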

Any comment?
Thanks,
Gianluca

