On Sun, Jun 17, 2018 at 3:18 PM, Sharon Gratch <sgratch@redhat.com> wrote:
Hi Gianluca,
in 4.2 we added a new engine custom property "hugepages" to replace the
VDSM hook, where the value of the property is the size of the pages in KiB
(1G = 1048576, 2M = 2048).
This property can be set for any VM profile type (not just for high
performance VMs).
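For example, setting the VM custom property

    hugepages=1048576

backs the VM's memory with 1 GiB pages, while hugepages=2048 selects
2 MiB pages.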
Please see the following for more details (not all details are up to date,
but most are):
https://mpolednik.github.io/2017/06/26/hugepages-and-ovirt/
Regards,
Sharon
Hi Sharon,
thanks for the link, which I didn't know about.
There are a couple of things I'd like to comment on:
1) In my opinion it is in general a bad idea to change the meaning of an
already existing parameter/variable/setting.
In 4.1 the custom property "hugepages" meant the number of huge pages to
give to the guest; now the same property indicates the size of the pages.
It seems misleading and confusing.
Why not use the same name as the kernel command line parameter,
"hugepagesz"?
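For reference, on the kernel command line the two parameters keep
distinct meanings, e.g.:

    hugepagesz=2M hugepages=8200

where hugepagesz selects the page size and hugepages the number of pages
to pre-allocate at boot.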
2) From the article it seems that the whole guest memory is allocated
through huge pages.
Is this true?
In this case:
a) What happens if the guest's total memory exceeds the memory covered by
the huge pages of that size allocated on the hypervisor?
b) In general the need for huge pages inside the guest is driven by an
application running in it, not by the total RAM of the guest itself.
In my case, I have one or more Oracle instances running, and in general a
good tradeoff is to assign to the Oracle instances a total of about 40% of
the operating system RAM.
This RAM is associated with the Oracle instances through their so-called
SGA (System Global Area).
So it would be nice to be able to specify the number of huge pages to
allocate to the guest (in my case the sum of the SGAs of the Oracle RDBMS
instances I plan to run on it).
The rest is a waste of resources, especially if I plan to pre-allocate
huge pages on the host.
A good candidate for the name would again be the kernel command line
parameter "hugepages", which would return to its original meaning.
What about this?
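Just to illustrate the idea (the pairing is hypothetical, mirroring the
kernel parameters), two custom properties like

    hugepagesz=2048 hugepages=8192

would give the guest 8192 x 2 MiB = 16 GiB of huge pages, independently
of its total RAM.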
3) Tomorrow I'm going to test anyway with an updated oVirt 4.2.3
environment.
In the meantime, today I'm connected to a RHV 4.2 POC environment with the
latest updates and RHV-H on the nodes (version 20180531).
When I set the kernel command line on the host from the GUI, I actually
get a message saying that the host needs to be "reinstalled", not only
rebooted.
And indeed, if I only put the host into maintenance and then reboot, I see
that the kernel command line has not changed, and I'm still not able to
run a VM with huge pages on it.
Even after rebooting I get an exclamation point telling me that the host
needs to be reinstalled.
If I put it into maintenance again and reinstall, the host gets
reinstalled and activated again.
At this point I see that /boot/grub2/grub.cfg has been changed to reflect
the new kernel command line, but I think that instead of being activated,
the admin should be asked to reboot the node so that the changes are
applied.
Another thing:
the file /etc/default/grub has remained unchanged, so I presume that a
future update would wipe these new settings, or not?
Or is the current kernel command line simply carried over, bypassing the
default?
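(To check what actually got applied, one can compare the running kernel
arguments with the configured boot entry, e.g.:

    cat /proc/cmdline
    grubby --info=DEFAULT

assuming the change was made through grubby, which would also explain why
/etc/default/grub stays untouched.)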
Does this reinstall requirement remain true in oVirt if I use ovirt-node-ng?
So after reinstalling the host and rebooting, I have verified that on the
host one should have allocated at least enough huge pages to cover the
total RAM assigned to the VM, as specified in the article; otherwise the
VM fails with the same "not enough free huge pages... to allocate ..."
message... bad.. ;-(
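(On the host side this means pre-allocating accordingly; for instance, a
kernel command line fragment like

    hugepagesz=2M hugepages=24576

would cover a single 48 GB VM, since 48 GB / 2 MiB = 24576 pages.)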
In my case on a host I could have 3 VMs running (in general the load is
balanced between two hosts):
vm1 with 48 GB of RAM, needing 8200 x 2M huge pages (~16 GB)
vm2 with 24 GB of RAM, needing 4100 x 2M huge pages (~8 GB)
vm3 with 64 GB of RAM, needing 12300 x 2M huge pages (~24 GB)
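(The per-VM figures follow from the page counts: 8200 x 2 MiB = 16400
MiB ~ 16 GB, 4100 x 2 MiB = 8200 MiB ~ 8 GB, 12300 x 2 MiB = 24600 MiB
~ 24 GB.)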
With the new model I should allocate 136 GB (48 + 24 + 64) of huge pages
on the host, when to manage huge pages for the SGAs of the 3 Oracle
instances I would need only 48 GB (16 + 8 + 24) of them...
Do you confirm that in the case of a plain Linux server host the
reinstall is not necessary?
For this latter case (plain CentOS host) I can verify tomorrow, but I
don't have at hand an oVirt environment with ovirt-node-ng.
Thanks
Gianluca