Hi Lucia,
thanks for your suggestion. I ran several experiments with and without hugepages. It seems
that, when oVirt calculates the amount of available memory on the NUMA nodes, it does not
take the reserved hugepages into account. For example, this is the output of
"numactl --hardware" on my host at the moment:
----------------
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52
54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96 98 100 102 104 106 108
110
node 0 size: 257333 MB
node 0 free: 106431 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53
55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99 101 103 105 107 109
111
node 1 size: 257993 MB
node 1 free: 142009 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
-------------------
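For reference, "numactl --hardware" does not show the hugepage pool, so the per-node
reserved hugepages can be checked, for example, via sysfs (assuming 1 GiB hugepages here;
for 2 MiB pages the directory is hugepages-2048kB instead):
----------------
# number of reserved hugepages on each NUMA node (1 GiB pages assumed)
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
----------------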
I am able to launch my VM only if I set its Memory Size to 210 GB or less, i.e. roughly
double the free RAM of node 0, which is the node with less free RAM (106431 MB in the
snapshot above). I tried several times with different amounts of free memory, and this
behavior seems to be consistent. Note that, once started, the VM consumes the pre-allocated
reserved hugepages and has almost no impact on the free memory.
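One way to see this, for example, is to compare the per-node free memory and hugepage usage
before and after starting the VM:
----------------
# per-node MemFree and hugepage counters, from the node meminfo files
grep -E 'MemFree|HugePages_(Total|Free)' /sys/devices/system/node/node*/meminfo
----------------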
Do you think there is a reason why oVirt behaves this way, or is this a bug that I should
report in the GitHub repo?
Thanks a lot again,
--gianluca