Combining vNUMA and dedicated CPU Pinning Policy

Hello, as stated in the subject, I am trying to combine the "dedicated" CPU Pinning Policy with vNUMA. My host has 2 sockets, 28 cores per socket and 2 threads per core. On each socket I allocated 128 hugepages of 1 GB each. This is the output of "numactl --hardware":

------------
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96 98 100 102 104 106 108 110
node 0 size: 257333 MB
node 0 free: 122518 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99 101 103 105 107 109 111
node 1 size: 257993 MB
node 1 free: 124972 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
-----------

I would like to dedicate half of all the cores to a single guest, so I created a VM with 2 sockets, 14 cores per socket and 2 threads per core, using the "dedicated" CPU Pinning Policy. If I set "NUMA Node Count" to zero, the VM starts and works flawlessly; however, all virtual cores are allocated on the first physical socket. If I set "NUMA Node Count" to 2 (without pinning vNUMA nodes to host NUMA nodes), the VM cannot start and I get the error message: "The host xyz did not satisfy internal filter CpuPinning because it doesn't have enough CPUs for the dedicated CPU policy that the VM is set with." The same thing happens if I set "NUMA Node Count" to 1. Am I doing something wrong?
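For context, the per-node 1 GB hugepage reservation can be verified through sysfs; a minimal sketch using the standard kernel paths (the loop itself is only illustrative):

------------
# Reserved vs. still-unused 1 GB hugepages on each NUMA node
for node in /sys/devices/system/node/node*; do
    echo "$node:"
    cat "$node/hugepages/hugepages-1048576kB/nr_hugepages"   # pages reserved on this node
    cat "$node/hugepages/hugepages-1048576kB/free_hugepages" # pages not yet in use
done
------------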

Hi, when scheduling the dedicated CPUs with NUMA, we calculate the available memory on the NUMA nodes. Maybe the hugepages are the problem here. Could you please try your setup without them? Thanks, Lucia
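One way to test without hugepages, assuming no running VM is still using them, is to release the per-node reservation at runtime (a sketch; on some systems 1 GB pages can only be configured at boot via the hugepagesz= and hugepages= kernel parameters, in which case these writes will simply fail):

------------
# Drop the 1 GB hugepage reservation on both NUMA nodes (run as root)
echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
------------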

Hi Lucia, thanks for your suggestion. I made several experiments with and without hugepages. It seems that, when oVirt calculates the amount of available memory on the NUMA nodes, it does not take the reserved hugepages into account. For example, this is the output of "numactl --hardware" on my host at this moment:

----------------
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96 98 100 102 104 106 108 110
node 0 size: 257333 MB
node 0 free: 106431 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99 101 103 105 107 109 111
node 1 size: 257993 MB
node 1 free: 142009 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
-------------------

I am able to launch my VM only if I set its Memory Size to 210 GB or less (i.e., roughly double the free RAM on node 0, which is the node with less free RAM). I tried several times with different amounts of free memory, and this behavior is consistent. Note that, once started, the VM consumes the pre-allocated reserved hugepages and has almost no impact on the free memory. Do you think there is a reason why oVirt behaves this way, or is this a bug that I should report in the GitHub repo? Thanks a lot again, --gianluca
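The discrepancy is visible directly in the kernel's per-node statistics; a minimal sketch of the comparison (standard sysfs paths, node 0 shown as an example):

------------
# numactl's "free" tracks MemFree, which excludes memory already reserved
# as hugepages; the hugepage pool is reported separately per node:
grep MemFree /sys/devices/system/node/node0/meminfo
grep -E 'HugePages_(Total|Free)' /sys/devices/system/node/node0/meminfo
------------

So a scheduler that only looks at MemFree will undercount what is actually available to a hugepage-backed VM.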

Hi Gianluca, thanks for investigating it. This is definitely a bug; please file an issue on GitHub for that. Lucia