Hello,
As stated in the subject, I am trying to combine the "dedicated" CPU Pinning
Policy with vNUMA. My host has 2 sockets, 28 cores per socket, and 2 threads per core. In
each socket I allocated 128 hugepages of 1 GB each. This is the output of "numactl
--hardware":
------------
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52
54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96 98 100 102 104 106 108
110
node 0 size: 257333 MB
node 0 free: 122518 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53
55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99 101 103 105 107 109
111
node 1 size: 257993 MB
node 1 free: 124972 MB
node distances:
node 0 1
0: 10 21
1: 21 10
-----------
I would like to dedicate one half of all the cores to a single guest, so I created a VM with
2 sockets, 14 cores per socket, and 2 threads per core, using the "dedicated" CPU
Pinning Policy. If I set "NUMA Node Count" to zero, the VM starts and works
flawlessly; however, all virtual cores are allocated on the first physical socket. If I set
"NUMA Node Count" to 2 (without pinning vNUMA nodes to host NUMA nodes), the VM
fails to start with the error message: "The host xyz did not satisfy
internal filter CpuPinning because it doesn't have enough CPUs for the dedicated CPU
policy that the VM is set with." The same thing happens if I set "NUMA Node
Count" to 1.
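Just to rule out a sizing mistake on my side, here is a quick sketch of the arithmetic, using the figures from the numactl output above (plain Python; the variable names are mine):

```python
# Host topology as reported by numactl/lscpu on my machine.
sockets, cores_per_socket, threads_per_core = 2, 28, 2
host_cpus = sockets * cores_per_socket * threads_per_core
print(host_cpus)  # 112 logical CPUs (0-111, matching the numactl listing)

# Guest topology as configured in the VM.
guest_sockets, guest_cores, guest_threads = 2, 14, 2
guest_vcpus = guest_sockets * guest_cores * guest_threads
print(guest_vcpus)  # 56 vCPUs

# With the "dedicated" policy each vCPU should get its own host thread,
# so the guest ought to consume exactly half of the host's CPUs.
print(guest_vcpus * 2 == host_cpus)  # True
```

So, as far as I can tell, the host has more than enough CPUs for this guest.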
Am I doing something wrong?