Hello,
I have a cluster composed of 4 hosts, with 2 hosts in site A and 2 hosts in
site B.
The engine and the hosts are on the latest version, 4.4.8-6.
Site A is the primary site and its hosts have high SPM priority, while the
site B hosts have low SPM priority.
For critical VMs I created a cluster affinity group so that they preferably
run on the hosts in site A.
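(For context, the group is a positive, non-enforcing VM-to-host rule. Roughly
the equivalent of this rough, untested sketch with the Python SDK, ovirtsdk4,
where the connection details, names and IDs are placeholders:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=mycluster')[0]
groups_service = clusters_service.cluster_service(cluster.id).affinity_groups_service()

# Positive, non-enforcing (soft) VM-to-host rule: the VMs should
# preferably run on the listed hosts, but may run elsewhere.
group = groups_service.add(
    types.AffinityGroup(
        name='prefer-site-a',
        hosts_rule=types.AffinityRule(enabled=True, positive=True, enforcing=False),
        vms_rule=types.AffinityRule(enabled=False, positive=True, enforcing=False),
    )
)

# Populate the group with the site A hosts and the critical VMs
# (sub-service names as I remember them; IDs are placeholders).
group_service = groups_service.group_service(group.id)
for host_id in ('host-a1-id', 'host-a2-id'):
    group_service.hosts_service().add(types.Host(id=host_id))
group_service.vms_service().add(types.Vm(id='critical-vm-id'))

connection.close()
)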
If I migrate a VM from a host in site A to a host in site B, the migration
completes, but after a few seconds (anywhere from 10 to 30) the VM is
suddenly live migrated back to one of the site A hosts.
Two considerations:
. When the VM comes back to site A and I'm connected to the web admin GUI, I
see the pop-up message about the balancing operation in the bottom right:
https://drive.google.com/file/d/1lfm0AVwYKyyRL1qHh94AySpr3XAtV7lO/view?us...
But if I then go to the VM, cluster, or general events pane, I don't see any
direct feedback about the balancing that took place.
I only see the VM migration events:
Oct 1, 2021, 2:47:01 PM Migration completed (VM: impoldsrvdbpbi, Source:
xxxx, Destination: yyyy, Duration: 15 seconds, Total: 27 seconds, Actual
downtime: 67ms)
Oct 1, 2021, 2:46:34 PM Migration initiated by system (VM: impoldsrvdbpbi,
Source: xxxx, Destination: yyyy, Reason: Affinity rules enforcement).
Oct 1, 2021, 2:45:45 PM Migration completed (VM: impoldsrvdbpbi, Source:
yyyy, Destination: xxxx, Duration: 2 seconds, Total: 14 seconds, Actual
downtime: (N/A))
Oct 1, 2021, 2:45:30 PM Migration started (VM: impoldsrvdbpbi, Source:
yyyy, Destination: xxxx, User: gian@internal).
These lines do contain some information (Reason: Affinity rules enforcement),
but only inside the migration event itself.
Could it be useful to add an independent event line for the balancing
trigger that then causes the migration? (See the sketch after these two
points for what I mean about having to filter on that reason string.)
. In this case, could it be useful to warn the user that the VM is about to
be migrated back, so that they can reconsider before ending up with two
migrations whose final state is the same as the starting point?
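(As a side note on the first point: today the only trace seems to be the
reason string inside the migration event, so the only way I see to pick these
out programmatically is something like this rough, untested sketch with the
Python SDK, again with placeholder connection details:

import ovirtsdk4 as sdk

# Placeholder connection details
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

events_service = connection.system_service().events_service()

# Scan the most recent engine events and keep only the migrations that
# the scheduler started for affinity enforcement; I don't see a dedicated
# "balancing" event to filter on, only the reason string.
for event in events_service.list(max=200):
    description = event.description or ''
    if 'Affinity rules enforcement' in description:
        print(event.time, description)

connection.close()
)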
If I leave only one host in site A and put it into maintenance, the VMs are
correctly migrated to the hosts in site B; but even when the site A host
becomes available again, the migration back is not triggered. Is this
expected, or should they live migrate back to the hosts in site A?
Thanks,
Gianluca