Hi all,
I believe that the introduction of bug 1413150 (Add warning to change CL to match the
installed engine version) may have an unfortunate consequence: people actually moving
forward with the CL and DC upgrades without realizing the constraints on running existing
VMs. The periodic nagging is likely going to make people run into the following issue even
more frequently.
We have a per-VM cluster level override which takes care of compatibility on a CL update by
setting the VM's override to the original CL. That is visible in the VM properties, but
that's pretty much it - it's not very prominent at the moment and it can't be searched on
(bug 1454389). When the cluster update is made there is a dialog informing you, and there's
also a pending configuration change for the running VMs…but only until you shut the VM
down; from that point on the VM only has the CL override set.
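
As a side note, until searching works, something like this rough Python SDK sketch can at
least list the VMs that carry the override (I'm assuming the attribute is exposed as
custom_compatibility_version; the connection details are obviously placeholders):

  import ovirtsdk4 as sdk

  # Connect to the engine (URL/credentials are placeholders).
  connection = sdk.Connection(
      url='https://engine.example.com/ovirt-engine/api',
      username='admin@internal',
      password='password',
      ca_file='ca.pem',
  )

  # Print every VM that has a custom compatibility version override set.
  for vm in connection.system_service().vms_service().list():
      ccv = vm.custom_compatibility_version
      if ccv is not None:
          print('%s: override %s.%s' % (vm.name, ccv.major, ccv.minor))

  connection.close()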
But the real problem is with the DC, which AFAIK does not have an override capability and
currently does not have any checks for running VMs. With the above mechanism you can easily
end up with a VM with a CL override (say 3.6) and a mindlessly updated DC at 4.1…and once
you stop such a VM you won't be able to start it anymore, as there is a proper check for an
unsupported 3.6 CL VM in a newer DC (as implemented by bug 1436577 - Solve DC/Cluster
upgrade of VMs with now-unsupported custom compatibility level).
We either need to warn/block on the DC upgrade, or implement some kind of a DC override (I
guess this is a storage question?).
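
For the warn/block option the check itself would be small - conceptually something like the
sketch below, written against the Python SDK just to illustrate (the real thing would live
in the engine's DC update validation; again, custom_compatibility_version and the
DC/cluster links are my assumptions):

  from ovirtsdk4 import types

  def vms_blocking_dc_upgrade(connection, dc_id, target_major, target_minor):
      """Find running VMs whose CL override is below the intended DC version."""
      affected = []
      for vm in connection.system_service().vms_service().list():
          ccv = vm.custom_compatibility_version
          if ccv is None or vm.status != types.VmStatus.UP:
              continue  # no override, or not running
          cluster = connection.follow_link(vm.cluster)
          if cluster.data_center.id != dc_id:
              continue  # VM belongs to a different DC
          if (ccv.major, ccv.minor) < (target_major, target_minor):
              affected.append(vm.name)
      return affected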
Thoughts/ideas?
Thanks,
michal