Hey,
This mostly happens to Windows VMs. When you update the cluster compatibility level, it should warn you about the VMs that might hit this issue and let you know before you resume the upgrade.


On Fri, Apr 22, 2022 at 8:04 PM David Sekne <david.sekne@gmail.com> wrote:
Hello,

I followed the documentation on updating the compatibility version. We had no issues with NICs; all VMs just needed a reboot. No other issues were reported for the VMs after the reboot (we have around 400 VMs), so I'm not sure which changes we would need to make on the VMs themselves.

I will try changing the emulated machine for the Windows VMs to see if it helps (other OSes have no issues).

Thank you for the help.

Regards,
David

On Fri, Apr 22, 2022 at 3:02 PM Erez Zarum <erezzarum@gmail.com> wrote:
Hey,
If you can't cope with it (i.e. logging in to the console and reconfiguring the NICs/disks), I recommend changing the VM's custom emulated machine to "pc-q35-rhel8.1.0", which is the default emulated machine when the cluster compatibility level is set to 4.4.
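If you would rather script that than click through the UI, here is a minimal sketch using the Python SDK (ovirt-engine-sdk4); the engine URL, credentials, and VM name below are placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder engine URL and credentials -- replace with your own.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=win2019-01')[0]  # placeholder VM name
    vm_service = vms_service.vm_service(vm.id)

    # Pin the custom emulated machine to the 4.4 default.
    # Takes effect on the next cold boot (stop/start), not on a guest reboot.
    vm_service.update(types.Vm(custom_emulated_machine='pc-q35-rhel8.1.0'))

    connection.close()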
Also note, if you rely heavily on VNC: libvirt 8 introduced a bug that rejects VNC passwords longer than 8 characters. oVirt by default tries to set a 12-character password; older libvirt versions simply ignored the excess and used the first 8 characters. I don't know in which libvirt version this changed, but on 4.4.10 you won't be able to use VNC, so I recommend switching to SPICE (which is actually much better).
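Switching the console type can be scripted the same way; a rough sketch, reusing the same placeholder connection details as above (the change applies on the next cold boot):

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Same placeholder connection details as in the previous sketch.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=win2019-01')[0]  # placeholder VM name
    consoles_service = vms_service.vm_service(vm.id).graphics_consoles_service()

    # Drop any VNC console, then add a SPICE one if not already present.
    consoles = consoles_service.list()
    for console in consoles:
        if console.protocol == types.GraphicsType.VNC:
            consoles_service.console_service(console.id).remove()
    if not any(c.protocol == types.GraphicsType.SPICE for c in consoles):
        consoles_service.add(types.GraphicsConsole(protocol=types.GraphicsType.SPICE))

    connection.close()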



On Fri, Apr 22, 2022 at 2:57 PM David Sekne <david.sekne@gmail.com> wrote:
Hello,

I have noticed that some of our Windows VMs (2012 - 2022) randomly fail to boot when a reboot is initiated from the guest OS. As far as I can tell, this started happening after we raised the cluster compatibility from 4.4 to 4.5 (it's 4.6 now). To fix it, the VM needs to be stopped and started.

We are running oVirt 4.4.10. 

I can't see much in the logs when grepping for a specific VM that had the issue.

Example VM ID is: daf33e97-a76f-4b82-b4f2-20fa4891c88b

I'm attaching the logs:

- Initial hypervisor the VM was running on (reboot initiated at 4:06:38 AM): vdsm-1.log
- Second hypervisor where the VM ended up after it was stopped and started again (done at 7:45:43 AM): vdsm-2.log
- Engine log: engine.log
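In case it helps, a trivial way to pull the relevant lines out of them (a sketch, assuming the three attached files are in the working directory):

    # Print every line mentioning the affected VM across the attached logs.
    VM_ID = 'daf33e97-a76f-4b82-b4f2-20fa4891c88b'

    for path in ('vdsm-1.log', 'vdsm-2.log', 'engine.log'):
        with open(path, errors='replace') as f:
            for line in f:
                if VM_ID in line:
                    print(f'{path}: {line}', end='')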

Has anyone noticed similar issues and can provide some feedback / help?

Regards,
David