Abnormal behavior on the reinstalled host

Hello everyone, I need help again with a host that I have reinstalled; I don't understand why it isn't working properly. I have managed to get the host status to UP, but I can't migrate VMs to this host. I can start a VM on this host and migrate it away to another host, but it cannot migrate back. These are the errors I see:

ID: 128 2024-03-19 20:34:56,809+01 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-77603) [6ea57120-4e64-4b1d-a31f-1072f69c478b] EVENT_ID: VM_MIGRATION_TRYING_RERUN(128), Failed to migrate VM srvlinux to Host ovirt51.mydomain due to an Error: Fatal error during migration. Trying to migrate to another Host.

It also happens that I cannot access the noVNC console on this host even though the VM is running. It gives me this error: "Something went wrong, connection is closed". And if I put the host in maintenance mode, it does not activate again and gives an error:

Operation Canceled
Error while executing action: ovirt51.mydomain: Cannot activate Host. Host has no unique id.

I resolved that by reinstalling the host and then activating it, but that doesn't seem normal to me. Does anyone know how to resolve these issues? Thank you.
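The "Host has no unique id" error usually points at the VDSM host-id file being missing or empty after a reinstall. A minimal sketch of a check, assuming the default path /etc/vdsm/vdsm.id used on oVirt 4.5 hosts:

```shell
# Minimal sketch, assuming the default VDSM host-id path /etc/vdsm/vdsm.id.
# "Cannot activate Host. Host has no unique id." typically means this file
# is missing or empty after a reinstall.
ensure_vdsm_id() {
    idfile="${1:-/etc/vdsm/vdsm.id}"
    if [ ! -s "$idfile" ]; then
        # Generate a fresh UUID (uuidgen from util-linux, with a kernel fallback)
        uuidgen > "$idfile" 2>/dev/null \
            || cat /proc/sys/kernel/random/uuid > "$idfile"
    fi
    cat "$idfile"
}
```

After writing the id I would restart vdsmd (`systemctl restart vdsmd`) before trying to activate the host again; treat the exact recovery procedure as an assumption and check the engine documentation for your version.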
OS Version: RHEL - 8.9 - 1.8.el8
OS Description: Rocky Linux 8.9 (Green Obsidian)
Kernel Version: 4.18.0 - 513.18.1.el8_9.x86_64
KVM Version: 6.2.0 - 40.module+el8.9.0+1654+f4df84c4.2
LIBVIRT Version: libvirt-8.0.0-22.module+el8.9.0+1405+b6048078
VDSM Version: vdsm-4.50.6-3.git5d82b9e88.el8
SPICE Version: 0.14.3 - 4.el8
GlusterFS Version: glusterfs-10.5-1.el8s
CEPH Version: librbd1-16.2.15-1.el8s
Open vSwitch Version: openvswitch-2.15-4.el8
Nmstate Version: nmstate-1.4.5-2.el8_9
Kernel Features: MDS: (Not affected), L1TF: (Not affected), SRBDS: (Not affected), MELTDOWN: (Not affected), RETBLEED: (Not affected), SPECTRE_V1: (Mitigation: usercopy/swapgs barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Enhanced / Automatic IBRS, IBPB: conditional, RSB filling, PBRSB-eIBRS: SW sequence), ITLB_MULTIHIT: (Not affected), MMIO_STALE_DATA: (Mitigation: Clear CPU buffers; SMT vulnerable), TSX_ASYNC_ABORT: (Not affected), SPEC_STORE_BYPASS: (Mitigation: Speculative Store Bypass disabled via prctl), GATHER_DATA_SAMPLING: (Mitigation: Microcode), SPEC_RSTACK_OVERFLOW: (Not affected)
VNC Encryption: Enabled
OVN configured: No

Hello, it is possible that I have not explained it well. Does anyone know why I can't migrate VMs to the host I just reinstalled and added back to my three-server cluster? And why can't I open the noVNC remote console on this host when I boot a VM on it? Finally, why can't I activate the node after putting it in maintenance mode without having to reinstall it from the engine console? Thank you so much for your help.

Sounds like a dependency is not fulfilled. Are all the storage domains and required networks configured on the added node?
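One quick way to sanity-check the network side of that question: oVirt normally backs each logical network with a Linux bridge of the same name on the host. A sketch, where "ovirtmgmt" is the default management network and the other names are hypothetical placeholders for your cluster's required networks:

```shell
# Sketch: report required logical networks that have no matching interface on
# this node. oVirt logical networks are normally backed by a Linux bridge with
# the same name; "ovirtmgmt" is the default management network, the rest are
# hypothetical examples.
missing_networks() {
    for net in $1; do
        [ -d "/sys/class/net/$net" ] || echo "$net"
    done
}
# e.g. on the host: missing_networks "ovirtmgmt migration gluster"
```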

Hi, unfortunately that is the current behavior :-( You need to update all nodes to the same version and then it will work. It was the same for me: with one updated node, migration didn't work, but stopping and starting the VM did. When I updated a second node, migration between the new nodes was OK, but not between old and new nodes. When I updated all of them to the latest version, everything was OK. It is really strange, and it started a couple of months ago. Jirka On 3/21/24 09:30, Ricardo OT wrote:
Hello, it is possible that I have not explained it well. Does anyone know why I can't migrate VMs to the host I just reinstalled and added to my three server cluster?
And why can't I open the noVNC remote console on this host if I boot the VM on it?
Finally, why can't I activate the node after putting it in maintenance mode without having to reinstall it from the engine console?
Thank you so much for your help.
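Following Jirka's point about mixed versions, it may help to confirm that every node really runs an identical vdsm build before expecting migration to work. A sketch, where the hostnames are hypothetical and passwordless ssh from the engine machine is assumed:

```shell
# Sketch: succeed only if all given version strings are identical.
same_version() {
    first="$1"
    for v in "$@"; do
        [ "$v" = "$first" ] || return 1
    done
}
# Hypothetical usage from the engine (assumes passwordless ssh to each node):
# v50=$(ssh ovirt50 rpm -q vdsm)
# v51=$(ssh ovirt51 rpm -q vdsm)
# v52=$(ssh ovirt52 rpm -q vdsm)
# same_version "$v50" "$v51" "$v52" && echo "vdsm versions match"
```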

Hello again, thanks for your help. I have upgraded all hosts, but that did not resolve my errors:
- I can't migrate a VM directly to the reinstalled host. In no case can I migrate VMs from nodes 50 and 52 to the fully reinstalled node 51. To move a VM to node 51 I have to power cycle it. Once the VM is running on node 51, I can move it to node 50 or 52, but it cannot return to node 51.
- I can't use the noVNC console on node 51.
- To bring the node UP from maintenance, I have to reinstall the host and deselect reboot. If I try to activate it directly, it gives me this error: "Cannot activate Host. Host has no unique id."
- Another problem is that node 51 cannot host the hosted-engine even though I marked it for hosted-engine deployment.
Does anyone have any idea why? Thank you. I'm very frustrated. This is a 3-node hosted-engine cluster over GlusterFS.
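For the migration and console failures, one thing worth ruling out from a working node is basic reachability of the ports oVirt uses on the reinstalled host. A sketch, reusing the ovirt51.mydomain hostname from the log above; the port numbers are the usual oVirt defaults (54321 vdsm, 16514 libvirt TLS, 49152 and up for live migration), so verify them against your firewall setup:

```shell
# Sketch: TCP reachability check using bash's /dev/tcp redirection.
check_port() {
    host="$1"; port="$2"
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port CLOSED"
    fi
}
# Hypothetical usage from node 50 or 52:
# for p in 54321 16514 49152; do check_port ovirt51.mydomain "$p"; done
```

For the noVNC "connection is closed" error specifically, the console goes through the engine's websocket proxy (default port 6100) to the host's VNC ports (5900 and up); a stale host certificate after the reinstall is a common cause, so re-enrolling the host's certificates from the engine is worth checking as well.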
participants (4)
- Claus Serbe
- Jirka Simon
- Ricardo OT
- ricardoot@gmail.com