On Sat, Mar 4, 2017 at 8:42 PM, Bill James <bill.james@j2.com> wrote:
I have a hardware node where, for whatever reason, most of the VMs showed a status of "?", even though the VMs were up and running fine.

This indicates that the engine failed to monitor the host (and therefore switched the VMs it knew were running on it to the 'unknown' state).
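
As a quick sanity check you can ask libvirt directly what is actually running on the host, independent of what the engine shows. A minimal read-only sketch using the libvirt Python bindings (the 'qemu:///system' URI and the state-name mapping below are illustrative assumptions, not anything taken from your setup; the bindings come from the libvirt-python package on CentOS 7):

    #!/usr/bin/env python
    # Read-only query of what libvirt reports on the host, independent of the engine.
    import libvirt

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_RUNNING: "running",
        libvirt.VIR_DOMAIN_PAUSED: "paused",
        libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
    }

    # Open a read-only connection so this cannot interfere with vdsm.
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            print("%-40s %s" % (dom.name(), STATE_NAMES.get(state, "other (%d)" % state)))
    finally:
        conn.close()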
 
The node was running oVirt 3.6, so I decided to upgrade it to 4.1.0.4-1 to match most of the rest of the cluster, including the engine.
After the upgrade, some VMs start fine on the upgraded host, while others fail to start on that host and instead migrate themselves to another host.
If I try to manually migrate them to the recently upgraded host, the migration fails. I have yet to find anything in the logs that says why they fail.
I am able to "Run Once" the VM and tell it to start on this host, and it starts fine.
Why is migration failing? 

ovirt-engine-4.1.0.4-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64


2017-03-04 10:16:51,716-08 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-48) [519095c3-83ed-4478-8dd4-f432db6db140] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Correlation ID: 26d1ecba-8711-408d-b9d5-6461a4aab4e5, Job ID: 546e0e11-9b4b-48d4-bd50-2f525d049ae2, Call Stack: null, Custom Event ID: -1, Message: Migration failed  (VM: j2es2.test.j2noc.com, Source: ovirt6.test.j2noc.com, Destination: ovirt4.test.j2noc.com)


(engine.log attached)

The engine log doesn't contain the reason.
Please file a bug (https://bugzilla.redhat.com) and attach VDSM logs from the mentioned source and destination hosts that cover the time of the migration attempt.
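
For example, something along these lines can pull out the interesting window from /var/log/vdsm/vdsm.log on each of the two hosts (a rough sketch only; the VM name and timestamp window come from the event above, and the relevant lines may also be in the rotated vdsm.log.* files):

    #!/usr/bin/env python
    # Collect the VDSM log lines around the failed migration so they can be
    # attached to the bug. Run on both the source and destination hosts.
    import re

    LOG = "/var/log/vdsm/vdsm.log"          # default VDSM log location
    VM_NAME = "j2es2.test.j2noc.com"        # VM from the failed-migration event
    WINDOW = re.compile(r"^2017-03-04 10:1[4-9]")  # a few minutes around 10:16

    with open(LOG) as f:
        for line in f:
            if WINDOW.match(line) and (VM_NAME in line or "migrat" in line.lower()):
                print(line.rstrip())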
 

Thanks.


_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users