Guests crashing during live migration (NUMA config issue)

Hello,

during updates of our physical nodes running oVirt 4.2.3 I had to live-migrate all VMs to evacuate them from the hosts. This caused roughly 10% of guests to end up crashed/shut down after live migration.

Errors in the logs are:

2018-05-31T15:15:51.273805Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
2018-05-31T15:16:54.596554Z qemu-kvm: Unknown combination of migration flags: 0
2018-05-31T15:16:54.597196Z qemu-kvm: error while loading state section id 3(ram)
2018-05-31T15:16:54.598491Z qemu-kvm: load of migration failed: Invalid argument
2018-05-31 15:16:55.010+0000: shutting down, reason=crashed

Is there anything I can do about this? The hardware of all nodes is 100% identical.

-- Andreas Balg

Hello,

I ran into a similar error. After updating the management to oVirt Manager: CentOS 7.5, ovirt: 4.2.3.8-1.el7, I tried to put the first host into maintenance mode to do the update. For about four hours now, the server has been trying to migrate all of the VMs without success. On the receiving host, /var/log/libvirt/qemu shows the following error:

2018-06-07T12:57:54.420311Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
2018-06-07T12:57:54.420483Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
2018-06-07 13:00:03.692+0000: shutting down, reason=failed

Physical host:
OS version: RHEL 7 - 4.1708.el7.centos
Kernel version: 3.10.0-693.21.1.el7.x86_64
KVM version: 2.9.0-16.el7_4.14.1
libvirt version: libvirt-3.2.0-14.el7_4.9
VDSM version: vdsm-4.19.45-1.el7.centos
Kernel features: PTI: 1, IBPB: 0, IBRS: 0

Hans-Joachim

On 7 Jun 2018, at 15:52, rni@chef.net wrote:
Hello,
I ran into a similar error. After updating the management to oVirt Manager: CentOS 7.5, ovirt: 4.2.3.8-1.el7, I tried to put the first host into maintenance mode to do the update. For about four hours now, the server has been trying to migrate all of the VMs without success. On the receiving host, /var/log/libvirt/qemu shows the following error:

2018-06-07T12:57:54.420311Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
2018-06-07T12:57:54.420483Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
these are not errors, just warnings, and they are unrelated
2018-06-07 13:00:03.692+0000: shutting down, reason=failed
what does it say on the source host?
Physical host:
OS version: RHEL 7 - 4.1708.el7.centos
Kernel version: 3.10.0-693.21.1.el7.x86_64
KVM version: 2.9.0-16.el7_4.14.1
libvirt version: libvirt-3.2.0-14.el7_4.9
VDSM version: vdsm-4.19.45-1.el7.centos
Are both hosts exactly the same? When was the original VM started? Did you make any upgrades while it was running, perhaps a long time ago?

Thanks,
michal
Kernel features: PTI: 1, IBPB: 0, IBRS: 0
Hans-Joachim

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/GT7WNZNMWV5SBR...
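[Editor's note: Michal's "are both hosts exactly the same?" question can be answered mechanically by diffing the component versions each host reports. A minimal sketch (plain Python, illustrative only; the version strings are the ones quoted in this thread, and the dicts are hand-built, not something vdsm exports in this form):]

```python
# Sketch: compare the component versions reported by two hosts
# (e.g. from the host's General tab, or rpm -q on each host).
# The values below are the ones quoted in this thread.

source = {
    "kernel": "3.10.0-693.21.1.el7.x86_64",
    "qemu-kvm-ev": "2.9.0-16.el7_4.14.1",
    "libvirt": "libvirt-3.2.0-14.el7_4.9",
    "vdsm": "vdsm-4.19.45-1.el7.centos",
}
destination = dict(source)  # identical in the case reported here

def version_diff(a, b):
    """Return {component: (source_version, destination_version)} for mismatches."""
    return {k: (a.get(k), b.get(k))
            for k in set(a) | set(b) if a.get(k) != b.get(k)}

print(version_diff(source, destination))  # {} when both hosts match
```

An empty diff only confirms the packages installed now; as Michal points out later in the thread, a VM started before an upgrade can still be running an older qemu binary than the one currently installed.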

On 7 Jun 2018, at 11:27, Balg, Andreas <Andreas.Balg@haufe.com> wrote:
Hello,
during updates of our physical nodes running oVirt 4.2.3 I had to live-migrate all VMs to evacuate them from the hosts. This caused roughly 10% of guests to end up crashed/shut down after live migration.
Errors in the Logs are:
2018-05-31T15:15:51.273805Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
2018-05-31T15:16:54.596554Z qemu-kvm: Unknown combination of migration flags: 0
2018-05-31T15:16:54.597196Z qemu-kvm: error while loading state section id 3(ram)
2018-05-31T15:16:54.598491Z qemu-kvm: load of migration failed: Invalid argument
2018-05-31 15:16:55.010+0000: shutting down, reason=crashed
this is typically due to using some unsupported version. Can you include the whole qemu log and versions from both source and destination hypervisors, and verify there were no updates run while that VM was running?

Thanks,
michal
is there anything I can do about this? The hardware of all nodes is 100% identical
-- Andreas Balg

Sorry Michal, this information is already gone. The versions on both hosts should have been identical, because that is why the machines migrated: I was running "update" from the oVirt Engine UI on those hosts. But of course, towards the end, for the final host to be updated, all the other (target) hosts were already running the updated version. I guess this cannot be done in another way, though...

On Thursday, 2018-06-07 at 20:00 +0200, Michal Skrivanek wrote:
On 7 Jun 2018, at 11:27, Balg, Andreas <Andreas.Balg@haufe.com> wrote:
Hello,
during updates of our physical nodes running oVirt 4.2.3 I had to live-migrate all VMs to evacuate them from the hosts. This caused roughly 10% of guests to end up crashed/shut down after live migration.
Errors in the Logs are:
2018-05-31T15:15:51.273805Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
2018-05-31T15:16:54.596554Z qemu-kvm: Unknown combination of migration flags: 0
2018-05-31T15:16:54.597196Z qemu-kvm: error while loading state section id 3(ram)
2018-05-31T15:16:54.598491Z qemu-kvm: load of migration failed: Invalid argument
2018-05-31 15:16:55.010+0000: shutting down, reason=crashed
this is typically due to using some unsupported version. Can you include the whole qemu log and versions from both source and destination hypervisors, and verify there were no updates run while that VM was running?
Thanks, michal
is there anything I can do about this? The hardware of all nodes is 100% identical
-- Andreas Balg
--
Andreas Balg
Senior Site Reliability Engineer DevOps
------------------------------------------------------------------
Haufe-umantis AG
Ein Unternehmen der Haufe Gruppe (a company of the Haufe Group)
Unterstrasse 11, CH-9001 St.Gallen
Tel. +41 71 224 01 52, Fax +41 71 224 01 02
E-Mail: andreas.balg@haufe.com

Hello,

sorry for my late answer... I've been off for a long weekend. :-)

So, source:
OS version: RHEL 7 - 4.1708.el7.centos
Kernel version: 3.10.0-693.21.1.el7.x86_64
KVM version: 2.9.0-16.el7_4.14.1
libvirt version: libvirt-3.2.0-14.el7_4.9
VDSM version: vdsm-4.19.45-1.el7.centos
SPICE version: 0.12.8-2.el7.1
CEPH version: librbd1-0.94.5-2.el7
Kernel features: PTI: 1, IBPB: 0, IBRS: 0

Destination:
OS version: RHEL 7 - 4.1708.el7.centos
Kernel version: 3.10.0-693.21.1.el7.x86_64
KVM version: 2.9.0-16.el7_4.14.1
libvirt version: libvirt-3.2.0-14.el7_4.9
VDSM version: vdsm-4.19.45-1.el7.centos
SPICE version: 0.12.8-2.el7.1
CEPH version: librbd1-0.94.5-2.el7
Kernel features: PTI: 1, IBPB: 0, IBRS: 0

qemu.log on destination:

2018-06-11T09:13:20.613605Z qemu-kvm: terminating on signal 15 from pid 3008 (/usr/sbin/libvirtd)
2018-06-11 09:21:56.071+0000: starting up libvirt version: 3.2.0, package: 14.el7_4.9 (CentOS BuildSystem <http://bugs.centos.org>, 2018-03-07-13:51:24, x86-01.bsys.centos.org), qemu version: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.14.1), hostname: abc.yyyy.xxxxx.com
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=bbgas102,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1932-abcas102/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Westmere,vme=on,pclmuldq=on,x2apic=on,hypervisor=on,arat=on -m size=2097152k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 4,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-3,mem=2048 -uuid 4aff4193-ba75-481d-92b3-59b62cd8b111 -smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=32393735-3933-5A43-4A32-34333046564B,uuid=4aff4193-ba75-481d-92b3-59b62cd8b111' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1932-bbgas102/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2018-06-11T09:21:55,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot menu=on,splash-time=10000,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/68b8aaff-770d-4a08-800b-0c15e94efaa8/images/33235bbf-0156-421e-9391-0749247b6ba6/0d2949b6-af0f-4de2-b29a-10dcb39ad857,format=raw,if=none,id=drive-virtio-disk0,serial=33235bbf-0156-421e-9391-0749247b6ba6,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/68b8aaff-770d-4a08-800b-0c15e94efaa8/images/5149067a-b18c-41cf-a355-033317291148/f0afa250-6704-410c-b9be-60b99cb28ce9,format=raw,if=none,id=drive-virtio-disk1,serial=5149067a-b18c-41cf-a355-033317291148,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=37,id=hostnet0,vhost=on,vhostfd=40 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bd:ed:0a,bus=pci.0,addr=0x7 -netdev tap,fd=42,id=hostnet1,vhost=on,vhostfd=43 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:1a:4a:bd:ed:26,bus=pci.0,addr=0x8 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4aff4193-ba75-481d-92b3-59b62cd8b111.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4aff4193-ba75-481d-92b3-59b62cd8b111.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.157.8.40:3,password -k de -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
2018-06-11T09:21:56.171158Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 4 5 6 7 8 9 10 11 12 13 14 15
2018-06-11T09:21:56.171319Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config

qemu.log on source:

2018-06-11 09:03:32.701+0000: initiating migration
2018-06-11 09:10:39.758+0000: initiating migration
2018-06-11 09:13:04.323+0000: initiating migration
2018-06-11 09:18:32.877+0000: initiating migration
2018-06-11 09:21:56.308+0000: initiating migration

vdsm.log on source:

2018-06-11 11:21:54,331+0200 ERROR (migsrc/9d52cd9b) [virt.vm] (vmId='9d52cd9b-919d-40ff-8036-5f94f6b02019') Operation abgebrochen: Migrations-Job: abgebrochen durch Client (migration:287)
[German: "Operation aborted: migration job: aborted by client"]
2018-06-11 11:21:54,517+0200 ERROR (migsrc/9d52cd9b) [virt.vm] (vmId='9d52cd9b-919d-40ff-8036-5f94f6b02019') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in run
    self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in _startUnderlyingMigration
    self._perform_with_downtime_thread(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 556, in _perform_with_downtime_thread
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 529, in _perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1006, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1679, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: Operation abgebrochen: Migrations-Job: abgebrochen durch Client

vdsm.log on destination:

2018-06-11 11:21:57,317+0200 INFO (vm/a54af7cd) [vdsm.api] FINISH prepareImage return={'info': {'path': u'/rhev/data-center/mnt/blockSD/68b8aaff-770d-4a08-800b-0c15e94efaa8/images/1fe4d656-8f59-4221-a859-
2018-06-11 11:21:57,318+0200 INFO (vm/a54af7cd) [vds] prepared volume path: /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/68b8aaff-770d-4a08-800b-0c15e94efaa8/images/1fe4d656-8f59-4221-a859-2f7bc
2018-06-11 11:21:57,361+0200 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call VM.migrationCreate succeeded in 0.34 seconds (__init__:539)
2018-06-11 11:21:58,847+0200 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:539)
2018-06-11 11:21:58,856+0200 ERROR (jsonrpc/4) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 202, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
    io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
    'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
    result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
    res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
    % self.vmid)
NotConnectedError: VM u'4aff4193-ba75-481d-92b3-59b62cd8b111' was not started yet or was shut down
2018-06-11 11:21:58,857+0200 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies failed (error -32603) in 0.00 seconds (__init__:539)
2018-06-11 11:22:03,161+0200 INFO (jsonrpc/2) [vdsm.api] START repoStats(options=None) from=::ffff:10.157.8.36,57852, task_id=24fa3f9e-9110-4da9-a41f-d385036d6fef (api:46)
2018-06-11 11:22:03,162+0200 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats return={u'48055e27-f1ca-466a-8a2c-e191c34f0226': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000301869
2018-06-11 11:22:03,180+0200 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.02 seconds (__init__:539)
2018-06-11 11:22:06,251+0200 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.01 seconds (__init__:539)
2018-06-11 11:22:09,028+0200 INFO (periodic/3177) [vdsm.api] START repoStats(options=None) from=internal, task_id=5bae7d6e-588d-4098-b4b9-48ead80060eb (api:46)

Normally, there shouldn't be any modifications.

Thank you for your help!
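[Editor's note: the NUMA warnings in the destination qemu.log follow directly from the command line above: the guest is defined with -smp 4,maxcpus=16 but its single NUMA node only covers cpus=0-3, so CPUs 4-15 belong to no node. A minimal sketch (plain Python, not qemu or vdsm code) that reproduces the warned CPU list:]

```python
# Sketch: derive qemu's "CPU(s) not present in any NUMA nodes" list
# from a domain's -smp maxcpus=... and -numa node,cpus=... settings.

def unassigned_cpus(maxcpus, numa_node_ranges):
    """numa_node_ranges: list of (start, end) inclusive CPU ranges,
    one tuple per -numa node,cpus=start-end argument."""
    assigned = set()
    for start, end in numa_node_ranges:
        assigned.update(range(start, end + 1))
    return sorted(set(range(maxcpus)) - assigned)

# The guest in this thread: -smp 4,maxcpus=16 with -numa node,cpus=0-3
print(unassigned_cpus(16, [(0, 3)]))  # [4, 5, ..., 15], matching the warning
```

As Michal notes upthread, this is only a warning; a hotplug-ready config that describes all 16 CPUs in NUMA nodes would silence it, but it does not itself cause the migration failure.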

Hello,

my server is still trying to migrate the VM. Maybe the engine log can help a little to solve this issue. I wonder about the "Keystore was tampered with, or password was incorrect" ERROR.

2018-06-14 13:13:30,300+02 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-246968) [] EVENT_ID: VM_MIGRATION_START_SYSTEM_INITIATED(67), Migration initiated by system (VM: vm_to_migrate, Source: SOURCE, Destination: DESTINATION, Reason: Host preparing for maintenance).
2018-06-14 13:13:32,886+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] Fetched 8 VMs from VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'
2018-06-14 13:13:32,887+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
2018-06-14 13:13:32,887+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the refresh until migration is done
2018-06-14 13:13:47,939+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-42) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
2018-06-14 13:13:47,939+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-42) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the refresh until migration is done
2018-06-14 13:14:02,993+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
2018-06-14 13:14:02,993+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-13) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the refresh until migration is done
2018-06-14 13:14:18,042+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
2018-06-14 13:14:18,042+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the refresh until migration is done
2018-06-14 13:14:33,090+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
2018-06-14 13:14:33,090+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-85) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the refresh until migration is done
2018-06-14 13:14:38,356+02 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [6d991f10] Lock Acquired to object 'EngineLock:{exclusiveLocks='[d1457e7e-f29d-49ac-b635-2457a7cbc59f=PROVIDER]', sharedLocks=''}'
2018-06-14 13:14:38,369+02 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [6d991f10] Running command: SyncNetworkProviderCommand internal: true.
2018-06-14 13:14:38,370+02 WARN [org.ovirt.engine.core.bll.provider.network.openstack.CustomizedRESTEasyConnector] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [6d991f10] Cannot register external providers trust store: java.io.IOException: Keystore was tampered with, or password was incorrect
2018-06-14 13:14:38,379+02 ERROR [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [6d991f10] Command 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand' failed: EngineException: (Failed with error unable to find valid certification path to requested target and code 5050)
2018-06-14 13:14:38,382+02 INFO [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [6d991f10] Lock freed to object 'EngineLock:{exclusiveLocks='[d1457e7e-f29d-49ac-b635-2457a7cbc59f=PROVIDER]', sharedLocks=''}'
2018-06-14 13:14:48,139+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-92) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
2018-06-14 13:14:48,139+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-92) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the refresh until migration is done
2018-06-14 13:15:03,193+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
2018-06-14 13:15:03,193+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the refresh until migration is done
2018-06-14 13:15:18,242+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
2018-06-14 13:15:18,242+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-63) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the refresh until migration is done
2018-06-14 13:15:33,293+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-17) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) (expected on 'ea7bae8f-d966-4baa-b8b4-a5522b88ec3a')
2018-06-14 13:15:33,293+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-17) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111' is migrating to VDS 'f3749f1e-a68d-424f-aa3a-f0ec996df498'(DESTINATION) ignoring it in the refresh until migration is done
2018-06-14 13:15:45,377+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [] VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) moved from 'MigratingFrom' --> 'Up'
2018-06-14 13:15:45,377+02 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [] Adding VM '4aff4193-ba75-481d-92b3-59b62cd8b111'(vm_to_migrate) to re-run list
2018-06-14 13:15:45,400+02 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-10) [] Rerun VM '4aff4193-ba75-481d-92b3-59b62cd8b111'. Called from VDS 'SOURCE'
2018-06-14 13:15:45,451+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-247023) [] START, MigrateStatusVDSCommand(HostName = SOURCE, MigrateStatusVDSCommandParameters:{hostId='ea7bae8f-d966-4baa-b8b4-a5522b88ec3a', vmId='4aff4193-ba75-481d-92b3-59b62cd8b111'}), log id: 25f95334
2018-06-14 13:15:45,453+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-247023) [] FINISH, MigrateStatusVDSCommand, log id: 25f95334
2018-06-14 13:15:45,478+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-247023) [] EVENT_ID: VM_MIGRATION_TRYING_RERUN(128), Failed to migrate VM vm_to_migrate to Host DESTINATION. Trying to migrate to another Host.
2018-06-14 13:15:45,557+02 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] (EE-ManagedThreadFactory-engine-Thread-247023) [] Running command: MigrateVmCommand internal: true. Entities affected : ID: 4aff4193-ba75-481d-92b3-59b62cd8b111 Type: VMAction group MIGRATE_VM with role type USER
2018-06-14 13:15:45,595+02 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-247023) [] START, MigrateVDSCommand( MigrateVDSCommandParameters:{hostId='ea7bae8f-d966-4baa-b8b4-a5522b88ec3a', vmId='4aff4193-ba75-481d-92b3-59b62cd8b111', srcHost='10.157.8.37', dstVdsId='69ebacd7-ee55-4c52-abf1-437a14d5fb0d', dstHost='10.157.8.42:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='false', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='false', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='null', dstQemu='192.168.1.113'}), log id: 736d023f
2018-06-14 13:15:45,595+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-247023) [] START, MigrateBrokerVDSCommand(HostName = SOURCE, MigrateVDSCommandParameters:{hostId='ea7bae8f-d966-4baa-b8b4-a5522b88ec3a', vmId='4aff4193-ba75-481d-92b3-59b62cd8b111', srcHost='10.157.8.37', dstVdsId='69ebacd7-ee55-4c52-abf1-437a14d5fb0d', dstHost='10.157.8.42:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='false', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='false', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='null', dstQemu='192.168.1.113'}), log id: 58572c5b
2018-06-14 13:15:45,601+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-247023) [] FINISH, MigrateBrokerVDSCommand, log id: 58572c5b
2018-06-14 13:15:45,604+02 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-247023) [] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 736d023f
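[Editor's note: the same VmAnalyzer pair repeats roughly every 15 seconds until the rerun event, so one quick way to see how long a migration has been looping is to tally these patterns in engine.log. A hedged sketch (the regexes match the lines quoted above; the sample lines are abbreviated stand-ins, not a full log):]

```python
# Sketch: count the recurring engine.log patterns that indicate a
# stuck or retried migration. Feed it the lines of engine.log.
import re
from collections import Counter

def tally_migration_events(lines):
    """Return a Counter of how often each stuck-migration pattern appears."""
    patterns = {
        "unexpectedly MigratingTo": r"unexpectedly detected as 'MigratingTo'",
        "rerun": r"VM_MIGRATION_TRYING_RERUN",
    }
    counts = Counter()
    for line in lines:
        for name, pat in patterns.items():
            if re.search(pat, line):
                counts[name] += 1
    return counts

# Abbreviated sample lines in the shape of the log above
sample = [
    "... VM 'x'(vm_to_migrate) was unexpectedly detected as 'MigratingTo' on VDS ...",
    "... EVENT_ID: VM_MIGRATION_TRYING_RERUN(128), Failed to migrate VM ...",
]
print(tally_migration_events(sample))
```

A high "unexpectedly MigratingTo" count with no completion, as in the log above, suggests the destination never finished taking over the VM before the engine gave up and retried elsewhere.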
Participants (3):
- Balg, Andreas
- Michal Skrivanek
- rni@chef.net