
Source: CentOS 7.2 - qemu-kvm-ev-2.3.0-31.el7.16.1
Dest: CentOS 7.3 - qemu-kvm-ev-2.6.0-28.el7_3.3.1

To be fair, I'm trying to migrate that VM away precisely so I can install updates on the source host.

2017-03-24 15:18 GMT+01:00 Michal Skrivanek <michal.skrivanek@redhat.com>:
On 24 Mar 2017, at 15:15, Davide Ferrari <davide@billymob.com> wrote:
Mmmh, this is all I got from the libvirt log on the receiver host:
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name guest=druid-co01,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-druid-co01/master-key.aes -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m size=16777216k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 4,maxcpus=64,sockets=16,cores=4,threads=1 -numa node,nodeid=0,cpus=0-3,mem=16384 -uuid 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8 -smbios 'type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-0037-4C10-8031-B7C04F564232,uuid=4f627cc1-9b52-4eef-bf3a-c02e8a6303b8' -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-5-druid-co01.billydoma/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2017-03-24T10:38:03,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/00000001-0001-0001-0001-0000000003e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/08b19faa-4b1f-4da8-87a2-2af0700a7906/bdb18a7d-1558-41f9-aa3a-e63407c7881e,format=qcow2,if=none,id=drive-virtio-disk0,serial=08b19faa-4b1f-4da8-87a2-2af0700a7906,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/rhev/data-center/00000001-0001-0001-0001-0000000003e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/987d84da-188b-45c0-99d0-3dde29ddcb6e/51a1b9ee-b0ae-4208-9806-a319d34db06e,format=qcow2,if=none,id=drive-virtio-disk1,serial=987d84da-188b-45c0-99d0-3dde29ddcb6e,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:93,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5904,addr=192.168.10.107,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on

Domain id=5 is tainted: hook-script
2017-03-24T10:38:03.414396Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
2017-03-24T10:38:03.414497Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config
2017-03-24 10:41:20.982+0000: shutting down
2017-03-24T10:41:20.986633Z qemu-kvm: load of migration failed: Input/output error
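For what it's worth, the usual way to get more detail behind an "Input/output error" like this is to raise libvirt's log level on the receiving host and retry. A minimal sketch, assuming the stock /etc/libvirt/libvirtd.conf location:

  # /etc/libvirt/libvirtd.conf on the destination host
  log_filters="1:qemu 1:libvirt"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"

  # apply the new logging config, then retry the migration
  systemctl restart libvirtd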
The donating host doesn't say a thing about this VM. There's an "input/output error" but I can't see what it relates to…
Most likely to the migration stream: either the TCP connection was cut short, or it's an internal bug. What are the versions of qemu on both ends? Host OS?
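Something along these lines, run on both ends, would show that. A sketch only; the package names assume stock oVirt/CentOS hosts:

  rpm -q qemu-kvm-ev libvirt vdsm   # package versions on this host
  cat /etc/redhat-release           # host OS release
  uname -r                          # running kernel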
Thanks, michal
2017-03-24 13:38 GMT+01:00 Francesco Romani <fromani@redhat.com>:
On 03/24/2017 11:58 AM, Davide Ferrari wrote:
And this is the vdsm log from vmhost04:
Thread-6320717::INFO::2017-03-24 11:41:13,019::migration::712::virt.vm::(monitor_migration) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 190 seconds elapsed, 98% of data processed, total data: 16456MB, processed data: 9842MB, remaining data: 386MB, transfer speed 52MBps, zero pages: 1718676MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
libvirtEventLoop::DEBUG::2017-03-24 11:41:21,007::vm::4291::virt.vm::(onLibvirtLifecycleEvent) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Suspended detail 0 opaque None
libvirtEventLoop::INFO::2017-03-24 11:41:21,025::vm::4815::virt.vm::(_logGuestCpuStatus) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::CPU stopped: onSuspend
libvirtEventLoop::DEBUG::2017-03-24 11:41:21,069::vm::4291::virt.vm::(onLibvirtLifecycleEvent) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Resumed detail 0 opaque None
libvirtEventLoop::INFO::2017-03-24 11:41:21,069::vm::4815::virt.vm::(_logGuestCpuStatus) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::CPU running: onResume
Thread-6320715::DEBUG::2017-03-24 11:41:21,224::migration::715::virt.vm::(stop) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::stopping migration monitor thread
Thread-6320715::ERROR::2017-03-24 11:41:21,225::migration::252::virt.vm::(_recover) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::operation failed: migration job: unexpectedly failed
This is surprising (no pun intended). With a pretty high chance this comes from libvirt; I'm afraid you need to dig in the libvirt logs/journal entries to learn more. Vdsm unfortunately couldn't do better than what it is already doing :\
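Concretely, something like this on the destination host. The domain log path is an assumption based on libvirt's usual /var/log/libvirt/qemu/<name>.log convention and the guest name from the command line above:

  # libvirtd journal entries around the failure window
  journalctl -u libvirtd --since "2017-03-24 11:40" --until "2017-03-24 11:42"

  # per-domain qemu log, which often carries the underlying error
  tail -n 50 /var/log/libvirt/qemu/druid-co01.log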
--
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
--
Davide Ferrari
Senior Systems Engineer