<div dir="ltr"><div><div>Source: CentOS 7.2 - qemu-kvm-ev-2.3.0-31.el7.16.1<br></div>Dest: CentOS 7.3 - qemu-kvm-ev-2.6.0-28.el7_3.3.1<br><br></div>To be fair I&#39;m trying to migrate away that VM so I can install updates on the source host.<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">2017-03-24 15:18 GMT+01:00 Michal Skrivanek <span dir="ltr">&lt;<a href="mailto:michal.skrivanek@redhat.com" target="_blank">michal.skrivanek@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><br><div><blockquote type="cite"><span class=""><div>On 24 Mar 2017, at 15:15, Davide Ferrari &lt;<a href="mailto:davide@billymob.com" target="_blank">davide@billymob.com</a>&gt; wrote:</div><br class="m_-1913954970336221303Apple-interchange-newline"></span><div><div dir="ltr"><span class=""><div><div>Mmmh this is all I got from libvirt log on receiver host:<br><br>LC_ALL=C PATH=/usr/local/sbin:/usr/<wbr>local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name guest=druid-co01,debug-<wbr>threads=on -S -object secret,id=masterKey0,format=<wbr>raw,file=/var/lib/libvirt/<wbr>qemu/domain-5-druid-co01/<wbr>master-key.aes -machine pc-i440fx-rhel7.2.0,accel=kvm,<wbr>usb=off -cpu Haswell-noTSX -m size=16777216k,slots=16,<wbr>maxmem=4294967296k -realtime mlock=off -smp 4,maxcpus=64,sockets=16,cores=<wbr>4,threads=1 -numa node,nodeid=0,cpus=0-3,mem=<wbr>16384 -uuid 4f627cc1-9b52-4eef-bf3a-<wbr>c02e8a6303b8 -smbios &#39;type=1,manufacturer=oVirt,<wbr>product=oVirt Node,version=7-2.1511.el7.<wbr>centos.2.10,serial=4C4C4544-<wbr>0037-4C10-8031-B7C04F564232,<wbr>uuid=4f627cc1-9b52-4eef-bf3a-<wbr>c02e8a6303b8&#39; -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/<wbr>var/lib/libvirt/qemu/domain-5-<wbr>druid-co01.billydoma/monitor.<wbr>sock,server,nowait -mon chardev=charmonitor,id=<wbr>monitor,mode=control -rtc base=2017-03-24T10:38:03,<wbr>driftfix=slew -global kvm-pit.lost_tick_policy=<wbr>discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.<wbr>0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=<wbr>pci.0,addr=0x7 -device virtio-serial-pci,id=virtio-<wbr>serial0,max_ports=16,bus=pci.<wbr>0,addr=0x4 -drive if=none,id=drive-ide0-1-0,<wbr>readonly=on -device ide-cd,bus=ide.1,unit=0,drive=<wbr>drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/<wbr>00000001-0001-0001-0001-<wbr>0000000003e3/ba2bd397-9222-<wbr>424d-aecc-eb652c0169d9/images/<wbr>08b19faa-4b1f-4da8-87a2-<wbr>2af0700a7906/bdb18a7d-1558-<wbr>41f9-aa3a-e63407c7881e,format=<wbr>qcow2,if=none,id=drive-virtio-<wbr>disk0,serial=08b19faa-4b1f-<wbr>4da8-87a2-2af0700a7906,cache=<wbr>none,werror=stop,rerror=stop,<wbr>aio=threads -device virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x5,drive=drive-<wbr>virtio-disk0,id=virtio-disk0,<wbr>bootindex=1 -drive file=/rhev/data-center/<wbr>00000001-0001-0001-0001-<wbr>0000000003e3/ba2bd397-9222-<wbr>424d-aecc-eb652c0169d9/images/<wbr>987d84da-188b-45c0-99d0-<wbr>3dde29ddcb6e/51a1b9ee-b0ae-<wbr>4208-9806-a319d34db06e,format=<wbr>qcow2,if=none,id=drive-virtio-<wbr>disk1,serial=987d84da-188b-<wbr>45c0-99d0-3dde29ddcb6e,cache=<wbr>none,werror=stop,rerror=stop,<wbr>aio=threads -device virtio-blk-pci,scsi=off,bus=<wbr>pci.0,addr=0x8,drive=drive-<wbr>virtio-disk1,id=virtio-disk1 -netdev tap,fd=34,id=hostnet0,vhost=<wbr>on,vhostfd=35 -device virtio-net-pci,netdev=<wbr>hostnet0,id=net0,mac=00:1a:4a:<wbr>16:01:93,bus=pci.0,addr=0x3 -chardev 
socket,id=charchannel0,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4f627cc1-9b52-4eef-bf3a-<wbr>c02e8a6303b8.com.redhat.rhevm.<wbr>vdsm,server,nowait -device virtserialport,bus=virtio-<wbr>serial0.0,nr=1,chardev=<wbr>charchannel0,id=channel0,name=<wbr>com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/<wbr>var/lib/libvirt/qemu/channels/<wbr>4f627cc1-9b52-4eef-bf3a-<wbr>c02e8a6303b8.org.qemu.guest_<wbr>agent.0,server,nowait -device virtserialport,bus=virtio-<wbr>serial0.0,nr=2,chardev=<wbr>charchannel1,id=channel1,name=<wbr>org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=<wbr>vdagent -device virtserialport,bus=virtio-<wbr>serial0.0,nr=3,chardev=<wbr>charchannel2,id=channel2,name=<wbr>com.redhat.spice.0 -spice tls-port=5904,addr=192.168.10.<wbr>107,x509-dir=/etc/pki/vdsm/<wbr>libvirt-spice,tls-channel=<wbr>default,tls-channel=main,tls-<wbr>channel=display,tls-channel=<wbr>inputs,tls-channel=cursor,tls-<wbr>channel=playback,tls-channel=<wbr>record,tls-channel=smartcard,<wbr>tls-channel=usbredir,seamless-<wbr>migration=on -device qxl-vga,id=video0,ram_size=<wbr>67108864,vram_size=8388608,<wbr>vram64_size_mb=0,vgamem_mb=16,<wbr>bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=<wbr>balloon0,bus=pci.0,addr=0x6 -msg timestamp=on<br>Domain id=5 is tainted: hook-script<br>2017-03-24T10:38:03.414396Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63<br>2017-03-24T10:38:03.414497Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config<br>2017-03-24 10:41:20.982+0000: shutting down<br>2017-03-24T10:41:20.986633Z qemu-kvm: load of migration failed: Input/output error<br><br></div>Donating host doesn&#39;t say a thing about this VM.<br></div></span>There&#39;s an &quot;input/output error&quot; but I can&#39;t see to what is related…</div></div></blockquote><div><br></div>most likely to the migration stream, either the TCP connection was cut short or internal bug</div><div>What are the version of qemu on both ends? host OS?</div><div><br></div><div>Thanks,</div><div>michal</div><div><div class="h5"><div><br><blockquote type="cite"><div><div dir="ltr"><br><br><div><div><br><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-03-24 13:38 GMT+01:00 Francesco Romani <span dir="ltr">&lt;<a href="mailto:fromani@redhat.com" target="_blank">fromani@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><br>
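Both halves of Michal's question can be checked straight from the hosts. A minimal sketch, run on each end (DEST_HOST is a placeholder for the receiving host's address; 49152 is the first port in libvirt's default migration port range):

  # versions of the virt stack and the host OS release
  rpm -q qemu-kvm-ev libvirt vdsm
  cat /etc/redhat-release

  # rough check, from the source host, that the migration TCP path is open
  nc -zv DEST_HOST 49152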
>> 2017-03-24 13:38 GMT+01:00 Francesco Romani <fromani@redhat.com>:
>>> On 03/24/2017 11:58 AM, Davide Ferrari wrote:
>>>> And this is the vdsm log from vmhost04:
>>>>
>>>> Thread-6320717::INFO::2017-03-24 11:41:13,019::migration::712::virt.vm::(monitor_migration) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 190 seconds elapsed, 98% of data processed, total data: 16456MB, processed data: 9842MB, remaining data: 386MB, transfer speed 52MBps, zero pages: 1718676MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
>>>> libvirtEventLoop::DEBUG::2017-03-24 11:41:21,007::vm::4291::virt.vm::(onLibvirtLifecycleEvent) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Suspended detail 0 opaque None
>>>> libvirtEventLoop::INFO::2017-03-24 11:41:21,025::vm::4815::virt.vm::(_logGuestCpuStatus) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::CPU stopped: onSuspend
>>>> libvirtEventLoop::DEBUG::2017-03-24 11:41:21,069::vm::4291::virt.vm::(onLibvirtLifecycleEvent) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Resumed detail 0 opaque None
>>>> libvirtEventLoop::INFO::2017-03-24 11:41:21,069::vm::4815::virt.vm::(_logGuestCpuStatus) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::CPU running: onResume
>>>> Thread-6320715::DEBUG::2017-03-24 11:41:21,224::migration::715::virt.vm::(stop) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::stopping migration monitor thread
>>>> Thread-6320715::ERROR::2017-03-24 11:41:21,225::migration::252::virt.vm::(_recover) vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::operation failed: migration job: unexpectedly failed
>>>
>>> This is surprising (no pun intended).
>>> With a pretty high chance this comes from libvirt; I'm afraid you need to dig into the libvirt logs/journal entries to learn more.
>>> Unfortunately, Vdsm couldn't do better here than what it is already doing :\
>>>
>>> --
>>> Francesco Romani
>>> Senior SW Eng., Virtualization R&D
>>> Red Hat
>>> IRC: fromani  github: @fromanirh
>>
>> --
>> Davide Ferrari
>> Senior Systems Engineer

--
Davide Ferrari
Senior Systems Engineer

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
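As a footnote to the vdsm progress lines quoted earlier (98% processed, 386MB remaining): libvirt exposes the same counters while a migration is in flight, so they can also be watched on the source host without going through vdsm, for instance:

  # run on the source host during the migration; druid-co01 is the domain name
  watch -n1 virsh domjobinfo druid-co01

domjobinfo reports data processed/remaining and related memory statistics, which is essentially what vdsm's migration monitor thread samples.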