VM won't start because of -incoming flag after upgrading to alpha-2

Hi,

After upgrading from alpha-1 to alpha-2 and restarting services, a VM won't start because of the following qemu error:

    Domain id=5 is tainted: hook-script
    2015-07-07T07:39:27.215579Z qemu-kvm: Unknown migration flags: 0
    qemu: warning: error while loading state section id 2
    2015-07-07T07:39:27.215681Z qemu-kvm: load of migration failed: Invalid argument

The reason is that Engine was starting the VM with the -incoming flag (for "incoming migrations").

1) Why did Engine enable the -incoming flag?
The VM was configured with "Allow manual migration only". Maybe this occurred because I set the host in maintenance mode before upgrading.

2) How can I clean the -incoming flag in oVirt?

3) Note that the "domain is tainted" error was also freezing the monitor socket between libvirt and vdsm, requiring a libvirtd + vdsmd restart.

----- Original Message -----
From: "Christopher Pereira" <kripper@imatronix.cl>
To: devel@ovirt.org
Sent: Tuesday, July 7, 2015 10:22:20 AM
Subject: [ovirt-devel] VM won't start because of -incoming flag after upgrading to alpha-2
Hi,
After upgrading from alpha-1 to alpha-2 and restarting services, a VM won't start because of the following qemu error:
    Domain id=5 is tainted: hook-script
    2015-07-07T07:39:27.215579Z qemu-kvm: Unknown migration flags: 0
    qemu: warning: error while loading state section id 2
    2015-07-07T07:39:27.215681Z qemu-kvm: load of migration failed: Invalid argument

The reason is that Engine was starting the VM with the -incoming flag (for "incoming migrations").
1) Why did engine enable the -incoming flag?
It should not; it should be added by VDSM only when it creates a migration destination VM.
VM was configured with "Allow manual migration only". Maybe this occurred because I set the host in maintenance mode before upgrading.
2) How can I clean the -incoming flag in oVirt?
Short answer: you cannot, but it should not be added randomly in the first place.
3) Note that the "domain is tainted" error was also freezing the monitor socket between libvirt and vdsm, requiring a libvirtd + vdsmd restart.
Could be a QEMU issue. BTW, can you please share the VDSM log which includes the misbehaviour?

Thanks,
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328 IRC: fromani
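As a quick way to tell whether a given start went through the incoming-migration path, the qemu command line recorded in the libvirt log (normally /var/log/libvirt/qemu/<vm>.log) can be grepped for the flag. A minimal sketch, not from the thread; the sample command line below is abbreviated from the logs posted later in this discussion:

```shell
# Sketch: extract the -incoming argument from a recorded qemu command line.
# On a real host you would feed in /var/log/libvirt/qemu/<vm>.log instead
# of this abbreviated sample line.
cmdline='/usr/libexec/qemu-kvm -name test-vm -S -vnc 0:2 -incoming fd:26 -msg timestamp=on'
echo "$cmdline" | grep -o -- '-incoming [^ ]*'
```

If the grep prints something like "-incoming fd:26", the VM was created as a migration/resume destination; if it prints nothing, it was a fresh boot.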

On 07/07/15 12:50, Christopher Pereira wrote:
On 07-07-2015 6:04, Francesco Romani wrote:
It should not; it should be added by VDSM only when it creates a migration destination VM.
Well, something triggered it. I will send you the logs right now.
According to libvirt docs, this is an internal flag which is not exposed to the user: http://wiki.libvirt.org/page/QEMUSwitchToLibvirt#-incoming
Can you please provide the libvirt log?

On 07-07-2015 6:59, Doron Fediuck wrote:
On 07/07/15 12:50, Christopher Pereira wrote:
Well, something triggered it. I will send you the logs right now.
According to libvirt docs, this is an internal flag which is not exposed to the user: http://wiki.libvirt.org/page/QEMUSwitchToLibvirt#-incoming
Can you please provide the libvirt log?
These are the libvirt/qemu logs. BTW, I executed a "virsh dumpxml" command before shutting down the services and there is no "-incoming" flag in the dump.

[...]
((null):28066): Spice-Warning **: reds.c:2824:reds_handle_ssl_accept: SSL_accept failed, error=5
((null):28066): Spice-Warning **: reds.c:2824:reds_handle_ssl_accept: SSL_accept failed, error=5
((null):28066): Spice-Warning **: reds.c:2824:reds_handle_ssl_accept: SSL_accept failed, error=5
2015-07-07 05:09:56.817+0000: shutting down
qemu: terminating on signal 15 from pid 2084

--- Starting via Engine ---
2015-07-07 05:31:32.792+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name test-vm -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m 4096 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid 8c9437e4-5514-4c85-a52b-da33d9ab6061 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=32393735-3733-5355-4532-303957525946,uuid=8c9437e4-5514-4c85-a52b-da33d9ab6061 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/test-vm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-07-07T02:31:32,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/1963d7ab-a65b-4a70-8749-f45aceba2393/ebd94ac1-84df-47da-be87-ca49f7bffdcf/images/7fba2829-772b-43e0-9d47-0b164b2ac975/7b2102e5-5f97-4185-9c71-618187c6dee9,if=none,id=drive-virtio-disk0,format=qcow2,serial=7fba2829-772b-43e0-9d47-0b164b2ac975,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:54,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/8c9437e4-5514-4c85-a52b-da33d9ab6061.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/8c9437e4-5514-4c85-a52b-da33d9ab6061.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,tls-port=5901,addr=0,disable-ticketing,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on -vnc 0:2 -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vgamem_mb=16,bus=pci.0,addr=0x2 -incoming fd:26 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
Domain id=9 is tainted: hook-script
2015-07-07T05:31:39.798212Z qemu-kvm: Unknown migration flags: 0
qemu: warning: error while loading state section id 2
2015-07-07T05:31:39.798293Z qemu-kvm: load of migration failed: Invalid argument
2015-07-07 05:47:55.386+0000: shutting down
[...]
--- Starting via virsh define + start (from a dump) --- 2015-07-07 07:51:53.266+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name test-vm -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m 4096 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid 8c9437e4-5514-4c85-a52b-da33d9ab6061 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=32393735-3733-5355-4532-303957525946,uuid=8c9437e4-5514-4c85-a52b-da33d9ab6061 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/test-vm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-07-07T04:51:53,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/1963d7ab-a65b-4a70-8749-f45aceba2393/ebd94ac1-84df-47da-be87-ca49f7bffdcf/images/7fba2829-772b-43e0-9d47-0b164b2ac975/7b2102e5-5f97-4185-9c71-618187c6dee9,if=none,id=drive-virtio-disk0,format=qcow2,serial=7fba2829-772b-43e0-9d47-0b164b2ac975,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:54,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/8c9437e4-5514-4c85-a52b-da33d9ab6061.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/8c9437e4-5514-4c85-a52b-da33d9ab6061.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5901,tls-port=5902,addr=0,disable-ticketing,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on -vnc 0:3 -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vgamem_mb=16,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on

----- Original Message -----
From: "Christopher Pereira" <kripper@imatronix.cl>
To: "Doron Fediuck" <dfediuck@redhat.com>, "Francesco Romani" <fromani@redhat.com>
Cc: devel@ovirt.org, "Roy Golan" <rgolan@redhat.com>
Sent: Tuesday, July 7, 2015 12:07:20 PM
Subject: Re: [ovirt-devel] VM won't start because of -incoming flag after upgrading to alpha-2
On 07-07-2015 6:59, Doron Fediuck wrote:
On 07/07/15 12:50, Christopher Pereira wrote:
Well, something triggered it. I will send you the logs right now.
According to libvirt docs, this is an internal flag which is not exposed to the user: http://wiki.libvirt.org/page/QEMUSwitchToLibvirt#-incoming
Can you please provide the libvirt log?
2015-07-07 05:31:32.792+0000: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name test-vm -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m 4096 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid 8c9437e4-5514-4c85-a52b-da33d9ab6061 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-1.1503.el7.centos.2.8,serial=32393735-3733-5355-4532-303957525946,uuid=8c9437e4-5514-4c85-a52b-da33d9ab6061 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/test-vm.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2015-07-07T02:31:32,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/1963d7ab-a65b-4a70-8749-f45aceba2393/ebd94ac1-84df-47da-be87-ca49f7bffdcf/images/7fba2829-772b-43e0-9d47-0b164b2ac975/7b2102e5-5f97-4185-9c71-618187c6dee9,if=none,id=drive-virtio-disk0,format=qcow2,serial=7fba2829-772b-43e0-9d47-0b164b2ac975,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:54,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/8c9437e4-5514-4c85-a52b-da33d9ab6061.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/8c9437e4-5514-4c85-a52b-da33d9ab6061.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5900,tls-port=5901,addr=0,disable-ticketing,x509-dir=/etc/pki/vdsm/libvirt-spice,seamless-migration=on -vnc 0:2 -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vgamem_mb=16,bus=pci.0,addr=0x2 -incoming fd:26
This means the VM is "migrating from file", aka resuming after a suspension. Any chance the VM was suspended while running some QEMU version and resumed using a different QEMU version?
--- Starting via virsh define + start (from a dump) --- [snip]
OK, so new boot and no resume, this explains why it starts OK.

Bests,
-- Francesco Romani
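For readers unfamiliar with the "migrating from file" path Francesco mentions: at the plain-libvirt level it corresponds to the save/restore cycle sketched below. This is a generic illustration rather than a command sequence taken from the thread; the domain name test-vm and the state-file path are assumptions, and the commands are printed as a dry run rather than executed:

```shell
# Dry-run sketch of the suspend/resume flow that makes qemu start with
# -incoming on the resuming side. Remove the "echo" prefixes to run it
# for real against an actual libvirt domain.
vm=test-vm                         # assumed domain name
state=/var/tmp/$vm.state           # assumed state-file location
echo virsh save "$vm" "$state"     # suspend: serialize RAM + device state to file
echo virsh restore "$state"        # resume: libvirt starts qemu with -incoming
```

A "virsh define + start", by contrast, skips the restore step entirely, which is why the second log above shows no -incoming argument.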

On 07-07-2015 8:16, Francesco Romani wrote:
This means the VM is "migrating from file", aka resuming after a suspension. Any chance the VM was suspended while running some QEMU version and resumed using a different QEMU version?
Yes, the VM was suspended before updating and restarting services. It probably failed the first time I tried to resume, and for some reason the memory snapshot is not available any more.

----- Original Message -----
From: "Christopher Pereira" <kripper@imatronix.cl>
To: "Francesco Romani" <fromani@redhat.com>
Cc: "Doron Fediuck" <dfediuck@redhat.com>, devel@ovirt.org, "Roy Golan" <rgolan@redhat.com>
Sent: Tuesday, July 7, 2015 1:32:40 PM
Subject: Re: [ovirt-devel] VM won't start because of -incoming flag after upgrading to alpha-2
On 07-07-2015 8:16, Francesco Romani wrote:
This means the VM is "migrating from file", aka resuming after a suspension. Any chance the VM was suspended while running some QEMU version and resumed using a different QEMU version?
Yes, the VM was suspended before updating and restarting services. It probably failed the first time I tried to resume, and for some reason the memory snapshot is not available any more.
OK, then the scenarios are the following:

If you suspended on qemu-kvm-ev version X and resumed on qemu-kvm-ev version Y (Y > X), then it should be supported, so please file a bug against qemu-kvm-ev. Same goes for plain qemu.

But if you suspended on qemu and resumed on qemu-kvm-ev, or vice versa, then I'm not sure this flow is supported (AFAIR, it isn't).

-- Francesco Romani

On 07-07-2015 8:36, Francesco Romani wrote:
----- Original Message -----
From: "Christopher Pereira" <kripper@imatronix.cl>
To: "Francesco Romani" <fromani@redhat.com>
Cc: "Doron Fediuck" <dfediuck@redhat.com>, devel@ovirt.org, "Roy Golan" <rgolan@redhat.com>
Sent: Tuesday, July 7, 2015 1:32:40 PM
Subject: Re: [ovirt-devel] VM won't start because of -incoming flag after upgrading to alpha-2
On 07-07-2015 8:16, Francesco Romani wrote:
This means the VM is "migrating from file", aka resuming after a suspension. Any chance the VM was suspended while running some QEMU version and resumed using a different QEMU version?
Yes, the VM was suspended before updating and restarting services. It probably failed the first time I tried to resume and for some reason, the memory snapshot is not available any more.
OK, then the scenarios are the following:
If you suspended on qemu-kvm-ev version X and resumed on qemu-kvm-ev version Y (Y > X), then it should be supported, so please file a bug against qemu-kvm-ev
Same goes for plain qemu.
but if you suspended on qemu and resumed on qemu-kvm-ev, or vice versa, then I'm not sure this flow is supported (AFAIR, it isn't).
'qemu-kvm-ev-2.1.2-23.el7_1.3.1.x86_64' was being used for both suspending and resuming. BZ created here: https://bugzilla.redhat.com/show_bug.cgi?id=1240649

On 07/07/15 13:36, Francesco Romani wrote:
but if you suspended on qemu and resumed on qemu-kvm-ev, or vice versa, then I'm not sure this flow is supported (AFAIR, it isn't).
I wonder why that should not be supported; AFAIK the only difference between qemu-kvm and qemu-kvm-ev is the enabled flag to allow live migrations. I can't see why this would introduce a "-incoming" flag on the qemu command line.

Is there a technical reason why this is not supported? Or is this just a "you moved from binary name a to b, so it's not supported" case, no matter the almost 100% identical content of those binaries?

--
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +495772 293100 F: +495772 293333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen

On Jul 8, 2015, at 09:08, Sven Kieske <S.Kieske@mittwald.de> wrote:
On 07/07/15 13:36, Francesco Romani wrote:
but if you suspended on qemu and resumed on qemu-kvm-ev, or vice versa, then I'm not sure this flow is supported (AFAIR, it isn't).
I wonder why that should not be supported, afaik the only difference between qemu-kvm and qemu-kvm-ev is the enabled flag to allow live migrations.
I can't see why this would introduce a flag of "-incoming" on the qemu command line.
I don't think any wrong version would introduce such a thing. -incoming is used internally for all incoming migrations, and that includes a resume from suspended state, which is the case here. It's just that the resume is failing, as the stored state file is probably corrupted.
is there a technical reason why this is not supported? or is this just a "you moved from binary name a to b, so it's not supported" case, no matter the almost 100% exact same content within those binarys?
When on the same version: mostly as you say, yes. But starting with RHEL 7 you may have noticed a different major version of QEMU in base RHEL/CentOS versus what we deliver as part of qemu-kvm-ev. The latter is closer to the latest Fedora versions, yet significantly different.

On Jul 8, 2015, at 09:22, Michal Skrivanek <michal.skrivanek@redhat.com> wrote:
On Jul 8, 2015, at 09:08, Sven Kieske <S.Kieske@mittwald.de> wrote:
On 07/07/15 13:36, Francesco Romani wrote:
but if you suspended on qemu and resumed on qemu-kvm-ev, or vice versa, then I'm not sure this flow is supported (AFAIR, it isn't).
I wonder why that should not be supported, afaik the only difference between qemu-kvm and qemu-kvm-ev is the enabled flag to allow live migrations.
I can't see why this would introduce a flag of "-incoming" on the qemu command line.
I don't think any wrong version would introduce such a thing. -incoming is used internally for all incoming migrations, and that includes a resume from suspended state, which is the case here. It's just that the resume is failing, as the stored state file is probably corrupted.
well, or a plain backend bug, that's always possible:)

----- Original Message -----
From: "Sven Kieske" <s.kieske@mittwald.de>
To: devel@ovirt.org
Sent: Wednesday, July 8, 2015 9:08:47 AM
Subject: Re: [ovirt-devel] [!!Mass Mail]Re: VM won't start because of -incoming flag after upgrading to alpha-2
On 07/07/15 13:36, Francesco Romani wrote:
but if you suspended on qemu and resumed on qemu-kvm-ev, or vice versa, then I'm not sure this flow is supported (AFAIR, it isn't).
I wonder why that should not be supported, afaik the only difference between qemu-kvm and qemu-kvm-ev is the enabled flag to allow live migrations.
I can't see why this would introduce a flag of "-incoming" on the qemu command line.
The -incoming flag is there to support incoming migrations, either from file (aka resume from suspension) or from another qemu on another host. So it shouldn't be affected by the qemu vs qemu-kvm-ev split. What I meant is that...
is there a technical reason why this is not supported? or is this just a "you moved from binary name a to b, so it's not supported" case, no matter the almost 100% exact same content within those binarys?
... AFAIR, in the patchset which is part of qemu-kvm-ev, a lot of devices are disabled for security/safety/auditing reasons, and new, stable machine types are added (rhel*).

When QEMU migrates, whether to another qemu or to file, among other things it needs to freeze and serialize device state. The format of this device state can change across versions, even though it usually changes in a forward-compatible way.

So, the problem *could* be that the resuming QEMU doesn't know how to handle some device, or cannot understand the stored format. This is the reason why *I believe* this flow is not supported. But of course the last word is on the QEMU(-kvm[-ev]) devs.

-- Francesco Romani
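The failure mode Francesco describes can be pictured with a toy model (this is purely illustrative, NOT the real QEMU migration format): each saved-state "section" carries a flags word, and a strict loader that does not recognize the flag rejects the whole stream, which is the shape of the "Unknown migration flags ... load of migration failed" error quoted earlier in the thread:

```shell
# Toy illustration of why a reader can reject state written by a
# different build: the writer emits a flags word the reader has never
# heard of, and a strict loader aborts instead of guessing.
known_flags="0x1 0x2"                  # flags this "older" loader understands
section_flags=0x4                      # flag emitted by a "newer" writer
if echo "$known_flags" | grep -qw "$section_flags"; then
    echo "section loaded"
else
    echo "Unknown migration flags: $section_flags - load of migration failed"
fi
```

The flag values here are invented for the example; the real qemu error in this thread ("Unknown migration flags: 0") comes from QEMU's own loader, not from anything like this script.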

On 08-07-2015 4:26, Francesco Romani wrote:
... AFAIR, in the patchset which is part of qemu-kvm-ev, a lot of devices are disabled for security/safety/auditing reasons, and new, stable machines are added (rhel*).
When QEMU migrates, again to another qemu or to file, among other things it needs to freeze and serialize device state. The format of this device state can change across versions, even though it usually changes in forward compatible way.
So, the problem *could* be that the resuming QEMU doesn't know how to handle some device, or cannot understand the stored format.
This is the reason why *I believe* this flow is not supported. But of course the last word is on the QEMU(-kvm[-ev]) devs.
Any chance that other upgrades [1] (i.e. device-mapper) could invalidate the state file?

Now, how can I tell Engine or VDSM to discard the state file (so it does not try to read the invalid state file during the next startup) and recover this VM?

[1]: This is the list of updated packages:

Jul 07 00:11:31 Installed: ruby-libs-2.0.0.598-25.el7_1.x86_64
Jul 07 00:11:31 Updated: file-libs-5.11-22.el7.x86_64
Jul 07 00:11:31 Updated: file-5.11-22.el7.x86_64
Jul 07 00:11:31 Updated: 1:libguestfs-1.28.1-1.28.el7.x86_64
Jul 07 00:11:32 Updated: 1:libguestfs-tools-c-1.28.1-1.28.el7.x86_64
Jul 07 00:11:32 Installed: libyaml-0.1.4-11.el7_0.x86_64
Jul 07 00:11:32 Installed: rubygem-psych-2.0.0-25.el7_1.x86_64
Jul 07 00:11:32 Installed: rubygem-io-console-0.4.2-25.el7_1.x86_64
Jul 07 00:11:32 Installed: ruby-irb-2.0.0.598-25.el7_1.noarch
Jul 07 00:11:32 Installed: ruby-2.0.0.598-25.el7_1.x86_64
Jul 07 00:11:32 Installed: rubygem-bigdecimal-1.2.0-25.el7_1.x86_64
Jul 07 00:11:32 Installed: rubygem-json-1.7.7-25.el7_1.x86_64
Jul 07 00:11:32 Installed: rubygems-2.0.14-25.el7_1.noarch
Jul 07 00:11:32 Installed: rubygem-rdoc-4.0.0-25.el7_1.noarch
Jul 07 00:11:32 Installed: unzip-6.0-15.el7.x86_64
Jul 07 00:11:32 Installed: perl-XML-Parser-2.41-10.el7.x86_64
Jul 07 00:11:32 Installed: perl-XML-XPath-1.13-22.el7.noarch
Jul 07 00:11:32 Installed: 1:perl-Sys-Guestfs-1.28.1-1.28.el7.x86_64
Jul 07 00:11:33 Installed: 1:virt-v2v-1.28.1-1.28.el7.x86_64
Jul 07 00:11:33 Installed: 1:ruby-libguestfs-1.28.1-1.28.el7.x86_64
Jul 07 00:11:33 Installed: libguestfs-winsupport-7.1-4.el7.x86_64
Jul 07 00:11:33 Updated: python-magic-5.11-22.el7.noarch
Jul 07 02:14:57 Updated: 1:openssl-libs-1.0.1e-42.el7.9.x86_64
Jul 07 02:14:58 Updated: glusterfs-libs-3.7.2-3.el7.x86_64
Jul 07 02:14:58 Updated: glusterfs-3.7.2-3.el7.x86_64
Jul 07 02:14:58 Updated: systemd-libs-208-20.el7_1.5.x86_64
Jul 07 02:15:00 Updated: systemd-208-20.el7_1.5.x86_64
Jul 07 02:15:00 Updated: trousers-0.3.11.2-4.el7_1.x86_64
Jul 07 02:15:00 Updated: dracut-033-241.el7_1.3.x86_64
Jul 07 02:15:00 Updated: glusterfs-client-xlators-3.7.2-3.el7.x86_64
Jul 07 02:15:00 Updated: nss-util-3.19.1-1.el7_1.x86_64
Jul 07 02:15:00 Updated: glusterfs-fuse-3.7.2-3.el7.x86_64
Jul 07 02:15:00 Updated: glusterfs-api-3.7.2-3.el7.x86_64
Jul 07 02:15:00 Updated: gnutls-3.3.8-12.el7_1.1.x86_64
Jul 07 02:15:00 Updated: 7:device-mapper-libs-1.02.93-3.el7_1.1.x86_64
Jul 07 02:15:00 Updated: 7:device-mapper-1.02.93-3.el7_1.1.x86_64
Jul 07 02:15:00 Updated: 7:device-mapper-event-libs-1.02.93-3.el7_1.1.x86_64
Jul 07 02:15:00 Updated: glusterfs-cli-3.7.2-3.el7.x86_64
Jul 07 02:15:01 Updated: 7:device-mapper-event-1.02.93-3.el7_1.1.x86_64
Jul 07 02:15:01 Updated: 7:lvm2-libs-2.02.115-3.el7_1.1.x86_64
Jul 07 02:15:01 Updated: 7:lvm2-2.02.115-3.el7_1.1.x86_64
Jul 07 02:15:01 Updated: gnutls-dane-3.3.8-12.el7_1.1.x86_64
Jul 07 02:15:01 Updated: gnutls-utils-3.3.8-12.el7_1.1.x86_64
Jul 07 02:15:01 Updated: nss-3.19.1-3.el7_1.x86_64
Jul 07 02:15:01 Updated: nss-sysinit-3.19.1-3.el7_1.x86_64
Jul 07 02:15:07 Installed: kernel-3.10.0-229.7.2.el7.x86_64
Jul 07 02:15:07 Updated: iputils-20121221-6.el7_1.1.x86_64
Jul 07 02:15:07 Updated: ntpdate-4.2.6p5-19.el7.centos.1.x86_64
Jul 07 02:15:07 Updated: ntp-4.2.6p5-19.el7.centos.1.x86_64
Jul 07 02:15:09 Updated: python-libs-2.7.5-18.el7_1.1.x86_64
Jul 07 02:15:09 Updated: python-2.7.5-18.el7_1.1.x86_64
Jul 07 02:15:09 Updated: fence-agents-common-4.0.11-13.el7_1.x86_64
Jul 07 02:15:09 Updated: otopi-1.4.0-0.0.master.20150625083848.gite93fa23.el7.noarch
Jul 07 02:15:16 Updated: glusterfs-server-3.7.2-3.el7.x86_64
Jul 07 02:15:16 Updated: vdsm-infra-4.17.0-1054.git562e711.el7.noarch
Jul 07 02:15:16 Updated: vdsm-python-4.17.0-1054.git562e711.el7.noarch
Jul 07 02:15:16 Updated: vdsm-xmlrpc-4.17.0-1054.git562e711.el7.noarch
Jul 07 02:15:16 Updated: vdsm-cli-4.17.0-1054.git562e711.el7.noarch
Jul 07 02:15:18 Updated: glusterfs-geo-replication-3.7.2-3.el7.x86_64
Jul 07 02:15:19 Updated: ovirt-host-deploy-1.4.0-0.0.master.20150617062825.git06a8f80.el7.noarch
Jul 07 02:15:19 Updated: fence-agents-ipdu-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-eps-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-apc-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-ilo-ssh-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-bladecenter-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-cisco-ucs-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-ilo2-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-apc-snmp-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-vmware-soap-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-scsi-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-eaton-snmp-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-wti-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-ibmblade-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Installed: fence-agents-compute-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-intelmodular-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-hpblade-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-ipmilan-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-rhevm-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-ifmib-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-ilo-mp-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-drac5-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-brocade-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-rsb-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-kdump-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-cisco-mds-4.0.11-13.el7_1.x86_64
Jul 07 02:15:19 Updated: fence-agents-all-4.0.11-13.el7_1.x86_64
Jul 07 02:15:46 Installed: ovirt-vmconsole-1.0.0-0.0.master.20150616120945.gitc1fb2bd.el7.noarch
Jul 07 02:15:46 Updated: sos-3.2-15.el7.centos.5.noarch
Jul 07 02:15:46 Updated: ovirt-engine-sdk-python-3.6.0.0-0.15.20150625.gitfc90daf.el7.centos.noarch
Jul 07 02:15:46 Updated: mom-0.4.5-2.el7.noarch
Jul 07 02:15:46 Updated: vdsm-yajsonrpc-4.17.0-1054.git562e711.el7.noarch
Jul 07 02:15:46 Updated: vdsm-jsonrpc-4.17.0-1054.git562e711.el7.noarch
Jul 07 02:15:47 Updated: 1:openssl-1.0.1e-42.el7.9.x86_64
Jul 07 02:15:47 Updated: kernel-tools-libs-3.10.0-229.7.2.el7.x86_64
Jul 07 02:15:47 Updated: selinux-policy-3.13.1-23.el7_1.8.noarch
Jul 07 02:16:01 Updated: selinux-policy-targeted-3.13.1-23.el7_1.8.noarch
Jul 07 02:16:01 Updated: vdsm-4.17.0-1054.git562e711.el7.noarch
Jul 07 02:16:01 Updated: vdsm-gluster-4.17.0-1054.git562e711.el7.noarch
Jul 07 02:16:01 Updated: ovirt-hosted-engine-ha-1.3.0-0.0.master.20150615153650.20150615153645.git5f8c290.el7.noarch
Jul 07 02:16:02 Updated: tzdata-java-2015e-1.el7.noarch
Jul 07 02:16:06 Updated: 1:java-1.7.0-openjdk-headless-1.7.0.79-2.5.5.2.el7_1.x86_64
Jul 07 02:16:06 Updated: 1:java-1.7.0-openjdk-1.7.0.79-2.5.5.2.el7_1.x86_64
Jul 07 02:16:07 Updated: ovirt-hosted-engine-setup-1.3.0-0.0.master.20150623153111.git68138d4.el7.noarch
Jul 07 02:16:07 Updated: kernel-tools-3.10.0-229.7.2.el7.x86_64
Jul 07 02:16:07 Updated: systemd-sysv-208-20.el7_1.5.x86_64
Jul 07 02:16:07 Updated: dracut-network-033-241.el7_1.3.x86_64
Jul 07 02:16:07 Updated: nss-tools-3.19.1-3.el7_1.x86_64
Jul 07 02:16:07 Updated: dracut-config-rescue-033-241.el7_1.3.x86_64
Jul 07 02:16:07 Updated: libgudev1-208-20.el7_1.5.x86_64
Jul 07 02:16:07 Updated: mdadm-3.3.2-2.el7_1.1.x86_64
Jul 07 02:16:07 Updated: glusterfs-rdma-3.7.2-3.el7.x86_64
Jul 07 02:16:08 Updated: tzdata-2015e-1.el7.noarch
Jul 07 04:23:21 Installed: libvirt-daemon-driver-lxc-1.2.8-16.el7_1.3.x86_64
Jul 07 06:06:50 Installed: strace-4.8-7.el7.x86_64
Jul 07 07:44:59 Installed: iotop-0.6-2.el7.noarch
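Francesco Romani's explanation quoted earlier in the thread, that QEMU freezes and serializes device state in a format that can change across versions, and that a resuming QEMU which cannot understand the stored format must refuse it, can be sketched with a toy versioned-record loader. This is purely illustrative: the record layout and the `dump_state`/`load_state` helpers are invented here, and QEMU's real migration stream is far more involved.

```python
import struct

# Toy versioned state record: big-endian (version, payload length) header,
# then the raw payload. Invented layout, NOT QEMU's migration stream format.
def dump_state(version: int, payload: bytes) -> bytes:
    return struct.pack(">II", version, len(payload)) + payload

def load_state(blob: bytes, max_supported: int) -> bytes:
    version, length = struct.unpack(">II", blob[:8])
    if version > max_supported:
        # A reader that does not know the writer's version has to refuse
        # the blob; in qemu this surfaces as
        # "load of migration failed: Invalid argument".
        raise ValueError("unknown state version %d" % version)
    return blob[8:8 + length]

blob = dump_state(2, b"device-regs")
assert load_state(blob, max_supported=2) == b"device-regs"  # same version: OK
```

Forward compatibility holds as long as every reader supports at least the writer's version; an older reader handed a newer blob fails, which is one plausible reading of the "Unknown migration flags" error above.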

On 08/07/15 09:26, Francesco Romani wrote:
> ... AFAIR, in the patchset which is part of qemu-kvm-ev, a lot of devices
> are disabled for security/safety/auditing reasons, and new, stable
> machines are added (rhel*).

Thanks Francesco and Michal for clearing this up. I really thought that the only technical difference was live migration support; I didn't know that there were further tweaks to qemu-kvm-ev.

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
https://www.mittwald.de
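As a practical footnote to the thread's original symptom: whether a qemu process was launched as a migration/restore destination can be read directly off its command line, which is recorded in `/var/log/libvirt/qemu/<vm>.log` and visible in `ps` output. A minimal sketch; the sample command line and the helper name `started_as_incoming` are invented for illustration, not taken from the affected host.

```python
# Sample qemu command line (illustrative, not captured from the reporter's
# host). A VM started as a migration/restore target carries "-incoming".
SAMPLE = "/usr/bin/qemu-kvm -name vm1 -m 2048 -incoming tcp:[::]:49152"

def started_as_incoming(cmdline: str) -> bool:
    """Return True if the qemu command line contains the -incoming flag."""
    return "-incoming" in cmdline.split()

print(started_as_incoming(SAMPLE))  # expected: True
```

A VM booted cold (no saved state, no migration) would show no `-incoming` argument, which is the quick way to check whether Engine/VDSM is still trying to resume from a stale state file.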
participants (5)

- Christopher Pereira
- Doron Fediuck
- Francesco Romani
- Michal Skrivanek
- Sven Kieske