Re: [Users] failed nested vm only with spice and not vnc

Hello,
I'm reviving this thread with a subject more in line with the real problem. The previous thread subject was "unable to start vm in 3.3 and f19 with gluster" and it began here on the oVirt users mailing list: http://lists.ovirt.org/pipermail/users/2013-September/016628.html

I have now updated everything to the final 3.3.3 and the problem is still there. So at the moment I have updated Fedora 19 hosts that are themselves VMs (virtual hw version 9) inside a vSphere 5.1 infrastructure. The CPU of the ESX host is an E7-4870 and the cluster in oVirt is defined as "Intel Nehalem Family".

On the oVirt host VM:

[root@ovnode01 qemu]# rpm -q libvirt qemu-kvm
libvirt-1.0.5.9-1.fc19.x86_64
qemu-kvm-1.4.2-15.fc19.x86_64

[root@ovnode01 qemu]# uname -r
3.12.9-201.fc19.x86_64

Flags from /proc/cpuinfo:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni monitor vmx ssse3 cx16 sse4_1 sse4_2 x2apic popcnt lahf_lm ida arat epb dtherm tpr_shadow vnmi ept vpid

[root@ovnode01 ~]# vdsClient -s localhost getVdsCapabilities
HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:6344c23973df'}]}
ISCSIInitiatorName = 'iqn.1994-05.com.redhat:6344c23973df'
bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '32:5c:6a:20:cd:21', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}}
bridges = {'ovirtmgmt': {'addr': '192.168.33.41', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '192.168.33.15', 'IPADDR': '192.168.33.41', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '192.168.33.15', 'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['eth0', 'vnet1'], 'stp': 'off'}, 'vlan172': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'DELAY': '0', 'DEVICE': 'vlan172', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '0.0.0.0', 'ipv6addrs': ['fe80::250:56ff:fe9f:3b86/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': ['ens256.172', 'vnet0'], 'stp': 'off'}}
clusterLevels = ['3.0', '3.1', '3.2', '3.3']
cpuCores = '4'
cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,mmx,fxsr,sse,sse2,ss,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,nopl,xtopology,tsc_reliable,nonstop_tsc,aperfmperf,pni,monitor,vmx,ssse3,cx16,sse4_1,sse4_2,x2apic,popcnt,lahf_lm,ida,arat,epb,dtherm,tpr_shadow,vnmi,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270'
cpuModel = 'Intel(R) Xeon(R) CPU E7- 4870 @ 2.40GHz'
cpuSockets = '4'
cpuSpeed = '2394.000'
cpuThreads = '4'
emulatedMachines = ['pc', 'q35', 'isapc', 'pc-0.10', 'pc-0.11', 'pc-0.12', 'pc-0.13', 'pc-0.14', 'pc-0.15', 'pc-1.0', 'pc-1.1', 'pc-1.2', 'pc-1.3', 'none']
guestOverhead = '65'
hooks = {}
kvmEnabled = 'true'
lastClient = '127.0.0.1'
lastClientIface = 'lo'
management_ip = '0.0.0.0'
memSize = '16050'
netConfigDirty = 'False'
networks = {'ovirtmgmt': {'addr': '192.168.33.41', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '192.168.33.15', 'IPADDR': '192.168.33.41', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '192.168.33.15', 'iface': 'ovirtmgmt', 'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['eth0', 'vnet1'], 'qosInbound': '', 'qosOutbound': '', 'stp': 'off'}, 'vlan172': {'addr': '', 'bridged': True, 'cfg': {'DEFROUTE': 'no', 'DELAY': '0', 'DEVICE': 'vlan172', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '0.0.0.0', 'iface': 'vlan172', 'ipv6addrs': ['fe80::250:56ff:fe9f:3b86/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': ['ens256.172', 'vnet0'], 'qosInbound': '', 'qosOutbound': '', 'stp': 'off'}}
nics = {'ens224': {'addr': '192.168.230.31', 'cfg': {'BOOTPROTO': 'static', 'DEVICE': 'ens224', 'HWADDR': '00:50:56:9F:3C:B0', 'IPADDR': '192.168.230.31', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'TYPE': 'Ethernet'}, 'hwaddr': '00:50:56:9f:3c:b0', 'ipv6addrs': ['fe80::250:56ff:fe9f:3cb0/64'], 'mtu': '1500', 'netmask': '255.255.255.0', 'speed': 10000}, 'ens256': {'addr': '', 'cfg': {'DEVICE': 'ens256', 'HWADDR': '00:50:56:9f:3b:86', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': '00:50:56:9f:3b:86', 'ipv6addrs': ['fe80::250:56ff:fe9f:3b86/64'], 'mtu': '1500', 'netmask': '', 'speed': 10000}, 'eth0': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'eth0', 'HWADDR': '00:50:56:9f:68:6b', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': '00:50:56:9f:68:6b', 'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'], 'mtu': '1500', 'netmask': '', 'speed': 10000}}
operatingSystem = {'name': 'Fedora', 'release': '6', 'version': '19'}
packages2 = {'kernel': {'buildtime': 1391006675.0, 'release': '201.fc19.x86_64', 'version': '3.12.9'}, 'libvirt': {'buildtime': 1389924902, 'release': '1.fc19', 'version': '1.0.5.9'}, 'mom': {'buildtime': 1385055339, 'release': '6.fc19', 'version': '0.3.2'}, 'qemu-img': {'buildtime': 1387388596, 'release': '15.fc19', 'version': '1.4.2'}, 'qemu-kvm': {'buildtime': 1387388596, 'release': '15.fc19', 'version': '1.4.2'}, 'spice-server': {'buildtime': 1383130020, 'release': '3.fc19', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1391430691, 'release': '3.fc19', 'version': '4.13.3'}}
reservedMem = '321'
software_revision = '3'
software_version = '4.13'
supportedENGINEs = ['3.0', '3.1', '3.2', '3.3']
supportedProtocols = ['2.2', '2.3']
uuid = '421F7170-C703-34E3-9628-4588D841F8B1'
version_name = 'Snow Man'
vlans = {'ens256.172': {'addr': '', 'cfg': {'BRIDGE': 'vlan172', 'DEVICE': 'ens256.172', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'VLAN': 'yes'}, 'iface': 'ens256', 'ipv6addrs': ['fe80::250:56ff:fe9f:3b86/64'], 'mtu': '1500', 'netmask': '', 'vlanid': 172}}
vmTypes = ['kvm']
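The vmx, ept and vpid flags are visible to the L1 host above and kvmEnabled is 'true', so ESXi does seem to expose hardware virtualization to it. As a quick sanity check before blaming the display protocol (standard files on any Fedora 19 KVM host, nothing specific to this setup), one can confirm that VT-x really reaches the L1 VM and that the KVM modules are loaded:

[root@ovnode01 ~]# grep -c vmx /proc/cpuinfo    # > 0 means ESXi exposes VT-x to this L1 VM
[root@ovnode01 ~]# lsmod | grep kvm             # both kvm_intel and kvm should be loaded
[root@ovnode01 ~]# ls -l /dev/kvm               # must exist for qemu-kvm to run L2 guests accelerated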
I have a pre-booted VM that is configured with VNC. As soon as I start another VM (CentOS 6.4) defined with a SPICE console, both go into paused mode.

In the qemu log of the SPICE VM I have:

2014-02-05 08:05:45.965+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name C2prealloc -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid 1107ce34-46e6-4989-a5cf-de601ea71cae -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-6,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=1107ce34-46e6-4989-a5cf-de601ea71cae -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C2prealloc.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-02-05T08:05:45,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/mnt/glusterSD/ovnode01:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/e8a52eea-5531-4d12-8747-061c2136b6fd/14707e58-aedf-4059-a815-605a0df4b396,if=none,id=drive-virtio-disk0,format=raw,serial=e8a52eea-5531-4d12-8747-061c2136b6fd,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:19,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/1107ce34-46e6-4989-a5cf-de601ea71cae.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/1107ce34-46e6-4989-a5cf-de601ea71cae.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
KVM: unknown exit, hardware reason 3
EAX=00000037 EBX=00006e44 ECX=0000001a EDX=00000511
ESI=00000000 EDI=00006df8 EBP=00006e08 ESP=00006dd4
EIP=3ffe1464 EFL=00000017 [----APC] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
CS =0008 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA]
SS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
DS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
FS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
GS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy
GDT=     000fd3a8 00000037
IDT=     000fd3e6 00000000
CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=eb be 83 c4 08 5b 5e 5f 5d c3 89 c1 ba 11 05 00 00 eb 01 ec <49> 83 f9 ff 75 f9 c3 57 56 53 89 c3 8b b0 84 00 00 00 39 ce 77 1e 89 d7 0f b7 80 8c 00 00

In /var/log/messages I get, when I start the SPICE VM:

Feb 5 09:05:46 ovnode01 vdsm vm.Vm WARNING vmId=`1107ce34-46e6-4989-a5cf-de601ea71cae`::_readPauseCode unsupported by libvirt vm

In the VNC VM's qemu.log, from when I started it yesterday:

2014-02-04 23:56:48.635+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name C6 -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-6,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-02-04T23:56:48,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/mnt/glusterSD/ovnode01:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

and then, when I start the SPICE one:

KVM: unknown exit, hardware reason 3
EAX=00000011 EBX=0000ffea ECX=00000000 EDX=000fc5b9
ESI=000d7c2a EDI=00000000 EBP=00000000 ESP=00006f80
EIP=0000c489 EFL=00000006 [-----P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 00000000 ffffffff 00809300 DPL=0 DS16 [-WA]
CS =f000 000f0000 ffffffff 00809b00 DPL=0 CS16 [-RA]
SS =0000 00000000 ffffffff 00809300 DPL=0 DS16 [-WA]
DS =0000 00000000 ffffffff 00809300 DPL=0 DS16 [-WA]
FS =0000 00000000 ffffffff 00809300 DPL=0 DS16 [-WA]
GS =0000 00000000 ffffffff 00809300 DPL=0 DS16 [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy
GDT=     000fd3a8 00000037
IDT=     000fd3e6 00000000
CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=01 1e e0 d3 2e 0f 01 16 a0 d3 0f 20 c0 66 83 c8 01 0f 22 c0 <66> ea 91 c4 0f 00 08 00 b8 10 00 00 00 8e d8 8e c0 8e d0 8e e0 8e e8 89 c8 ff e2 89 c1 b8

Thanks in advance,
Gianluca

On Thu, Oct 3, 2013 at 2:54 PM, Itamar Heim <iheim@redhat.com> wrote:
On 10/03/2013 01:21 AM, Gianluca Cecchi wrote:
On Wed, Oct 2, 2013 at 9:16 PM, Itamar Heim wrote:
On 10/02/2013 12:57 AM, Gianluca Cecchi wrote:
Today I was able to work again on this matter and it seems related to SPICE. Every time I start the VM (that is defined with SPICE) it goes into paused state.
and this doesn't happen if the VM is defined with vnc?
No, I reproduced it both from oVirt and through virsh. With SPICE defined in the boot options or in the XML (for virsh), the VM remains in paused state and after a few minutes the node seems to hang... With VNC the VM goes into running state. I'm going to put the same config on 2 physical nodes with only local storage, see what happens, and report back...
Gianluca
Adding the spice-devel mailing list, as the VM only hangs if started with SPICE and not with VNC, from virsh as well.
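Just to spell out what differs between the two cases (a hedged sketch, not taken from the actual domain XML in this thread): in the qemu command lines above and below, the SPICE guest runs with -vga qxl while the VNC guest runs with -vga cirrus, so the console protocol and the emulated video card change together. On a locally defined virsh test domain the two variables can be separated by editing only these stanzas, e.g. trying VNC together with qxl to see whether it is qxl or the SPICE channels that trigger the KVM exit:

<!-- VNC console + cirrus adapter: the combination that keeps running -->
<graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
<video>
  <model type='cirrus' vram='9216' heads='1'/>
</video>

<!-- SPICE console + qxl adapter: the combination that ends up paused -->
<graphics type='spice' autoport='yes' keymap='en-us'/>
<video>
  <model type='qxl' ram='65536' vram='65536' heads='1'/>
</video>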

I replicated the problem with the same environment but attached to an iSCSI storage domain, so the Gluster part is not involved. As soon as I Run Once a VM on the host, the VM goes into paused state, and in the host messages I see:

Feb 5 19:22:45 localhost kernel: [16851.192234] cgroup: libvirtd (1460) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
Feb 5 19:22:45 localhost kernel: [16851.192240] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
Feb 5 19:22:46 localhost kernel: [16851.228204] device vnet0 entered promiscuous mode
Feb 5 19:22:46 localhost kernel: [16851.236198] ovirtmgmt: port 2(vnet0) entered forwarding state
Feb 5 19:22:46 localhost kernel: [16851.236208] ovirtmgmt: port 2(vnet0) entered forwarding state
Feb 5 19:22:46 localhost kernel: [16851.591058] qemu-system-x86: sending ioctl 5326 to a partition!
Feb 5 19:22:46 localhost kernel: [16851.591074] qemu-system-x86: sending ioctl 80200204 to a partition!
Feb 5 19:22:46 localhost vdsm vm.Vm WARNING vmId=`7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0`::_readPauseCode unsupported by libvirt vm
Feb 5 19:22:47 localhost avahi-daemon[449]: Registering new address record for fe80

And in qemu.log for the VM:

2014-02-05 18:22:46.280+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name c6i -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid 7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-6,serial=421F4B4A-9A4C-A7C4-54E5-847BF1ADE1A5,uuid=7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6i.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-02-05T18:22:45,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/rhev/data-center/mnt/blockSD/f741671e-6480-4d7b-b357-8cf6e8d2c0f1/images/0912658d-1541-4d56-945b-112b0b074d29/67aaf7db-4d1c-42bd-a1b0-988d95c5d5d2,if=none,id=drive-virtio-disk0,format=qcow2,serial=0912658d-1541-4d56-945b-112b0b074d29,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:17,bus=pci.0,addr=0x3,bootindex=3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=33554432 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
KVM: unknown exit, hardware reason 3
EAX=00000094 EBX=00006e44 ECX=0000000e EDX=00000511
ESI=00000002 EDI=00006df8 EBP=00006e08 ESP=00006dd4
EIP=3ffe1464 EFL=00000017 [----APC] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
CS =0008 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA]
SS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
DS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
FS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
GS =0010 00000000 ffffffff 00c09300 DPL=0 DS   [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy
GDT=     000fd3a8 00000037
IDT=     000fd3e6 00000000
CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=eb be 83 c4 08 5b 5e 5f 5d c3 89 c1 ba 11 05 00 00 eb 01 ec <49> 83 f9 ff 75 f9 c3 57 56 53 89 c3 8b b0 84 00 00 00 39 ce 77 1e 89 d7 0f b7 80 8c 00 00
main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 1.710000 ms, bitrate 54169862 bps (51.660406 Mbps)
red_dispatcher_set_cursor_peer:
inputs_connect: inputs channel client create
red_channel_client_disconnect: rcc=0x7f3179129010 (channel=0x7f312021f360 type=2 id=0)
red_channel_client_disconnect: rcc=0x7f312026c5f0 (channel=0x7f312021f920 type=4 id=0)
red_channel_client_disconnect: rcc=0x7f318e6b6220 (channel=0x7f318e4150d0 type=3 id=0)
red_channel_client_disconnect: rcc=0x7f318e687340 (channel=0x7f318e409ef0 type=1 id=0)
main_channel_client_on_disconnect: rcc=0x7f318e687340
red_client_destroy: destroy client 0x7f318e68d110 with #channels=4
red_dispatcher_disconnect_cursor_peer:
red_dispatcher_disconnect_display_peer:

vdsm and supervdsm logs here:
https://drive.google.com/file/d/0BwoPbcrMv8mvSEdPdTZVaTZVc1E/edit?usp=sharin...
https://drive.google.com/file/d/0BwoPbcrMv8mvMUxBaDgwSG1EY28/edit?usp=sharin...

Gianluca
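Since vdsm only logs "_readPauseCode unsupported by libvirt vm", a hedged way to cross-check what libvirt itself records for the paused domain (run read-only so it does not disturb vdsm; the domain name c6i is taken from the qemu command line above) would be:

[root@localhost ~]# virsh -r list --all
[root@localhost ~]# virsh -r domstate c6i --reason

domstate --reason prints something like "paused (unknown)" or "paused (I/O error)", which at least narrows down whether libvirt saw a storage error or an unexplained stop coming from qemu.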

On 02/05/2014 08:37 PM, Gianluca Cecchi wrote:
I replicated the problem with the same environment but attached to an iSCSI storage domain, so the Gluster part is not involved. As soon as I Run Once a VM on the host, the VM goes into paused state.
is there a bug for tracking this?

On Mon, Feb 10, 2014 at 10:22 AM, Gianluca Cecchi wrote:
On Sun, Feb 9, 2014 at 11:13 PM, Itamar Heim wrote:
is there a bug for tracking this?
Not yet. What should I file the bug against? spice, spice-protocol, or qemu-kvm itself?
Gianluca
Actually it seems more complicated, because with VNC as the display I don't get the L2 VM in paused state as with SPICE, but if I Run Once and select the CentOS 6.4 CD it starts and then blocks at this screen (see link below). So it has to do with nesting itself, which is not so viable, at least with this CPU (Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz) and this L0: https://drive.google.com/file/d/0BwoPbcrMv8mvVHJybUw2dGFlTjg/edit?usp=sharin... At least with the console set to VNC I can power off my L2 VM without problems and continue to work; with a SPICE console instead, as soon as I power off the paused L2 VM, the hypervisor (that is, the L1 VM) freezes completely with a black console and I need to power it off. My problem is similar to this one: https://bugzilla.redhat.com/show_bug.cgi?id=922075 but in my case L0 is ESXi 5.1 (799733), so I think I have little chance to work around it... Gianluca
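One more hedged note, not from this thread but from standard vSphere 5.1 behaviour: with virtual hardware version 9 the L1 VM only sees VT-x/EPT if hardware-assisted virtualization is exposed to it, either through the "Expose hardware assisted virtualization to the guest OS" CPU option in the vSphere Web Client or the equivalent entry in the VM's .vmx file:

vhv.enable = "TRUE"

The vmx/ept/vpid flags in the L1 /proc/cpuinfo earlier in the thread suggest this is already in place, so the remaining suspects are the nested VMX code in the 3.12 kernel and the ESXi VMM, as in the bug linked above.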
participants (2): Gianluca Cecchi, Itamar Heim