I replicated the problem in the same environment, but attached to an iSCSI
storage domain, so the Gluster part is not involved.
As soon as I start a VM on the host, the VM goes into a paused state
and in the host's messages log I see:
Feb 5 19:22:45 localhost kernel: [16851.192234] cgroup: libvirtd
(1460) created nested cgroup for controller "memory" which has
incomplete hierarchy support. Nested cgroups may change behavior in
the future.
Feb 5 19:22:45 localhost kernel: [16851.192240] cgroup: "memory"
requires setting use_hierarchy to 1 on the root.
Feb 5 19:22:46 localhost kernel: [16851.228204] device vnet0 entered
promiscuous mode
Feb 5 19:22:46 localhost kernel: [16851.236198] ovirtmgmt: port
2(vnet0) entered forwarding state
Feb 5 19:22:46 localhost kernel: [16851.236208] ovirtmgmt: port
2(vnet0) entered forwarding state
Feb 5 19:22:46 localhost kernel: [16851.591058] qemu-system-x86:
sending ioctl 5326 to a partition!
Feb 5 19:22:46 localhost kernel: [16851.591074] qemu-system-x86:
sending ioctl 80200204 to a partition!
Feb 5 19:22:46 localhost vdsm vm.Vm WARNING
vmId=`7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0`::_readPauseCode
unsupported by libvirt vm
Feb 5 19:22:47 localhost avahi-daemon[449]: Registering new address
record for fe80
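As a side note, the use_hierarchy value that the cgroup warning complains
about can be inspected directly on the host. This is just a quick check,
assuming the memory controller is mounted at the usual
/sys/fs/cgroup/memory path:

  # show whether hierarchical accounting is enabled at the memory cgroup root
  cat /sys/fs/cgroup/memory/memory.use_hierarchy
  # the kernel warning above implies this currently reports 0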
And in qemu.log for the VM:
2014-02-05 18:22:46.280+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name c6i -S -machine
pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 1024 -smp
1,sockets=1,cores=1,threads=1 -uuid
7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-6,serial=421F4B4A-9A4C-A7C4-54E5-847BF1ADE1A5,uuid=7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6i.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-02-05T18:22:45,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-drive
file=/rhev/data-center/mnt/blockSD/f741671e-6480-4d7b-b357-8cf6e8d2c0f1/images/0912658d-1541-4d56-945b-112b0b074d29/67aaf7db-4d1c-42bd-a1b0-988d95c5d5d2,if=none,id=drive-virtio-disk0,format=qcow2,serial=0912658d-1541-4d56-945b-112b0b074d29,cache=none,werror=stop,rerror=stop,aio=native
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:17,bus=pci.0,addr=0x3,bootindex=3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
KVM: unknown exit, hardware reason 3
EAX=00000094 EBX=00006e44 ECX=0000000e EDX=00000511
ESI=00000002 EDI=00006df8 EBP=00006e08 ESP=00006dd4
EIP=3ffe1464 EFL=00000017 [----APC] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
CS =0008 00000000 ffffffff 00c09b00 DPL=0 CS32 [-RA]
SS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
DS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
FS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
GS =0010 00000000 ffffffff 00c09300 DPL=0 DS [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy
GDT= 000fd3a8 00000037
IDT= 000fd3e6 00000000
CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000
DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=eb be 83 c4 08 5b 5e 5f 5d c3 89 c1 ba 11 05 00 00 eb 01 ec <49>
83 f9 ff 75 f9 c3 57 56 53 89 c3 8b b0 84 00 00 00 39 ce 77 1e 89 d7
0f b7 80 8c 00 00
main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 1.710000 ms, bitrate
54169862 bps (51.660406 Mbps)
red_dispatcher_set_cursor_peer:
inputs_connect: inputs channel client create
red_channel_client_disconnect: rcc=0x7f3179129010
(channel=0x7f312021f360 type=2 id=0)
red_channel_client_disconnect: rcc=0x7f312026c5f0
(channel=0x7f312021f920 type=4 id=0)
red_channel_client_disconnect: rcc=0x7f318e6b6220
(channel=0x7f318e4150d0 type=3 id=0)
red_channel_client_disconnect: rcc=0x7f318e687340
(channel=0x7f318e409ef0 type=1 id=0)
main_channel_client_on_disconnect: rcc=0x7f318e687340
red_client_destroy: destroy client 0x7f318e68d110 with #channels=4
red_dispatcher_disconnect_cursor_peer:
red_dispatcher_disconnect_display_peer:
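Since vdsm reports "_readPauseCode unsupported by libvirt vm", the pause
reason can also be queried from libvirt directly. A minimal check, assuming
the domain name c6i from the qemu command line above:

  # ask libvirt why the domain is paused
  virsh domstate --reason c6i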
vdsm and supervdsm logs here:
https://drive.google.com/file/d/0BwoPbcrMv8mvSEdPdTZVaTZVc1E/edit?usp=sha...
https://drive.google.com/file/d/0BwoPbcrMv8mvMUxBaDgwSG1EY28/edit?usp=sha...
Gianluca