[Users] unable to start vm in 3.3 and f19 with gluster

Itamar Heim iheim at redhat.com
Wed Oct 2 19:16:44 UTC 2013


On 10/02/2013 12:57 AM, Gianluca Cecchi wrote:
> Today I was able to work again on this matter and it seems related to spice
> Every time I start the VM (which is defined with spice) it goes into

and this doesn't happen if the VM is defined with vnc?

> a paused state, and after a few minutes the node becomes unreachable: it
> is down from the GUI, there is no response from its console either, and I
> am forced to power it off.
>
> So, to see whether gluster is part of the problem in any way, I created
> a VM from the oVirt-generated XML and started it directly via virsh and
> libvirt on the node.
>
> See the attachments:
> C6.xml: the oVirt-generated XML file, taken from /var/run/libvirt/qemu/C6.xml
> myvm.xml: the VM derived from it, which I defined and started and which
> keeps running
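For readers without the attachments: a gluster-backed disk in a libvirt
domain XML generally looks along these lines (the volume path and target
below are illustrative placeholders, not the actual values from myvm.xml):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='gv01/path/to/image.raw'>
        <host name='ovnode01' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>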
>
> [root@ovnode02 ~]# virsh define myvm.xml
> Please enter your authentication name: vuser
> Please enter your password:
> Domain myvm defined from myvm.xml
>
> [root@ovnode02 ~]# virsh list --all
> Please enter your authentication name: vuser
> Please enter your password:
>   Id    Name                           State
> ----------------------------------------------------
>   -     myvm                           shut off
>
> [root@ovnode02 ~]# virsh start myvm
> Please enter your authentication name: vuser
> Please enter your password:
> Domain myvm started
>
> [root@ovnode02 ~]# virsh list --all
> Please enter your authentication name: vuser
> Please enter your password:
>   Id    Name                           State
> ----------------------------------------------------
>   2     myvm                           running
>
> In this case (myvm defined without the spice parts) the qemu command line is:
>
> [root@ovnode02 ~]# ps -ef|grep qemu
> qemu      1617   574  0 18:24 ?        00:00:00 [python] <defunct>
> qemu      6083     1  7 18:40 ?        00:00:14
> /usr/bin/qemu-system-x86_64 -machine accel=kvm -name myvm -S -machine
> pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp
> 1,sockets=1,cores=1,threads=1 -uuid
> dfadc661-6288-4f21-8faa-012daf29478f -nographic -no-user-config
> -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/myvm.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2013-10-01T16:40:55,driftfix=slew -no-shutdown -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
> file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,cache=none
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
> root      7137  7083  0 18:44 pts/2    00:00:00 grep --color=auto qemu
>
> If I undefine myvm and redefine it with the spice parts added, I get the
> same bad behaviour as in oVirt:
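The "spice parts" in an oVirt-generated domain are typically a graphics
element plus a qxl video device, roughly along these lines (attributes
here are illustrative, not copied from the actual XML):

    <graphics type='spice' autoport='yes'>
      <listen type='address' address='0'/>
    </graphics>
    <video>
      <model type='qxl' vram='65536' heads='1'/>
    </video>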
>
> [root@ovnode02 ~]# virsh list --all
> Please enter your authentication name: vuser
> Please enter your password:
>   Id    Name                           State
> ----------------------------------------------------
>   3     myvm                           paused
>
> I get further confirmation of spice's involvement if I run C6 once from
> oVirt using vnc instead of spice.
> In that case the VM goes to the running state, I can access its console
> via vnc, and the qemu command line is:
>
> qemu     10786     1  9 21:05 ?        00:00:14
> /usr/bin/qemu-system-x86_64 -machine accel=kvm -name C6 -S -machine
> pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 2048 -smp
> 1,sockets=1,cores=1,threads=1 -uuid
> 409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=19-3,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2013-10-01T19:05:47,driftfix=slew -no-shutdown -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
> file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/11111111-1111-1111-1111-111111111111/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> -drive file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
> -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3,bootindex=3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -device usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus
> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
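For comparison, when spice is selected the display-related part of the
command line would typically look something like

    -spice port=5900,tls-port=5901,addr=0 ... -vga qxl

instead of the "-vnc 0:0,password ... -vga cirrus" pair above; the exact
options depend on the display settings oVirt applies, so treat this only
as a rough sketch.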
>
>
> Any hints on how to debug this?
> Did anyone get the logs I sent earlier and spot anything in them?
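A reasonable first stop, assuming default Fedora 19 / oVirt paths, would
be the per-VM qemu log, the vdsm log and the gluster client logs on the
node, from around the time the VM pauses, for example:

    tail -n 200 /var/log/libvirt/qemu/C6.log
    tail -n 200 /var/log/vdsm/vdsm.log
    ls -lrt /var/log/glusterfs/

together with dmesg on the node once it starts becoming unresponsive.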
>
> Keep in mind that the environment is nested virtualization inside ESXi
> 5.1, so that could also play a part in the problem.
> Is anyone using spice over glusterfs on F19 without problems?
>
> On the nodes:
> [root@ovnode01 ~]# rpm -qa|grep vdsm
> vdsm-python-4.12.1-2.fc19.x86_64
> vdsm-python-cpopen-4.12.1-2.fc19.x86_64
> vdsm-gluster-4.12.1-2.fc19.noarch
> vdsm-cli-4.12.1-2.fc19.noarch
> vdsm-4.12.1-2.fc19.x86_64
> vdsm-xmlrpc-4.12.1-2.fc19.noarch
>
> On the engine:
> [root@ovirt ~]# rpm -qa|grep ovirt
> ovirt-engine-restapi-3.3.0-4.fc19.noarch
> ovirt-engine-sdk-python-3.3.0.6-1.fc19.noarch
> ovirt-log-collector-3.3.0-1.fc19.noarch
> ovirt-engine-lib-3.3.0-4.fc19.noarch
> ovirt-engine-3.3.0-4.fc19.noarch
> ovirt-release-fedora-8-1.noarch
> ovirt-iso-uploader-3.3.0-1.fc19.noarch
> ovirt-engine-cli-3.3.0.4-1.fc19.noarch
> ovirt-engine-setup-3.3.0-4.fc19.noarch
> ovirt-engine-dbscripts-3.3.0-4.fc19.noarch
> ovirt-host-deploy-java-1.1.1-1.fc19.noarch
> ovirt-image-uploader-3.3.0-1.fc19.noarch
> ovirt-host-deploy-1.1.1-1.fc19.noarch
> ovirt-engine-webadmin-portal-3.3.0-4.fc19.noarch
> ovirt-engine-backend-3.3.0-4.fc19.noarch
> ovirt-engine-userportal-3.3.0-4.fc19.noarch
> ovirt-engine-tools-3.3.0-4.fc19.noarch
>
> Thanks in advance,
> Gianluca
>



