[Users] Problems with migration of a VM, not detected by the GUI

Gianluca Cecchi gianluca.cecchi at gmail.com
Tue Feb 4 13:21:09 UTC 2014


Hello,
I am upgrading from 3.3.3 RC to 3.3.3 final on a Fedora 19 based infrastructure:
two hosts and one engine, with a Gluster DC.

I have 3 VMs: CentOS 5.10, 6.5, Fedora 20

Main steps:

1) update engine with usual procedure
2) all VMs are on one node; I put the other one into maintenance,
update it and reboot
3) activate the updated node and migrate all VMs to it.

From the webadmin GUI point of view, everything seems OK.
The only "strange" thing is that the CentOS 6.5 VM has no IP shown, when
it usually does because of the ovirt-guest-agent installed on it.

So I try to connect to its console (configured as VNC),
but I get an error (the other two VMs are OK, and they use SPICE).
Also, I cannot ping or ssh into the VM, so there is indeed some problem.

I hadn't connected since 30th January, so I don't know if any problem
arose before today.

From the original host, in
/var/log/libvirt/qemu/c6s.log

I see:
2014-01-30 11:21:37.561+0000: shutting down
2014-01-30 11:22:14.595+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name c6s -S -machine
pc-1.0,accel=kvm,usb=off -cpu Opteron_G2 -m 1024 -smp
1,sockets=1,cores=1,threads=1 -uuid
4147e0d3-19a7-447b-9d88-2ff19365bec0 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-5,serial=34353439-3036-435A-4A38-303330393338,uuid=4147e0d3-19a7-447b-9d88-2ff19365bec0
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6s.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-01-23T11:42:26,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/a5e4f67b-50b5-4740-9990-39deb8812445/53408cb0-bcd4-40de-bc69-89d59b7b5bc2,if=none,id=drive-virtio-disk0,format=raw,serial=a5e4f67b-50b5-4740-9990-39deb8812445,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/c1477133-6b06-480d-a233-1dae08daf8b3/c2a82c64-9dee-42bb-acf2-65b8081f2edf,if=none,id=drive-scsi0-0-0-0,format=raw,serial=c1477133-6b06-480d-a233-1dae08daf8b3,cache=none,werror=stop,rerror=stop,aio=threads
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
-netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:8f:04:f8,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -device
usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
char device redirected to /dev/pts/0 (label charconsole0)
2014-02-04 12:48:01.855+0000: shutting down
qemu: terminating on signal 15 from pid 1021

From the updated host, to which the VM apparently migrated, I see:

2014-02-04 12:47:54.674+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name c6s -S -machine
pc-1.0,accel=kvm,usb=off -cpu Opteron_G2 -m 1024 -smp
1,sockets=1,cores=1,threads=1 -uuid
4147e0d3-19a7-447b-9d88-2ff19365bec0 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-5,serial=34353439-3036-435A-4A38-303330393338,uuid=4147e0d3-19a7-447b-9d88-2ff19365bec0
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6s.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-01-28T13:08:06,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/a5e4f67b-50b5-4740-9990-39deb8812445/53408cb0-bcd4-40de-bc69-89d59b7b5bc2,if=none,id=drive-virtio-disk0,format=raw,serial=a5e4f67b-50b5-4740-9990-39deb8812445,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/c1477133-6b06-480d-a233-1dae08daf8b3/c2a82c64-9dee-42bb-acf2-65b8081f2edf,if=none,id=drive-scsi0-0-0-0,format=raw,serial=c1477133-6b06-480d-a233-1dae08daf8b3,cache=none,werror=stop,rerror=stop,aio=threads
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:8f:04:f8,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -device
usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus -incoming
tcp:[::]:51152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
char device redirected to /dev/pts/1 (label charconsole0)
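The notable difference between the two command lines is the `-incoming tcp:[::]:51152` option on the destination side: a QEMU started this way waits, paused, for the migration stream, and only resumes once the transfer completes. As a minimal illustration (a hypothetical helper, not part of vdsm or libvirt), the two log excerpts can be told apart by that flag:

```python
import re

def qemu_side(cmdline):
    """Classify a qemu-kvm command line as migration source or destination.

    A destination qemu is started with '-incoming <uri>' and waits,
    paused, for the migration stream; a source qemu has no such flag.
    """
    match = re.search(r"-incoming\s+(\S+)", cmdline)
    if match:
        return ("destination", match.group(1))
    return ("source", None)

# Abbreviated excerpts from the two logs above:
src = "/usr/bin/qemu-kvm -name c6s -S -machine pc-1.0,accel=kvm,usb=off"
dst = src + " -incoming tcp:[::]:51152"

print(qemu_side(src))  # ('source', None)
print(qemu_side(dst))  # ('destination', 'tcp:[::]:51152')
```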

engine log
https://drive.google.com/file/d/0BwoPbcrMv8mvZWpqOHNqc0dnenc/edit?usp=sharing

source vdsm log:
https://drive.google.com/file/d/0BwoPbcrMv8mvYlluMDh1Y19jdEU/edit?usp=sharing

dest vdsm log:
https://drive.google.com/file/d/0BwoPbcrMv8mvT1JxcmdKWlloOFU/edit?usp=sharing


The first error I see in the source host log:
Thread-728830::ERROR::2014-02-04
13:42:59,735::BindingXMLRPC::984::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 970, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 206, in volumeStatus
    statusOption)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
    raise convert_to_error(kind, result)
KeyError: 'path'
Thread-728831::ERROR::2014-02-04
13:42:59,805::BindingXMLRPC::984::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 970, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 53, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 206, in volumeStatus
    statusOption)
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeStatus
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
in _callmethod
    raise convert_to_error(kind, result)
KeyError: 'path'
Thread-323::INFO::2014-02-04
13:43:05,765::logUtils::44::dispatcher::(wrapper) Run and protect:
getVolumeSize(sdUUID='d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291',
spUUID='eb679feb-4da2-4fd0-a185-abbe459ffa70',
imgUUID='a3d332c0-c302-4f28-9ed3-e2e83566343f',
volUUID='701eca86-df87-4b16-ac6d-e9f51e7ac171', options=None)
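The `KeyError: 'path'` suggests that vdsm's gluster volumeStatus handler looked up a key that gluster's status output did not provide (perhaps because a brick was unreachable while the host was being updated). A minimal sketch of that failure pattern, with hypothetical data and helper names rather than the actual vdsm code:

```python
def brick_paths(volume_status):
    """Extract brick paths from a gluster volume-status structure.

    Raises KeyError if any brick entry lacks the 'path' key -- the
    same failure mode as in the vdsm traceback above.
    """
    return [brick["path"] for brick in volume_status["bricks"]]

def brick_paths_safe(volume_status):
    """Defensive variant: skip entries that have no 'path' key."""
    return [b["path"] for b in volume_status.get("bricks", []) if "path" in b]

# Hypothetical status where one brick entry is missing 'path':
status = {"bricks": [{"path": "/bricks/b1", "online": True},
                     {"online": False}]}  # degraded brick, no 'path'

try:
    brick_paths(status)
except KeyError as err:
    print("KeyError:", err)

print(brick_paths_safe(status))  # ['/bricks/b1']
```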

Apart from the problem itself, another issue, in my opinion, is that the
engine doesn't know about it at all...

For this VM, in its event tab I can see only:
2014-Feb-04, 13:57
user admin at internal initiated console session for VM c6s
1b78630c
oVirt
2014-Feb-04, 13:51
user admin at internal initiated console session for VM c6s
1d77f16a
oVirt
2014-Feb-04, 13:48
Migration completed (VM: c6s, Source: f18ovn03, Destination: f18ovn01,
Duration: 8 sec).
17c547cc
oVirt
2014-Feb-04, 13:47
Migration started (VM: c6s, Source: f18ovn03, Destination: f18ovn01,
User: admin at internal).
17c547cc
oVirt
2014-Jan-30, 12:30
user admin at internal initiated console session for VM c6s
5536edb8
oVirt
2014-Jan-30, 12:23
VM c6s started on Host f18ovn03
45209312
oVirt
2014-Jan-30, 12:22
user admin at internal initiated console session for VM c6s
19c766c8
oVirt
2014-Jan-30, 12:22
user admin at internal initiated console session for VM c6s
79815897
oVirt
2014-Jan-30, 12:22
VM c6s was started by admin at internal (Host: f18ovn03).
45209312
oVirt
2014-Jan-30, 12:22
VM c6s configuration was updated by admin at internal.
76cbc53
oVirt
2014-Jan-30, 12:21
VM c6s is down. Exit message: User shut down
oVirt
2014-Jan-30, 12:20
VM shutdown initiated by admin at internal on VM c6s (Host: f18ovn03).
213c3a55
oVirt

Gianluca


