[Users] Migration failed due to Error: novm

Hi guys,

I'm doing an upgrade to 3.2.1 from oVirt 3.1, and the engine and host upgrades have gone through without too many problems, but I've encountered an error while trying to migrate some of the VMs so that I can upgrade the host they reside on. Some of the VMs migrated perfectly from the old host to the new one, but when trying to move the remaining VMs I received an error in my console...

2013-May-23, 16:23 Migration failed due to Error: novm (VM: dhcp, Source: node03.blablacollege.com, Destination: node02.blablacollege.com).
2013-May-23, 16:23 Migration started (VM: dhcp, Source: node03.blablacollege.com, Destination: node02.blablacollege.com, User: admin@internal).

The VM actually seems to have migrated, as it's now running on the new host as a kvm process, and the VM is still working and responding as usual. However, my engine reports the VM as down, and the only option available is to click "Run", which I tried earlier and which didn't help.

I've now got 4 VMs showing as down even though they are operating perfectly. Any thoughts on what to do? I'm going to attempt a restart of one of the VMs after hours to see if that sorts it out, but this is a rather strange issue.

Thanks.
Regards.
Neil Wilson.
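For anyone comparing the engine's view with what the hosts are actually running: the engine's REST API reports the state it holds for each VM, which is the state the webadmin shows. A minimal sketch against the engine address given later in the thread (10.0.2.31); the credentials are placeholders, and the requests library is just one convenient HTTP client:

    import requests
    import xml.etree.ElementTree as ET

    # Ask the oVirt 3.x REST API for all VMs and print the state the engine
    # holds for each one. Address and credentials below are placeholders.
    resp = requests.get('https://10.0.2.31/api/vms',
                        auth=('admin@internal', 'password'),
                        verify=False)
    resp.raise_for_status()

    for vm in ET.fromstring(resp.content).findall('vm'):
        print('%-20s %s' % (vm.findtext('name'), vm.findtext('status/state')))

A VM that this call reports as "down" while a qemu-kvm process for it is clearly alive on a host is exactly the mismatch described above.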

Sorry, in addition to the above, below are more details...

Engine 10.0.2.31, CentOS 6.4

ovirt-host-deploy-1.1.0-0.0.master.el6.noarch
ovirt-engine-sdk-3.2.0.9-1.el6.noarch
ovirt-engine-webadmin-portal-3.2.1-1.41.el6.noarch
ovirt-host-deploy-java-1.1.0-0.0.master.el6.noarch
ovirt-engine-dbscripts-3.2.1-1.41.el6.noarch
ovirt-engine-setup-3.2.1-1.41.el6.noarch
ovirt-engine-cli-3.2.0.10-1.el6.noarch
ovirt-engine-genericapi-3.2.1-1.41.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-restapi-3.2.1-1.41.el6.noarch
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-backend-3.2.1-1.41.el6.noarch
ovirt-engine-tools-3.2.1-1.41.el6.noarch
ovirt-engine-userportal-3.2.1-1.41.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-3.2.1-1.41.el6.noarch
ovirt-log-collector-3.1.0-16.el6.noarch

Host 10.0.2.2, CentOS 6.4

I keep seeing the following error over and over in /var/log/messages on the newly upgraded host (10.0.2.2)...

May 23 17:16:09 node02 vdsm vds ERROR unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 301, in vmGetStats
    return vm.getStats()
  File "/usr/share/vdsm/API.py", line 340, in getStats
    stats = v.getStats().copy()
  File "/usr/share/vdsm/libvirtvm.py", line 2653, in getStats
    stats = vm.Vm.getStats(self)
  File "/usr/share/vdsm/vm.py", line 1177, in getStats
    stats['balloonInfo'] = self._getBalloonInfo()
  File "/usr/share/vdsm/libvirtvm.py", line 2660, in _getBalloonInfo
    dev['specParams']['model'] != 'none':
KeyError: 'specParams'

Attached are my engine.log and vdsm.log; as you can see, something definitely doesn't look right.

During my upgrade I also upgraded my Celerity 8Gb FC HBA drivers, because the kernel was upgraded when I applied the CentOS 6.4 update.

On a side note, I rebooted one of the VMs that were showing as down, and the VM went off. I then had to click "Run", and the VM started, appears to be fine, and is now showing as running in oVirt too.

Any help is greatly appreciated.
Thank you.
Regards.
Neil Wilson.
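For anyone who hits the same traceback: the KeyError says that the balloon device dict vdsm holds for this (migrated) VM has no 'specParams' key, so the chained lookup at line 2660 of libvirtvm.py fails while collecting stats. A rough sketch of the failing pattern and a defensive variant; the device dict below is a stand-in inferred from the traceback, not the real vdsm structure:

    # Hypothetical balloon device entry as the traceback implies it: no 'specParams'.
    dev = {'type': 'balloon', 'device': 'memballoon', 'alias': 'balloon0'}

    # What libvirtvm.py line 2660 effectively does, and why it raises:
    #     dev['specParams']['model'] != 'none'   ->  KeyError: 'specParams'

    # Defensive variant: treat a missing 'specParams' as "no balloon model set".
    model = dev.get('specParams', {}).get('model')
    if model is not None and model != 'none':
        print('would collect balloon stats for model %s' % model)
    else:
        print('no balloon model recorded for this device; skipping balloon stats')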

On Thu, May 23, 2013 at 5:59 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
If I remember correctly, there was a similar situation where restarting vdsmd on the host solved the problem. In general this operation does not impact running VMs.
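One way to satisfy yourself that a vdsmd restart leaves the guests alone is to compare the qemu-kvm process IDs before and after the restart. A rough sketch, assuming an EL6 host where vdsm is driven by the 'vdsmd' init script and pgrep is available (run as root):

    import subprocess

    def qemu_pids():
        # PIDs of every qemu-kvm process currently running on this host.
        p = subprocess.Popen(['pgrep', '-f', 'qemu-kvm'], stdout=subprocess.PIPE)
        out, _ = p.communicate()
        return sorted(out.split())

    before = qemu_pids()
    subprocess.call(['service', 'vdsmd', 'restart'])  # EL6 init script for vdsm
    after = qemu_pids()

    # Identical PID lists mean the restart did not touch the running guests.
    print('guests untouched: %s' % (before == after))

Popen is used instead of subprocess.check_output so the sketch also runs on the Python 2.6 that ships with EL6.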

Thanks for the reply Gianluca,

Sorry, just to confirm: restarting vdsmd won't impact my VMs, so it can be done without causing any problems to the running VMs?

Thank you.
Regards.
Neil Wilson.

On Fri, May 24, 2013 at 8:08 AM, Neil wrote:
Thanks for the reply Gianluca,
Sorry, just to confirm: restarting vdsmd won't impact my VMs, so it can be done without causing any problems to the running VMs?
Thank you.
Regards.
Yes, I have read that here many times, and I have tried it myself in the past.

On 24-5-2013 8:40, Gianluca Cecchi wrote:
Yes, I have read that here many times, and I have tried it myself in the past.
I have had hosts reboot when vdsm stayed down for too long, because fencing kicks in and reboots the host.

Joop

Hi guys,

Thanks for the info. I've restarted vdsmd on both of the problematic hosts, but it doesn't seem to have resolved the VMs showing as off, and I'm still getting the same errors in the logs.

Of more concern, I've just noticed that I appear to have my Zimbra VM running on two separate hosts...

On host 10.0.2.22:

15407 ? Sl 223:35 /usr/libexec/qemu-kvm -name zimbra -S -M rhel6.4.0 -cpu Westmere -enable-kvm -m 8192 -smp 4,sockets=1,cores=4,threads=1 -uuid 179c293b-e6a3-4ec6-a54c-2f92f875bc5e -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=6-4.el6.centos.10,serial=4C4C4544-0038-5310-8050-C4C04F34354A,uuid=179c293b-e6a3-4ec6-a54c-2f92f875bc5e -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-05-23T15:07:39,driftfix=slew -no-shutdown -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/446921d9-cbd1-42b1-919f-88d6ae310fd9/2ff8ba31-7397-41e7-8a60-7ef9eec23d1a,if=none,id=drive-virtio-disk0,format=raw,serial=446921d9-cbd1-42b1-919f-88d6ae310fd9,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:7a:01,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/zimbra.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/zimbra.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 0:10,password -k en-us -vga cirrus

On host 10.0.2.21:

17594 ? Sl 449:39 /usr/libexec/qemu-kvm -name zimbra -S -M rhel6.2.0 -cpu Westmere -enable-kvm -m 8192 -smp 4,sockets=1,cores=4,threads=1 -uuid 179c293b-e6a3-4ec6-a54c-2f92f875bc5e -smbios type=1,manufacturer=Red Hat,product=RHEV Hypervisor,version=6-2.el6.centos.7,serial=4C4C4544-0038-5310-8050-C4C04F34354A_BC:30:5B:E4:19:C2,uuid=179c293b-e6a3-4ec6-a54c-2f92f875bc5e -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-05-23T10:19:47,driftfix=slew -no-shutdown -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/446921d9-cbd1-42b1-919f-88d6ae310fd9/2ff8ba31-7397-41e7-8a60-7ef9eec23d1a,if=none,id=drive-virtio-disk0,format=raw,serial=446921d9-cbd1-42b1-919f-88d6ae310fd9,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:a8:7a:01,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/zimbra.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -device usb-tablet,id=input0 -vnc 0:2,password -k en-us -vga cirrus -incoming tcp:[::]:49153

This is very concerning; the Zimbra server does appear to be okay, though, so I'm not sure which VM is actually working...

Thanks.
Regards.
Neil Wilson.
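For anyone untangling the same picture: the copy on 10.0.2.21 was started with "-incoming tcp:[::]:49153", which is how qemu is launched on the destination side of a live migration, so it suggests one of the two processes is the receiving side of a migration that never completed its handover. To see which domains each host's libvirtd actually claims, the libvirt Python bindings (which vdsm itself uses, so they are already on the host) can be queried directly; a small sketch to run on each node:

    import libvirt

    # List the domains the local libvirtd considers running, with their UUIDs.
    # The same UUID reported as running on two hosts matches the duplicate
    # zimbra processes shown above.
    conn = libvirt.open('qemu:///system')
    try:
        for dom_id in conn.listDomainsID():
            dom = conn.lookupByID(dom_id)
            print('%-20s %s' % (dom.name(), dom.UUIDString()))
    finally:
        conn.close()

Comparing the output from the two hosts shows which UUIDs are claimed twice, which is worth knowing before powering either copy off.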
participants (3)
- Gianluca Cecchi
- Neil
- noc