
Thanks for the reply Gianluca. Sorry, just to confirm: restarting vdsmd won't impact my VMs, so it can be done without causing any problems to the running VMs? Thank you. Regards. Neil Wilson. On Thu, May 23, 2013 at 5:59 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
On 23 May 2013 at 17:32, "Neil" <nwilson123@gmail.com> wrote:
Sorry, in addition to the above, below are more details...
Engine: 10.0.2.31, CentOS 6.4
ovirt-host-deploy-1.1.0-0.0.master.el6.noarch
ovirt-engine-sdk-3.2.0.9-1.el6.noarch
ovirt-engine-webadmin-portal-3.2.1-1.41.el6.noarch
ovirt-host-deploy-java-1.1.0-0.0.master.el6.noarch
ovirt-engine-dbscripts-3.2.1-1.41.el6.noarch
ovirt-engine-setup-3.2.1-1.41.el6.noarch
ovirt-engine-cli-3.2.0.10-1.el6.noarch
ovirt-engine-genericapi-3.2.1-1.41.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-restapi-3.2.1-1.41.el6.noarch
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-backend-3.2.1-1.41.el6.noarch
ovirt-engine-tools-3.2.1-1.41.el6.noarch
ovirt-engine-userportal-3.2.1-1.41.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-3.2.1-1.41.el6.noarch
ovirt-log-collector-3.1.0-16.el6.noarch
Host: 10.0.2.2, CentOS 6.4
I keep seeing the following error over and over in /var/log/messages on the newly upgraded host (10.0.2.2)...
May 23 17:16:09 node02 vdsm vds ERROR unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 301, in vmGetStats
    return vm.getStats()
  File "/usr/share/vdsm/API.py", line 340, in getStats
    stats = v.getStats().copy()
  File "/usr/share/vdsm/libvirtvm.py", line 2653, in getStats
    stats = vm.Vm.getStats(self)
  File "/usr/share/vdsm/vm.py", line 1177, in getStats
    stats['balloonInfo'] = self._getBalloonInfo()
  File "/usr/share/vdsm/libvirtvm.py", line 2660, in _getBalloonInfo
    dev['specParams']['model'] != 'none':
KeyError: 'specParams'
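For what it's worth, the traceback shows _getBalloonInfo indexing dev['specParams'] directly, which raises KeyError when a device dict has no 'specParams' key at all (plausibly a device record created before the upgrade). A minimal sketch of the kind of defensive lookup that avoids this, using hypothetical device dicts rather than the actual vdsm code:

```python
def balloon_model(dev):
    # Treat a missing 'specParams' (or a missing 'model' inside it)
    # as 'none', i.e. the "no balloon configured" case, instead of
    # letting the bare dev['specParams'] lookup raise KeyError.
    return dev.get('specParams', {}).get('model', 'none')

# Hypothetical device dicts for illustration:
old_dev = {'device': 'memballoon'}                      # no specParams key
new_dev = {'device': 'memballoon',
           'specParams': {'model': 'virtio'}}

print(balloon_model(old_dev))  # -> none
print(balloon_model(new_dev))  # -> virtio
```

This is only a sketch of the defensive pattern, not a patch for vdsm itself; the real fix would belong in libvirtvm.py's _getBalloonInfo.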
Attached are my engine.log and vdsm.log; as you can see, something definitely doesn't look right.
During my upgrade I also upgraded my Celerity 8Gb FC HBA drivers, due to the Linux kernel being upgraded when I applied the CentOS 6.4 upgrade.
On a side note, I rebooted one of the VMs that was showing as down, and the VM went off. I then had to click "Run", after which the VM started; it appears to be fine and is now showing as running in oVirt too.
Any help is greatly appreciated.
Thank you.
Regards.
Neil Wilson.
On Thu, May 23, 2013 at 4:46 PM, Neil <nwilson123@gmail.com> wrote:
Hi guys,
I'm doing an upgrade from oVirt 3.1 to 3.2.1. The engine and host upgrades have gone through without too many problems, but I've encountered an error while trying to migrate some of the VMs so that I can upgrade the host they reside on.
Some of the VMs migrated perfectly from the old host to the new one, but when trying to move the remaining VMs I received an error in my console...
2013-May-23, 16:23 Migration failed due to Error: novm (VM: dhcp, Source: node03.blablacollege.com, Destination: node02.blablacollege.com).
2013-May-23, 16:23 Migration started (VM: dhcp, Source: node03.blablacollege.com, Destination: node02.blablacollege.com, User: admin@internal).
The VM actually seems to have migrated, as it's now running on the new host as a kvm process, and the VM is still working and responding as usual.
If I remember correctly, there was a similar situation where restarting vdsmd on the host solved the problem. In general, this operation doesn't impact running VMs.