[ovirt-users] Urgent: not comply with the cluster Default emulated machines
Roy Golan
rgolan at redhat.com
Sun Aug 10 09:26:08 EDT 2014
On 08/10/2014 03:56 PM, Itamar Heim wrote:
> On 08/07/2014 09:49 AM, Neil wrote:
>> Hi Roy,
>>
>> Thank you very much for replying so quickly. I think I've managed to
>> work out what is causing it.
>>
>> During the updates it looks like one of my hosts ended up with
>> qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64, which is the one that was
>> working, and the other two hosts ended up with
>> qemu-kvm-rhev-0.12.1.2-2.355.el6.3.x86_64
>>
>> I've since removed 2.355 and re-installed node03 with 2.415 and it's
>> now operational again.
>>
>> Thank you very much for your assistance.
>>
>
> 3.4 assumed el6_5; I recommend updating all packages to it, not just
> qemu-kvm.
>
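To make that concrete, here's a rough sketch of what I'd run on each host
(in maintenance mode) - assuming you want the stock CentOS qemu-kvm that your
working host has, rather than the stray qemu-kvm-rhev build, and that your
repos carry the el6_5 packages:

# see which build each host actually has
rpm -qa | grep qemu-kvm

# drop the stray build and put the stock one back
# (watch yum's dependency resolution - don't let it take vdsm along)
yum remove qemu-kvm-rhev qemu-kvm-rhev-tools
yum install qemu-kvm qemu-kvm-tools

# and bring the rest of the host (vdsm, libvirt, ...) up to el6_5 as well
yum update
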
I might be stating the obvious, but anyhow - vdsm's rpm version is
distinct from the oVirt version (unlike ovirt-engine's), and the engine
versions a host supports can be checked with:
[root ~]# vdsClient -s 0 getVdsCaps | grep -i supportedEngine
supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5']
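
The same call also lists the emulated machines the engine compares against
the cluster default; on a healthy el6_5 host I'd expect something like this
(illustrative output, not taken from your hosts):

vdsClient -s 0 getVdsCaps | grep -i emulatedMachines
emulatedMachines = ['rhel6.5.0', 'rhel6.4.0', 'pc']

If a host only reports rhel6.4.0 and pc there, either its qemu-kvm is still
the old build or vdsm is still serving a stale view of it.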
>> Greatly appreciated.
>>
>> Kind regards.
>>
>> Neil Wilson.
>>
>>
>>
>> On Thu, Aug 7, 2014 at 8:27 AM, Roy Golan <rgolan at redhat.com> wrote:
>>> On 08/07/2014 09:10 AM, Neil wrote:
>>>
>>> Let's see what qemu reports as its supported emulated machines.
>>>
>>> on your non-operational host:
>>>
>>> /usr/libexec/qemu-kvm -M ?
>>>
>>> if rhel6.4 is there, then vdsm is probably still caching old values
>>>
>>> verify with
>>>
>>> vdsClient -s 0 getVdsCaps | grep rhel
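
(If qemu itself already lists the new machine types but getVdsCaps still
shows only rhel6.4.0, restarting vdsm on that host - service vdsmd restart -
should make it re-read the capabilities; that's my guess at where the stale
values would be coming from.)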
>>>
>>>
>>>
>>>
>>>> Hi guys,
>>>>
>>>> Please could someone assist urgently - 2 of my 3 hosts are
>>>> non-operational and some VMs won't start because I don't have the
>>>> resources to run them all on one host.
>>>>
>>>> I upgraded to 3.4 from 3.3 yesterday and everything seemed fine, then
>>>> woke up this morning to this problem...
>>>>
>>>> host node03 does not comply with the cluster Default emulated
>>>> machines. The Hosts emulated machines are rhel6.4.0,pc
>>>>
>>>>
>>>> Hosts: CentOS release 6.5 (Final)
>>>> vdsm-python-4.14.11.2-0.el6.x86_64
>>>> vdsm-cli-4.14.11.2-0.el6.noarch
>>>> vdsm-python-zombiereaper-4.14.11.2-0.el6.noarch
>>>> vdsm-xmlrpc-4.14.11.2-0.el6.noarch
>>>> vdsm-4.14.11.2-0.el6.x86_64
>>>> qemu-kvm-rhev-0.12.1.2-2.355.el6.3.x86_64
>>>> qemu-kvm-tools-0.12.1.2-2.415.el6_5.10.x86_64
>>>> qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.8.x86_64
>>>>
>>>> Engine:
>>>> ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
>>>> ovirt-release34-1.0.2-1.noarch
>>>> ovirt-engine-dbscripts-3.4.3-1.el6.noarch
>>>> ovirt-release-el6-9-1.noarch
>>>> ovirt-iso-uploader-3.4.0-1.el6.noarch
>>>> ovirt-engine-lib-3.4.3-1.el6.noarch
>>>> ovirt-engine-backend-3.4.3-1.el6.noarch
>>>> ovirt-engine-websocket-proxy-3.4.3-1.el6.noarch
>>>> ovirt-engine-userportal-3.4.3-1.el6.noarch
>>>> ovirt-engine-setup-base-3.4.3-1.el6.noarch
>>>> ovirt-host-deploy-java-1.2.2-1.el6.noarch
>>>> ovirt-engine-cli-3.3.0.6-1.el6.noarch
>>>> ovirt-engine-setup-3.4.3-1.el6.noarch
>>>> ovirt-engine-restapi-3.4.3-1.el6.noarch
>>>> ovirt-engine-setup-plugin-ovirt-engine-3.4.3-1.el6.noarch
>>>> ovirt-engine-webadmin-portal-3.4.3-1.el6.noarch
>>>> ovirt-image-uploader-3.4.0-1.el6.noarch
>>>> ovirt-engine-tools-3.4.3-1.el6.noarch
>>>> ovirt-engine-setup-plugin-websocket-proxy-3.4.3-1.el6.noarch
>>>> ovirt-host-deploy-1.2.2-1.el6.noarch
>>>> ovirt-log-collector-3.4.1-1.el6.noarch
>>>> ovirt-engine-3.4.3-1.el6.noarch
>>>> ovirt-engine-setup-plugin-ovirt-engine-common-3.4.3-1.el6.noarch
>>>>
>>>> I set my cluster compatibility to 3.4 after the upgrade as well.
>>>>
>>>> Thank you!
>>>>
>>>> Regards.
>>>>
>>>> Neil Wilson.