On 05.03.2013 10:54, Dan Kenigsberg wrote:
> On Tue, Mar 05, 2013 at 10:21:16AM +0100, Patrick Hurrelmann wrote:
>> On 04.03.2013 21:52, Itamar Heim wrote:
>>> On 04/03/2013 19:03, Patrick Hurrelmann wrote:
>>>> Hi list,
>>>>
>>>> I tested the upcoming CentOS 6.4 release with my lab installation of
>>>> oVirt 3.1, and the two do not play well together.
>>>>
>>>> Background: freshly installed CentOS 6.3 host in a Nehalem CPU-type
>>>> Cluster with 2 other hosts. Storage is iSCSI. Datacenter and Cluster are
>>>> both version 3.1. oVirt 3.1 was installed via Dreyou's repo.
>>>>
>>>> In CentOS 6.3 all is fine and the following rpms are installed:
>>>>
>>>> libvirt.x86_64 0.9.10-21.el6_3.8
>>>> libvirt-client.x86_64 0.9.10-21.el6_3.8
>>>> libvirt-lock-sanlock.x86_64 0.9.10-21.el6_3.8
>>>> libvirt-python.x86_64 0.9.10-21.el6_3.8
>>>> vdsm.x86_64 4.10.0-0.46.15.el6
>>>> vdsm-cli.noarch 4.10.0-0.46.15.el6
>>>> vdsm-python.x86_64 4.10.0-0.46.15.el6
>>>> vdsm-xmlrpc.noarch 4.10.0-0.46.15.el6
>>>> qemu-kvm.x86_64 2:0.12.1.2-2.295.el6_3.10
>>>>
>>>>
>>>> uname -a
>>>> Linux vh-test1.mydomain.com 2.6.32-279.22.1.el6.x86_64 #1 SMP Wed Feb 6
>>>> 03:10:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>>>>
>>>> virsh cpu capabilities on 6.3:
>>>> <cpu>
>>>> <arch>x86_64</arch>
>>>> <model>Nehalem</model>
>>>> <vendor>Intel</vendor>
>>>> <topology sockets='1' cores='4' threads='1'/>
>>>> <feature name='rdtscp'/>
>>>> <feature name='pdcm'/>
>>>> <feature name='xtpr'/>
>>>> <feature name='tm2'/>
>>>> <feature name='est'/>
>>>> <feature name='smx'/>
>>>> <feature name='vmx'/>
>>>> <feature name='ds_cpl'/>
>>>> <feature name='monitor'/>
>>>> <feature name='dtes64'/>
>>>> <feature name='pbe'/>
>>>> <feature name='tm'/>
>>>> <feature name='ht'/>
>>>> <feature name='ss'/>
>>>> <feature name='acpi'/>
>>>> <feature name='ds'/>
>>>> <feature name='vme'/>
>>>> </cpu>
>>>>
>>>> and corresponding cpu features from vdsClient:
>>>>
>>>> cpuCores = 4
>>>> cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,
>>>> cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
>>>> tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
>>>> pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
>>>> dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
>>>> pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
>>>> flexpriority,ept,vpid,model_Conroe,model_Penryn,
>>>> model_Nehalem
>>>> cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
>>>> cpuSockets = 1
>>>> cpuSpeed = 2394.132
>>>>
>>>>
>>>> So the system was updated to 6.4 using the continuous release (CR) repo.
>>>>
>>>> Installed rpms after update to 6.4 (6.3 + CR):
>>>>
>>>> libvirt.x86_64 0.10.2-18.el6
>>>> libvirt-client.x86_64 0.10.2-18.el6
>>>> libvirt-lock-sanlock.x86_64 0.10.2-18.el6
>>>> libvirt-python.x86_64 0.10.2-18.el6
>>>> vdsm.x86_64 4.10.0-0.46.15.el6
>>>> vdsm-cli.noarch 4.10.0-0.46.15.el6
>>>> vdsm-python.x86_64 4.10.0-0.46.15.el6
>>>> vdsm-xmlrpc.noarch 4.10.0-0.46.15.el6
>>>> qemu-kvm.x86_64 2:0.12.1.2-2.355.el6_4_4.1
>>>>
>>>>
>>>> uname -a
>>>> Linux vh-test1.mydomain.com 2.6.32-358.0.1.el6.x86_64 #1 SMP Wed Feb 27
>>>> 06:06:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
>>>>
>>>> virsh capabilities on 6.4:
>>>> <cpu>
>>>> <arch>x86_64</arch>
>>>> <model>Nehalem</model>
>>>> <vendor>Intel</vendor>
>>>> <topology sockets='1' cores='4' threads='1'/>
>>>> <feature name='rdtscp'/>
>>>> <feature name='pdcm'/>
>>>> <feature name='xtpr'/>
>>>> <feature name='tm2'/>
>>>> <feature name='est'/>
>>>> <feature name='smx'/>
>>>> <feature name='vmx'/>
>>>> <feature name='ds_cpl'/>
>>>> <feature name='monitor'/>
>>>> <feature name='dtes64'/>
>>>> <feature name='pbe'/>
>>>> <feature name='tm'/>
>>>> <feature name='ht'/>
>>>> <feature name='ss'/>
>>>> <feature name='acpi'/>
>>>> <feature name='ds'/>
>>>> <feature name='vme'/>
>>>> </cpu>
>>>>
>>>> and corresponding cpu features from vdsClient:
>>>>
>>>> cpuCores = 4
>>>> cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,
>>>> cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
>>>> tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
>>>> pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
>>>> dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
>>>> pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
>>>> flexpriority,ept,vpid,model_coreduo,model_Conroe
>>>> cpuModel = Intel(R) Xeon(R) CPU X3430 @ 2.40GHz
>>>> cpuSockets = 1
>>>> cpuSpeed = 2394.098
>>>>
>>>> Full outputs of virsh capabilities and vdsCaps are attached. The only
>>>> difference I can see is that 6.4 exposes one additional cpu flag (sep),
>>>> and this seems to break vdsm's cpu recognition.
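(As an aside for anyone reproducing this: the two cpuFlags lines above are
easy to diff programmatically. A minimal python sketch, assuming each
comma-separated cpuFlags value has been saved to a plain text file; the file
names are only placeholders:

    def load_flags(path):
        # one comma-separated cpuFlags line; strip whitespace and newlines
        with open(path) as f:
            return set(f.read().replace('\n', '').replace(' ', '').split(','))

    flags_63 = load_flags('cpuflags-6.3.txt')   # output captured on CentOS 6.3
    flags_64 = load_flags('cpuflags-6.4.txt')   # output captured on CentOS 6.4
    print('only on 6.4:', sorted(flags_64 - flags_63))
    print('only on 6.3:', sorted(flags_63 - flags_64))

On the data above, the only new hardware flag is 'sep'; the remaining
differences are the model_* entries that vdsm derives itself
(Conroe/Penryn/Nehalem on 6.3 versus coreduo/Conroe on 6.4), which lines up
with the recognition falling back to Conroe.)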
>>>>
>>>> Does anyone have hints on how to resolve or debug this further? What more
>>>> information can I provide to help?
>>>>
>>>> Best regards
>>>> Patrick
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>
>>> seems like a vdsm issue - can you check if you have this patch (not sure
>>> its related):
>>>
>>> commit 558994f8ffe030acd1b851dfd074f3417681337b
>>> Author: Mark Wu <wudxw@linux.vnet.ibm.com>
>>> Date: Tue Oct 9 11:18:10 2012 +0800
>>>
>>> Fix a bug in the naming convertion of cpu feature 'sse4_x'
>>>
>>> The list slice opertion missed the last character. It caused
>>> the feature name couldn't get chance to be converted into
>>> libvirt's naming.
>>>
>>> Change-Id: Ia241b09c96fa16441ba9421f61a2f9a417f0d978
>>> Signed-off-by: Mark Wu <wudxw@linux.vnet.ibm.com>
>>> Reviewed-on: http://gerrit.ovirt.org/8413
>>> Reviewed-by: Dan Kenigsberg <danken@redhat.com>
>>> Reviewed-by: Michal Skrivanek <michal.skrivanek@redhat.com>
>>>
>>> diff --git a/vdsm/libvirtvm.py b/vdsm/libvirtvm.py
>>> index 64a3b6b..bd43b56 100644
>>> --- a/vdsm/libvirtvm.py
>>> +++ b/vdsm/libvirtvm.py
>>> @@ -735,7 +735,7 @@ class _DomXML:
>>>
>>> for feature in features[1:]:
>>> # convert Linux name of feature to libvirt
>>> - if feature[1:5] == 'sse4_':
>>> + if feature[1:6] == 'sse4_':
>>> feature = feature[0] + 'sse4.' + feature[6:]
>>>
>>> f = self.doc.createElement('feature')
>>>
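To make the quoted fix concrete: 'sse4_' is five characters long, so the old
slice feature[1:5] can only ever yield 'sse4' and the rename to libvirt's
'sse4.x' spelling never happens. A tiny standalone illustration (the '+'
policy prefix mirrors what vdsm's feature strings carry):

    feature = '+sse4_1'            # example feature string with its '+' prefix
    print(feature[1:5])            # 'sse4'  -> never equals 'sse4_', no rename
    print(feature[1:6])            # 'sse4_' -> matches after the patch
    if feature[1:6] == 'sse4_':
        feature = feature[0] + 'sse4.' + feature[6:]
    print(feature)                 # '+sse4.1', the spelling libvirt expects

Note that this only touches the sse4_* names, so on its own it would not
obviously explain Nehalem being dropped.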
>>
>> My version of vdsm as stated by Dreyou:
>> v 4.10.0-0.46 (.15), built from
>> b59c8430b2a511bcea3bc1a954eee4ca1c0f4861 (branch ovirt-3.1)
>>
>> I can't see that Ia241b09c96fa16441ba9421f61a2f9a417f0d978 was ever merged
>> into the 3.1 branch, was it?
>>
>> I applied that patch locally and restarted vdsmd, but it does not
>> change anything; the supported cpu is still reported as Conroe instead of
>> Nehalem. Or is there more to do than patching libvirtvm.py?
>
> What is libvirt's opinion about your cpu compatibility?
>
> virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
>
> If you do not get "Host CPU is a superset of CPU described in bla", then
> the problem is within libvirt.
>
> Dan.

Hi Dan,

virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
Host CPU is a superset of CPU described in /dev/fd/63
So libvirt obviously is fine. Something different would have surprised
me, as virsh capabilities seemed correct anyway.
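For completeness, the same check can be done through the libvirt python
bindings instead of virsh; a minimal sketch with the Nehalem/Intel XML
hard-coded exactly as in your one-liner:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    cpu_xml = ("<cpu match='minimum'>"
               "<model>Nehalem</model><vendor>Intel</vendor></cpu>")
    result = conn.compareCPU(cpu_xml, 0)
    names = {libvirt.VIR_CPU_COMPARE_INCOMPATIBLE: 'incompatible',
             libvirt.VIR_CPU_COMPARE_IDENTICAL: 'identical',
             libvirt.VIR_CPU_COMPARE_SUPERSET: 'superset'}
    print(names.get(result, result))   # prints 'superset' here as well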
So maybe, just maybe, libvirt has changed its cpu_map, a map that
ovirt-3.1 had a bug reading.
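If the cpu_map is the suspect, it can be inspected directly; a rough sketch
that prints what libvirt's map defines for Nehalem (the path is the usual
EL6 location, adjust if yours differs):

    import xml.etree.ElementTree as ET

    tree = ET.parse('/usr/share/libvirt/cpu_map.xml')  # libvirt's model table
    for arch in tree.getroot().findall('arch'):        # top-level models only
        for model in arch.findall('model'):
            if model.get('name') != 'Nehalem':
                continue
            parent = model.find('model')               # model it builds on
            feats = sorted(f.get('name') for f in model.findall('feature'))
            print('inherits from:', parent.get('name') if parent is not None else '-')
            print('extra features:', ', '.join(feats))

Comparing that against the host flags from vdsClient should show whether the
map itself changed or whether vdsm is simply reading it differently now.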
Would you care to apply