[Users] Host missing cpuFlags

Itamar Heim iheim at redhat.com
Fri May 4 13:50:00 UTC 2012


On 05/04/2012 03:16 PM, Nicholas Kesick wrote:
>
>
>  > Subject: Re: [Users] Host missing cpuFlags
>  > From: mburns at redhat.com
>  > To: iheim at redhat.com
>  > CC: cybertimber2000 at hotmail.com; users at ovirt.org;
> vdsm-devel at lists.fedorahosted.org
>  > Date: Fri, 4 May 2012 07:52:30 -0400
>  >
>  > On Fri, 2012-05-04 at 14:34 +0300, Itamar Heim wrote:
>  > > On 05/04/2012 02:26 PM, Nicholas Kesick wrote:
>  > > >
>  > > >
>  > > > > Date: Fri, 4 May 2012 08:14:14 +0300
>  > > > > From: iheim at redhat.com
>  > > > > To: cybertimber2000 at hotmail.com
>  > > > > CC: users at ovirt.org
>  > > > > Subject: Re: [Users] Host missing cpuFlags
>  > > > >
>  > > > > On 05/04/2012 06:49 AM, Nicholas Kesick wrote:
>  > > > > > I managed to get a host successfully added into oVirt Manager
> (Fedora16
>  > > > > > minimum install, then used the wiki RPM install method), but
> the last
>  > > > > > event reports "Host <hostname> moved to Non-operational state
> as host
>  > > > > > does not meet the cluster's minimum CPU level. Missing CPU
> features:
>  > > > > > CpuFlags"
>  > > > > >
>  > > > > > Can anyone shine some light on the problem? The CPU does support
>  > > > > > virtualization... and as far as I can tell from cat
>  > > > > > /proc/cpuinfo it does have cpu flags.
>  > > > > > flags : fpu vme de pse tsc msr *pae* mce cx8 apic sep mtrr
> pge mca cmov
>  > > > > > pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe
> syscall nx lm
>  > > > > > constant_tsc pebs bts nopl pni dtes64 monitor ds_cpl *vmx*
> est cid cx16
>  > > > > > xtpr pdcm lahf_lm tpr_shadow
>  > > > >
>  > > > > what is the cpu level of the cluster?
>  > > > > what cluster compatibility level?
>  > > > > what does vdsClient -s 0 getVdsCaps show for cpu flags?
>  > > > I didn't even know there was a setting for that until now. This
> probably
>  > > > explains it.
>  > > > CPU Level of cluster: Intel Conroe Family
>  > > > Cluster Compatibility Level: 3.0 (?)
>  > > > Output of vdsClient -s 0 getVdsCaps:
>  > > > vdsClient -s 0 getVdsCaps
>  > > > HBAInventory = {'iSCSI': [{'InitiatorName':
>  > > > 'iqn.1994-05.com.redhat:238a26703858'}], 'FC': []}
>  > > > ISCSIInitiatorName = iqn.1994-05.com.redhat:238a26703858
>  > > > bondings = {'bond4': {'hwaddr': '00:00:00:00:00:00', 'cfg': {},
>  > > > 'netmask': '', 'addr': '', 'slaves': []}, 'bond0': {'hwaddr':
>  > > > '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves':
>  > > > []}, 'bond1': {'hwaddr': '00:00:00:00:00:00', 'cfg': {},
> 'netmask': '',
>  > > > 'addr': '', 'slaves': []}, 'bond2': {'hwaddr': '00:00:00:00:00:00',
>  > > > 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []}, 'bond3':
> {'hwaddr':
>  > > > '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '',
> 'slaves': []}}
>  > > > clusterLevels = ['3.0']
>  > > > cpuCores = 2
>  > > > cpuFlags =
>  > > >
> fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,pebs,bts,nopl,pni,dtes64,monitor,ds_cpl,vmx,est,cid,cx16,xtpr,pdcm,lahf_lm,tpr_shadow,model_486,model_pentium,model_pentium2,model_pentium3,model_pentiumpro,model_qemu32,model_coreduo,model_Opteron_G1
>  > >
>  > > either libvirt isn't reporting this, or there is a vdsm bug
> filtering it
>  > > erroneously.
>  > > cc-ing vdsm-devel
>  >
>  > I may be completely off here, but isn't this the NX bit problem? You
>  > need to have the NX bit set in BIOS for the cpu_family flag to be set
>  > correctly.
>  >
>  > Mike
>  >
>
> Mike,
> I see NX in the cpuFlags list: ...syscall,nx,lm,... is that what you are
> referring to?
>
> Though I think it boils down to the fact that what I'm running it on is
> a Pentium D, which is the NetBurst architecture. NetBurst predates
> Conroe, so it would fail to meet the cluster's minimum CPU level of Conroe.

indeed - no NX issue. But yes, this CPU seems too old for the current 
families. You can tweak the definitions to something lower than Conroe, 
I guess, via the config.
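For context, a rough sketch of why the host fails the check (this is not the engine's actual matching logic, just an illustration): getVdsCaps reports the supported CPU models as model_* entries in cpuFlags, and the host must report a model entry matching the cluster's CPU level. In the output above, the highest Intel model is model_coreduo; there is no model_Conroe, so an "Intel Conroe Family" cluster rejects the host. The `check_cpu_level` helper below is hypothetical:

```shell
# Hypothetical helper: does the comma-separated cpuFlags string contain
# the model_* flag required by the cluster's CPU level?
check_cpu_level() {
  flags="$1"; required="$2"
  case ",$flags," in
    *",$required,"*) echo ok ;;
    *) echo missing ;;
  esac
}

# Flags from the getVdsCaps output in this thread (trimmed): note that
# model_coreduo is present but model_Conroe is not.
flags="vmx,nx,model_pentium3,model_qemu32,model_coreduo,model_Opteron_G1"
check_cpu_level "$flags" model_coreduo   # -> ok
check_cpu_level "$flags" model_Conroe    # -> missing
```

So dropping the cluster CPU level to a family the host does report (or adjusting the family definitions in the engine config, as suggested above) would let this host pass.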
