So I tried creating a new cluster at the 4.2 compatibility level and moving one of my EPYC hosts into it. I then updated the host to 4.3, switched the cluster version to 4.3, and set the cluster CPU to the new AMD EPYC IBPB SSBD type (I also tried plain AMD EPYC). The host still fails to become operational, with the engine complaining that 'CPU type is not supported in this cluster compatibility version or is not supported at all'.
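
(For reference, I made those changes through the web UI; if I understand the REST API correctly, the equivalent cluster update would be roughly the call below, with the engine address, admin password and cluster UUID as placeholders:)

curl -s -k -u admin@internal:PASSWORD \
  -X PUT -H 'Content-Type: application/xml' \
  -d '<cluster><cpu><type>AMD EPYC IBPB SSBD</type></cpu><version><major>4</major><minor>3</minor></version></cluster>' \
  https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_UUID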

I tried a few iterations of updating, moving, activating, reinstalling, etc., but none of them seem to work.

The hosts are running CentOS Linux release 7.6.1810 (Core), and all packages are up to date.

I checked my CPU flags, and I can't see anything missing.

cat /proc/cpuinfo | head -n 26
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 23
model           : 1
model name      : AMD EPYC 7551P 32-Core Processor
stepping        : 2
microcode       : 0x8001227
cpu MHz         : 2000.000
cache size      : 512 KB
physical id     : 0
siblings        : 64
core id         : 0
cpu cores       : 32
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb hw_pstate sme retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca
bogomips        : 3992.39
TLB size        : 2560 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 43 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]
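
One more thing I can still check is what libvirt and vdsm themselves report on the host; if I'm reading the docs right, something like the commands below should show whether the EPYC models and flags are visible to them (assuming read-only virsh works without credentials on an oVirt host):

virsh -r domcapabilities | grep -i epyc
vdsm-client Host getCapabilities | grep -i cpuflags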

I just can't seem to figure out why 4.3 does not like these EPYC systems. We have Skylake and Sandy Bridge clusters that are fully on 4.3 now with the SSBD variant CPU types.

I'm at a loss as to what to try next. The only thing I can think of is to reinstall the host OS or try ovirt-node, but I would like to avoid that if I can.

Thank you for all the help so far.

Regards,

Ryan

On Fri, Feb 8, 2019 at 9:07 AM Ryan Bullock <rrb3942@gmail.com> wrote:
This procedure worked for our HE, which is on Skylake.

I think I have a process that should work for moving our EPYC clusters to 4.3. If it works this weekend I will post it for others.

Ryan

On Thu, Feb 7, 2019 at 12:06 PM Simone Tiraboschi <stirabos@redhat.com> wrote:


On Thu, Feb 7, 2019 at 7:15 PM Juhani Rautiainen <juhani.rautiainen@gmail.com> wrote:


On Thu, Feb 7, 2019 at 6:52 PM Simone Tiraboschi <stirabos@redhat.com> wrote:


For a hosted-engine cluster we have a manual workaround procedure documented here:
 
I managed to upgrade my EPYC cluster with those steps. I made a new cluster with the EPYC CPU type, already at the 4.3 level. Starting the engine in the new cluster complained about not finding a VM with that UUID, but the engine still started fine. When all the nodes were in the new cluster, I still couldn't upgrade the old cluster because the engine complained that a couple of VMs couldn't be upgraded (something to do with a custom compatibility level), so I moved them to the new cluster too; I just had to change their networks to the management network for the move. After that I could upgrade the old cluster to EPYC and the 4.3 level. Then I just moved the VMs and nodes back (same steps, in reverse). After that you can remove the extra cluster and raise the data center to the 4.3 level.
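
If you want to script that last step, raising the data center should also be doable through the REST API with something like the call below (engine address, admin password and data center UUID are placeholders):

curl -s -k -u admin@internal:PASSWORD \
  -X PUT -H 'Content-Type: application/xml' \
  -d '<data_center><version><major>4</major><minor>3</minor></version></data_center>' \
  https://engine.example.com/ovirt-engine/api/datacenters/DC_UUID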

-Juhani 


Thanks for the report!
We definitely have to figure out a better upgrade flow when a cluster CPU change is required/advised.