Re: EPYC CPU not being detected correctly on cluster
by Lucia Jelinkova
Hi Vinícius,
I am glad you've managed to solve it and thanks for sharing your findings.
Lucia
On Wed, Nov 25, 2020 at 9:07 PM Vinícius Ferrão <ferrao(a)versatushpc.com.br>
wrote:
> Lucia, I ended up figuring it out.
>
>
>
> The culprit was that I was pinned to the wrong virt module; after running
> these commands the CPU was properly detected:
>
>
>
> # dnf module reset virt
> # dnf module enable virt:8.3
> # dnf upgrade --nobest
>
>
>
> I think virt was in 8.2.
>
>
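> For what it's worth, the active stream can be double-checked before and
> after with plain dnf:
>
> # dnf module list virt
>
> The enabled stream is the one marked with [e].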
>
> Thank you!
>
>
>
> *From:* Lucia Jelinkova <ljelinko(a)redhat.com>
> *Sent:* Monday, November 23, 2020 6:25 AM
> *To:* Vinícius Ferrão <ferrao(a)versatushpc.com.br>
> *Cc:* users <users(a)ovirt.org>
> *Subject:* Re: [ovirt-users] EPYC CPU not being detected correctly on
> cluster
>
>
>
> Hi Vinícius,
>
>
>
> Thank you for the libvirt output - libvirt marked the EPYC CPU as not
> usable. Let's query qemu to see why. You do not need an oVirt VM to do
> that, just any VM running on qemu, e.g. one created by Virtual Machine
> Manager, or you can follow the command from the answer here:
>
>
>
>
> https://unix.stackexchange.com/questions/309788/how-to-create-a-vm-from-s...
>
>
>
> Then you can use the following commands:
>
> sudo virsh list --all
> sudo virsh qemu-monitor-command [your-vm's-name] --pretty '{"execute":"query-cpu-definitions"}'
>
>
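> If jq is installed, the EPYC entries can be picked out of the (otherwise
> very long) result directly:
>
> sudo virsh qemu-monitor-command [your-vm's-name] --pretty \
> '{"execute":"query-cpu-definitions"}' | jq '.return[] | select(.name | test("EPYC"))'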
>
> I do not know if this could be related to the UEFI firmware; let's check
> the qemu output first.
>
>
>
> Regards,
>
>
>
> Lucia
>
>
>
>
>
> On Fri, Nov 20, 2020 at 4:07 PM Vinícius Ferrão <ferrao(a)versatushpc.com.br>
> wrote:
>
> Hi Lucia,
>
>
>
> I had to create a user for virsh:
>
> # saslpasswd2 -a libvirt test
>
> Password:
>
> Again (for verification):
>
>
>
> With that in mind, here is the output:
>
>
>
> <domainCapabilities>
>   <path>/usr/libexec/qemu-kvm</path>
>   <domain>kvm</domain>
>   <machine>pc-i440fx-rhel7.6.0</machine>
>   <arch>x86_64</arch>
>   <vcpu max='240'/>
>   <iothreads supported='yes'/>
>   <os supported='yes'>
>     <enum name='firmware'/>
>     <loader supported='yes'>
>       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
>       <enum name='type'>
>         <value>rom</value>
>         <value>pflash</value>
>       </enum>
>       <enum name='readonly'>
>         <value>yes</value>
>         <value>no</value>
>       </enum>
>       <enum name='secure'>
>         <value>no</value>
>       </enum>
>     </loader>
>   </os>
>   <cpu>
>     <mode name='host-passthrough' supported='yes'/>
>     <mode name='host-model' supported='yes'>
>       <model fallback='forbid'>EPYC-IBPB</model>
>       <vendor>AMD</vendor>
>       <feature policy='require' name='x2apic'/>
>       <feature policy='require' name='tsc-deadline'/>
>       <feature policy='require' name='hypervisor'/>
>       <feature policy='require' name='tsc_adjust'/>
>       <feature policy='require' name='clwb'/>
>       <feature policy='require' name='umip'/>
>       <feature policy='require' name='spec-ctrl'/>
>       <feature policy='require' name='stibp'/>
>       <feature policy='require' name='arch-capabilities'/>
>       <feature policy='require' name='ssbd'/>
>       <feature policy='require' name='xsaves'/>
>       <feature policy='require' name='cmp_legacy'/>
>       <feature policy='require' name='perfctr_core'/>
>       <feature policy='require' name='invtsc'/>
>       <feature policy='require' name='clzero'/>
>       <feature policy='require' name='wbnoinvd'/>
>       <feature policy='require' name='amd-ssbd'/>
>       <feature policy='require' name='virt-ssbd'/>
>       <feature policy='require' name='rdctl-no'/>
>       <feature policy='require' name='skip-l1dfl-vmentry'/>
>       <feature policy='require' name='mds-no'/>
>       <feature policy='require' name='pschange-mc-no'/>
>       <feature policy='disable' name='monitor'/>
>       <feature policy='disable' name='svm'/>
>     </mode>
>     <mode name='custom' supported='yes'>
>       <model usable='yes'>qemu64</model>
>       <model usable='yes'>qemu32</model>
>       <model usable='no'>phenom</model>
>       <model usable='yes'>pentium3</model>
>       <model usable='yes'>pentium2</model>
>       <model usable='yes'>pentium</model>
>       <model usable='no'>n270</model>
>       <model usable='yes'>kvm64</model>
>       <model usable='yes'>kvm32</model>
>       <model usable='no'>coreduo</model>
>       <model usable='no'>core2duo</model>
>       <model usable='no'>athlon</model>
>       <model usable='yes'>Westmere-IBRS</model>
>       <model usable='yes'>Westmere</model>
>       <model usable='no'>Skylake-Server-noTSX-IBRS</model>
>       <model usable='no'>Skylake-Server-IBRS</model>
>       <model usable='no'>Skylake-Server</model>
>       <model usable='no'>Skylake-Client-noTSX-IBRS</model>
>       <model usable='no'>Skylake-Client-IBRS</model>
>       <model usable='no'>Skylake-Client</model>
>       <model usable='yes'>SandyBridge-IBRS</model>
>       <model usable='yes'>SandyBridge</model>
>       <model usable='yes'>Penryn</model>
>       <model usable='no'>Opteron_G5</model>
>       <model usable='no'>Opteron_G4</model>
>       <model usable='yes'>Opteron_G3</model>
>       <model usable='yes'>Opteron_G2</model>
>       <model usable='yes'>Opteron_G1</model>
>       <model usable='yes'>Nehalem-IBRS</model>
>       <model usable='yes'>Nehalem</model>
>       <model usable='no'>IvyBridge-IBRS</model>
>       <model usable='no'>IvyBridge</model>
>       <model usable='no'>Icelake-Server-noTSX</model>
>       <model usable='no'>Icelake-Server</model>
>       <model usable='no'>Icelake-Client-noTSX</model>
>       <model usable='no'>Icelake-Client</model>
>       <model usable='no'>Haswell-noTSX-IBRS</model>
>       <model usable='no'>Haswell-noTSX</model>
>       <model usable='no'>Haswell-IBRS</model>
>       <model usable='no'>Haswell</model>
>       <model usable='no'>EPYC-IBPB</model>
>       <model usable='no'>EPYC</model>
>       <model usable='no'>Dhyana</model>
>       <model usable='no'>Cooperlake</model>
>       <model usable='yes'>Conroe</model>
>       <model usable='no'>Cascadelake-Server-noTSX</model>
>       <model usable='no'>Cascadelake-Server</model>
>       <model usable='no'>Broadwell-noTSX-IBRS</model>
>       <model usable='no'>Broadwell-noTSX</model>
>       <model usable='no'>Broadwell-IBRS</model>
>       <model usable='no'>Broadwell</model>
>       <model usable='yes'>486</model>
>     </mode>
>   </cpu>
>   <devices>
>     <disk supported='yes'>
>       <enum name='diskDevice'>
>         <value>disk</value>
>         <value>cdrom</value>
>         <value>floppy</value>
>         <value>lun</value>
>       </enum>
>       <enum name='bus'>
>         <value>ide</value>
>         <value>fdc</value>
>         <value>scsi</value>
>         <value>virtio</value>
>         <value>usb</value>
>         <value>sata</value>
>       </enum>
>       <enum name='model'>
>         <value>virtio</value>
>         <value>virtio-transitional</value>
>         <value>virtio-non-transitional</value>
>       </enum>
>     </disk>
>     <graphics supported='yes'>
>       <enum name='type'>
>         <value>sdl</value>
>         <value>vnc</value>
>         <value>spice</value>
>       </enum>
>     </graphics>
>     <video supported='yes'>
>       <enum name='modelType'>
>         <value>vga</value>
>         <value>cirrus</value>
>         <value>qxl</value>
>         <value>virtio</value>
>         <value>none</value>
>         <value>bochs</value>
>         <value>ramfb</value>
>       </enum>
>     </video>
>     <hostdev supported='yes'>
>       <enum name='mode'>
>         <value>subsystem</value>
>       </enum>
>       <enum name='startupPolicy'>
>         <value>default</value>
>         <value>mandatory</value>
>         <value>requisite</value>
>         <value>optional</value>
>       </enum>
>       <enum name='subsysType'>
>         <value>usb</value>
>         <value>pci</value>
>         <value>scsi</value>
>       </enum>
>       <enum name='capsType'/>
>       <enum name='pciBackend'>
>         <value>default</value>
>         <value>vfio</value>
>       </enum>
>     </hostdev>
>     <rng supported='yes'>
>       <enum name='model'>
>         <value>virtio</value>
>         <value>virtio-transitional</value>
>         <value>virtio-non-transitional</value>
>       </enum>
>       <enum name='backendModel'>
>         <value>random</value>
>         <value>egd</value>
>       </enum>
>     </rng>
>   </devices>
>   <features>
>     <gic supported='no'/>
>     <vmcoreinfo supported='yes'/>
>     <genid supported='yes'/>
>     <backingStoreInput supported='yes'/>
>     <backup supported='no'/>
>     <sev supported='yes'>
>       <cbitpos>47</cbitpos>
>       <reducedPhysBits>1</reducedPhysBits>
>     </sev>
>   </features>
> </domainCapabilities>
>
>
>
> Regarding the last two commands, I don’t have any VM running, since I
> cannot start anything on the engine.
>
>
>
> I’m starting to suspect that this may be something in the UEFI Firmware.
>
>
>
> Any thoughts?
>
>
>
> Thanks,
>
>
>
> *From:* Lucia Jelinkova <ljelinko(a)redhat.com>
> *Sent:* Friday, November 20, 2020 5:30 AM
> *To:* Vinícius Ferrão <ferrao(a)versatushpc.com.br>
> *Cc:* users <users(a)ovirt.org>
> *Subject:* Re: [ovirt-users] EPYC CPU not being detected correctly on
> cluster
>
>
>
> Hi,
>
>
>
> oVirt CPU detection depends on the CPU models known to libvirt (which in
> turn depend on qemu). Could you please run the following command to see
> what libvirt reports?
>
>
>
> virsh domcapabilities
>
>
>
> That should give you the list of CPUs known to libvirt with a usability
> flag for each CPU.
>
>
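> Since the output is long, the per-model usability flags can be skimmed
> with a plain grep over that XML:
>
> virsh domcapabilities | grep "model usable="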
>
> If you find out that the CPU is not usable by libvirt, you might want to
> dig deeper by querying qemu directly.
>
>
>
> Locate any VM running on the system with:
>
> sudo virsh list --all
>
>
>
> Use the name of a VM in the following command:
>
> sudo virsh qemu-monitor-command [your-vm's-name] --pretty '{"execute":"query-cpu-definitions"}'
>
>
>
> That will give you the list of all CPUs supported by qemu, and for each
> CPU the features that are not available on your system.
>
>
>
> Regards,
>
>
>
> Lucia
>
>
>
> On Thu, Nov 19, 2020 at 9:38 PM Vinícius Ferrão via Users <users(a)ovirt.org>
> wrote:
>
> Hi
>
>
>
> I have a strange issue with two hosts (not using the hypervisor image) with
> EPYC CPUs; on the engine I get this message:
>
>
>
> The host CPU does not match the Cluster CPU Type and is running in a
> degraded mode. It is missing the following CPU flags: model_EPYC. Please
> update the host CPU microcode or change the Cluster CPU Type.
>
>
>
> But it is an EPYC CPU and the firmware is updated to the latest version,
> yet for some reason oVirt does not like it.
>
>
>
> Here’s the relevant output from VDSM:
>
> "cpuCores": "128",
> "cpuFlags": "ibs,vme,abm,sep,ssse3,perfctr_core,sse4_2,skip-l1dfl-vmentry,cx16,pae,misalignsse,avx2,smap,movbe,vgif,rdctl-no,extapic,clflushopt,de,sse4_1,xsaveerptr,perfctr_llc,fma,mca,sse,rdtscp,monitor,umip,mwaitx,cr8_legacy,mtrr,stibp,bmi2,pclmulqdq,amd-ssbd,lbrv,pdpe1gb,constant_tsc,vmmcall,f16c,ibrs,fsgsbase,invtsc,nopl,lm,3dnowprefetch,smca,ht,tsc_adjust,popcnt,cpb,bmi1,mmx,arat,aperfmperf,bpext,cqm_occup_llc,virt-ssbd,tce,pse,xsave,xgetbv1,topoext,sha_ni,amd_ppin,rdrand,cpuid,tsc_scale,extd_apicid,cqm,rep_good,tsc,sse4a,flushbyasid,pschange-mc-no,mds-no,ibpb,smep,clflush,tsc-deadline,fxsr,pat,avx,pfthreshold,v_vmsave_vmload,osvw,xsavec,cdp_l3,clzero,svm_lock,nonstop_tsc,adx,hw_pstate,spec-ctrl,arch-capabilities,xsaveopt,skinit,rdt_a,svm,rdpid,lahf_lm,fpu,rdseed,fxsr_opt,sse2,nrip_save,vmcb_clean,sme,cat_l3,cqm_mbm_local,irperf,overflow_recov,avic,mce,mmxext,msr,cx8,hypervisor,wdt,mba,nx,decodeassists,cmp_legacy,x2apic,perfctr_nb,succor,pni,xsaves,clwb,cqm_llc,syscall,apic,pge,npt,pse36,cmov,ssbd,pausefilter,sev,aes,wbnoinvd,cqm_mbm_total,spec_ctrl,model_qemu32,model_Opteron_G3,model_Nehalem-IBRS,model_qemu64,model_Conroe,model_kvm64,model_Penryn,model_SandyBridge,model_pentium,model_pentium2,model_kvm32,model_Nehalem,model_Opteron_G2,model_pentium3,model_Opteron_G1,model_SandyBridge-IBRS,model_486,model_Westmere-IBRS,model_Westmere",
> "cpuModel": "AMD EPYC 7H12 64-Core Processor",
> "cpuSockets": "2",
> "cpuSpeed": "3293.405",
> "cpuThreads": "256",
>
>
>
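> In case it is useful, the model_* flags can also be re-read on the host
> itself (assuming vdsm-client is installed there):
>
> # vdsm-client Host getCapabilities | grep -o 'model_[A-Za-z0-9_.-]*' | sort -u
>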
> Any idea why, or what to do to fix it?
>
>
>
> Thanks,
>
>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WP6XL6ODTLJ...
>
>
Connecting OVN network to physical network
by Alex K
Hi all,
I have created some logical switches in the OVN network provider (the
default provider that is configured during engine-setup). This is working
fine.
fine.
I would like, though, to be able to connect a logical OVN switch to a
physical network that the hosts have access to, so as to give guest VMs
access to this network. When I connect such a switch to a physical network,
the guest VMs do not get access to the physical network (screenshot
attached).
It seems that this should be supported by oVirt. Am I missing something?
P.S. I know I can provide such connectivity through standard networking,
though I was wondering if this can be done with OVN.
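In case it helps, my understanding is that OVN wires a logical switch to a
physical network through a "localnet" port plus a bridge mapping on every
host; roughly like this (the switch, network and bridge names below are
placeholders):
ovn-nbctl lsp-add ls0 ls0-localnet
ovn-nbctl lsp-set-type ls0-localnet localnet
ovn-nbctl lsp-set-addresses ls0-localnet unknown
ovn-nbctl lsp-set-options ls0-localnet network_name=physnet
# and on every host, map the physical network name to an OVS bridge:
ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet:br-phys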
Many thanx,
Alex
Re: Ovirt 4 2 NIC's
by Dominik Holler
https://www.ovirt.org/documentation/administration_guide/#Designate_a_Spe...
documents how to use a display network.
A possible flow could be like this (a REST API sketch of the same steps
follows the list):
1. Create a new logical network, use a VLAN if you do not have a free NIC
on every host
2. Attach the new logical network in
Compute > Hosts > hostname > Network Interfaces > Setup Host Networks
to a free NIC (or an already used one if the network has a VLAN) on every
host.
This new network attachment should have an IP address.
3. Ensure that your network infrastructure allows your users to reach the
display network.
4. Assign the role "Display Network" in
Compute > Clusters > ClusterName > Logical Networks > Manage Networks
to the new network.
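Steps 1 and 4 can also be scripted against the REST API; a rough sketch
(the engine URL, password, VLAN id and the cluster/network IDs are
placeholders):
# create the logical network in the data center
curl -ks -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
  -d '<network><name>display</name><data_center><name>Default</name></data_center><vlan id="100"/></network>' \
  https://engine.example.com/ovirt-engine/api/networks
# assign the display role to the network on the cluster
curl -ks -u admin@internal:PASSWORD -H 'Content-Type: application/xml' -X PUT \
  -d '<network><usages><usage>display</usage></usages></network>' \
  https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID/networks/NETWORK_ID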
> On Fri, Nov 20, 2020 at 06:01, Dominik Holler (
> dholler(a)redhat.com) wrote:
>
>>
>>
>> On Wed, Nov 18, 2020 at 7:30 PM Facundo Badaracco <varekoarfa(a)gmail.com>
>> wrote:
>>
>>> Hi everyone!
>>>
>>> Hope someone can help me with this..
>>>
>>> I have 3 servers with CentOS 8 and oVirt 4 installed. Each server has 2
>>> NICs.
>>> Server A = HE (HA)
>>> Nic1= 192.169.2.24 Nic2=no ip
>>> Server B = HE (HA)
>>> Nic1= 192.169.2.25 Nic2=no ip
>>> Server C = simply a host.
>>> Nic1= 192.169.2.26 Nic2=no ip
>>>
>>> How can I configure the second NIC in each server so that clients can
>>> use it to connect to the VMs? I want one NIC for management, the other
>>> for connections.
>>>
>>
>>
>> If the clients want to access network services provided by the VMs,
>> you could create an additional logical network in oVirt, and attach it in
>> Compute > Hosts > hostname > Network Interfaces > Setup Host Networks
>> to Nic2 for every host. This new network attachment should have no IP
>> address.
>> After that, you could reference this new logical network in your VMs
>> virtual NICs.
>> If possible, I would recommend using a VLAN to provide isolation to the
>> management network.
>>
>> If the clients connect via SPICE or VNC, you would require a
>> dedicated display network.
>>
>>
>>
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PL7WFXC4UZB...
>>>
>>
Ovirt 4.4.3 Hyper-converged Deployment with GlusterFS
by rcpoling@gmail.com
Trying to deploy a 3-node hyperconverged oVirt cluster with Gluster as the backend storage. I have tried this against the three nodes that I have, as well as with just a single node to get a working baseline. The failure that I keep getting stuck on is:
TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:59
failed: [ovirt01-storage.poling.local] (item={'key': 'gluster_vg_sdb', 'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => {"ansible_loop_var": "item", "changed": false, "err": " Device /dev/sdb excluded by a filter.\n", "item": {"key": "gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}]}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
I have verified my DNS records and have reverse DNS set up. The front-end and storage networks are physically separated and are 10Gb connections. From the reading I have done, this seems to point to a possible multipath issue, but I do see multipath configs being set in the Gluster wizard, and when I check after the wizard fails out, the mpath does look correct.
[root@ovirt01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 446.1G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 445.1G 0 part
├─onn-pool00_tmeta 253:0 0 1G 0 lvm
│ └─onn-pool00-tpool 253:2 0 351.7G 0 lvm
│ ├─onn-ovirt--node--ng--4.4.3--0.20201110.0+1 253:3 0 314.7G 0 lvm /
│ ├─onn-pool00 253:5 0 351.7G 1 lvm
│ ├─onn-var_log_audit 253:6 0 2G 0 lvm /var/log/audit
│ ├─onn-var_log 253:7 0 8G 0 lvm /var/log
│ ├─onn-var_crash 253:8 0 10G 0 lvm /var/crash
│ ├─onn-var 253:9 0 15G 0 lvm /var
│ ├─onn-tmp 253:10 0 1G 0 lvm /tmp
│ ├─onn-home 253:11 0 1G 0 lvm /home
│ └─onn-ovirt--node--ng--4.4.2--0.20200918.0+1 253:12 0 314.7G 0 lvm
├─onn-pool00_tdata 253:1 0 351.7G 0 lvm
│ └─onn-pool00-tpool 253:2 0 351.7G 0 lvm
│ ├─onn-ovirt--node--ng--4.4.3--0.20201110.0+1 253:3 0 314.7G 0 lvm /
│ ├─onn-pool00 253:5 0 351.7G 1 lvm
│ ├─onn-var_log_audit 253:6 0 2G 0 lvm /var/log/audit
│ ├─onn-var_log 253:7 0 8G 0 lvm /var/log
│ ├─onn-var_crash 253:8 0 10G 0 lvm /var/crash
│ ├─onn-var 253:9 0 15G 0 lvm /var
│ ├─onn-tmp 253:10 0 1G 0 lvm /tmp
│ ├─onn-home 253:11 0 1G 0 lvm /home
│ └─onn-ovirt--node--ng--4.4.2--0.20200918.0+1 253:12 0 314.7G 0 lvm
└─onn-swap 253:4 0 4G 0 lvm [SWAP]
sdb 8:16 0 5.5T 0 disk
└─sdb1 8:17 0 5.5T 0 part /sdb
Looking for any pointers on what else I should be looking at to get gluster to deploy successfully. Thanks ~ R
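One observation, in case it helps anyone else hitting this: the lsblk output above shows /dev/sdb already carrying a partition (sdb1) mounted at /sdb, and LVM excludes devices with an existing partition table, which matches the "Device /dev/sdb excluded by a filter" error. A minimal cleanup sketch, assuming /dev/sdb holds nothing that is still needed (wipefs destroys the existing signatures):
umount /sdb                # and remove any matching /etc/fstab entry
wipefs -a /dev/sdb         # clear the partition table and filesystem signatures
pvcreate --test /dev/sdb   # dry run; the device should no longer be filtered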
Replacing ovirt certificates issue
by Alex K
Hi all,
I am trying to replace the oVirt certificate on oVirt 4.3 following this:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/...
I am doing the following:
I have engine FQDN: manager.lab.local
1. Create root CA private key:
openssl genrsa -des3 -out root.key 2048
2. Generate root certificate: (enter passphrase of root key)
openssl req -x509 -new -nodes -key root.key -sha256 -days 3650 -out root.pem
cp root.pem /tmp
3. Create key and CSR for engine:
openssl genrsa -out manager.lab.local.key 2048
openssl req -new -out manager.lab.local.csr -key manager.lab.local.key
4. Generate a certificate for engine and sign with the root CA key:
openssl x509 -req -in manager.lab.local.csr \
-CA root.pem \
-CAkey root.key \
-CAcreateserial \
-out manager.lab.local.crt \
-days 3650 \
-sha256 \
-extensions v3_req
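Note that -extensions v3_req is effectively a no-op here: no extensions file is given, so the signed certificate ends up without a subjectAltName, which recent clients require. A variant that embeds the SAN (assuming bash, and that the engine FQDN is the only name needed):
openssl x509 -req -in manager.lab.local.csr \
-CA root.pem \
-CAkey root.key \
-CAcreateserial \
-out manager.lab.local.crt \
-days 3650 \
-sha256 \
-extfile <(printf 'subjectAltName=DNS:manager.lab.local')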
5. Verify the trust chain and check the certificate details:
openssl verify -CAfile root.pem manager.lab.local.crt
openssl x509 -text -noout -in manager.lab.local.crt | head -15
6. Generate a P12 container: (with empty password)
openssl pkcs12 -export -out /tmp/apache.p12 \
-inkey manager.lab.local.key \
-in manager.lab.local.crt
7. Export key and cert:
openssl pkcs12 -in apache.p12 -nocerts -nodes > /tmp/apache.key
openssl pkcs12 -in apache.p12 -nokeys > /tmp/apache.cer
From the above steps we should have the following:
/tmp/root.pem
/tmp/apache.p12
/tmp/apache.key
/tmp/apache.cer
8. Place the certificates:
hosted-engine --set-maintenance --mode=global
cp -p /etc/pki/ovirt-engine/keys/apache.p12 /tmp/apache.p12.bck
cp /tmp/apache.p12 /etc/pki/ovirt-engine/keys/apache.p12
cp /tmp/root.pem /etc/pki/ca-trust/source/anchors
update-ca-trust
rm /etc/pki/ovirt-engine/apache-ca.pem
cp /tmp/root.pem /etc/pki/ovirt-engine/apache-ca.pem
Backup existing key and cert:
cp /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache.key.nopass.bck
cp /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/apache.cer.bck
cp /tmp/apache.key /etc/pki/ovirt-engine/keys/apache.key.nopass
cp /tmp/apache.cer /etc/pki/ovirt-engine/certs/apache.cer
chown root:ovirt /etc/pki/ovirt-engine/keys/apache.key.nopass
chmod 640 /etc/pki/ovirt-engine/keys/apache.key.nopass
systemctl restart httpd.service
9. Create a new trust store configuration file:
vi /etc/ovirt-engine/engine.conf.d/99-custom-truststore.conf
ENGINE_HTTPS_PKI_TRUST_STORE="/etc/pki/java/cacerts"
ENGINE_HTTPS_PKI_TRUST_STORE_PASSWORD=""
10. Edit /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf :
vi /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/apache.cer
SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass
11. Edit /etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf:
vi /etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf
# Key file for SSL connections
ssl_key_file = /etc/pki/ovirt-engine/keys/apache.key.nopass
# Certificate file for SSL connections
ssl_cert_file = /etc/pki/ovirt-engine/certs/apache.cer
12. Import the certificate into the system-wide Java trust store:
update-ca-trust extract
keytool -list -alias ovirt -keystore /etc/pki/java/cacerts
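If the keytool -list above does not find the alias, the root CA can be imported into the store explicitly (the alias and the default cacerts password "changeit" are assumptions):
keytool -importcert -trustcacerts -alias ovirt \
-file /tmp/root.pem -keystore /etc/pki/java/cacerts -storepass changeit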
13. Restart services:
systemctl restart httpd.service
systemctl restart ovirt-provider-ovn.service
systemctl restart ovirt-imageio-proxy
systemctl restart ovirt-websocket-proxy
systemctl restart ovirt-engine.service
Following the above I get at engine GUI:
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to find
valid certification path to requested target
I have tried also to run engine-setup in case it could fix anything (it
renewed the cert due to missing subjectAltName), and the above error still
persists.
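For what it's worth, the chain actually served on port 443 can be checked from a client with:
echo | openssl s_client -connect manager.lab.local:443 -CAfile /tmp/root.pem 2>/dev/null | grep -E 'subject=|issuer=|Verify return code'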
I have tried several other suggestions from similar issues reported on this
list, without any luck.
I have run out of ideas. Am I missing anything?
Thanx for any suggestions.
Alex
4 years