
Hey folks,

new hardware arrived \o/ Installation as HCI was bliss, Gluster and all. Deploying the hosted engine also worked until it came to the very last point: the health checks, which failed.

vdsm.log:

--- 8< ---
2019-11-15 09:54:02,588+0100 INFO (jsonrpc/4) [api.virt] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Down', 'exitMessage': 'the CPU is incompatible with host CPU: Host CPU does not provide required features: virt-ssbd', 'statusTime': '4344202670', 'vmId': '50ac6250-4c24-40fd-894c-bc248c4f6fa2', 'exitReason': 1, 'exitCode': 1}]} from=::1,37492, vmId=50ac6250-4c24-40fd-894c-bc248c4f6fa2 (api:54)
--- 8< ---

But:

--- 8< ---
[root@node01 vdsm]# cat /proc/cpuinfo | grep flags | tail -n 1 | grep -i --color ssb
flags : fpu vme [...] ssbd [...]
--- 8< ---

The CPU is a:

--- 8< ---
processor  : 1
vendor_id  : AuthenticAMD
cpu family : 23
model      : 49
model name : AMD EPYC 7282 16-Core Processor
--- 8< ---

Anyone willing to shed some light on this issue?

Thanks in advance!
-Chris.

--
Christian Reiss - email@christian-reiss.de      /"\  ASCII Ribbon
                  support@alpha-labs.net        \ /  Campaign
                                                 X   against HTML
WEB               alpha-labs.net                / \  in eMails

GPG Retrieval     https://gpg.christian-reiss.de
GPG ID            ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5

"It's better to reign in hell than to serve in heaven.",
John Milton, Paradise Lost.
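One detail worth noting here: the "ssbd" bit in /proc/cpuinfo and the "virt-ssbd" feature in the error are different tokens — as far as I can tell, virt-ssbd is the paravirtualized SSBD variant QEMU can offer to AMD guests, and it may be unavailable even when the host kernel shows ssbd. A minimal illustration of why grepping for one proves nothing about the other (the flags string is a shortened sample, not real cpuinfo output):

```shell
# "ssbd" (host kernel flag) vs "virt-ssbd" (QEMU guest feature) are
# different tokens; matching one does not imply the other exists.
flags="fpu vme ssbd ibrs ibpb stibp"   # shortened sample flags line
for f in ssbd virt-ssbd; do
  case " $flags " in
    *" $f "*) echo "$f: present" ;;
    *)        echo "$f: absent"  ;;
  esac
done
# prints:
#   ssbd: present
#   virt-ssbd: absent
```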

One addendum: the wild thing is that during deployment, up to stage 5, the VM is up and running:

--- 8< ---
[root@node01 ~]# virsh -r list --all
 Id    Name                           State
----------------------------------------------------
 1     HostedEngineLocal              running
--- 8< ---

It is only after clicking "Finish Deployment", during the final stages, that things break with the CPU flags.

On 15/11/2019 10:00, Christian Reiss wrote:

Mh, even more info: the XML of the hosted engine reports:

--- 8< ---
<cpu match="exact">
  <model>EPYC</model>
  <feature name="ibpb" policy="require"/>
  <feature name="virt-ssbd" policy="require"/>
  <topology cores="4" sockets="16" threads="1"/>
  <numa>
    <cell cpus="0,1,2,3" id="0" memory="16777216"/>
  </numa>
</cpu>
--- 8< ---

and in there is the culprit. Unfortunately I have no clue how to proceed from here.

On 15/11/2019 10:00, Christian Reiss wrote:
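For a quick experiment, the offending element could be filtered out of a copy of the domain XML before feeding it to libvirt by hand — diagnostic only, since vdsm regenerates the XML on every VM start, so the edit would not stick. A sketch over the fragment quoted above (the sample variable and sed pattern are assumptions based on that quote):

```shell
# Sample <cpu> fragment as vdsm emits it (one long line in the real log).
xml='<cpu match="exact"><model>EPYC</model><feature name="ibpb" policy="require"/><feature name="virt-ssbd" policy="require"/></cpu>'

# Strip only the feature libvirt cannot satisfy on this host,
# leaving the rest of the element untouched.
printf '%s\n' "$xml" | sed 's|<feature name="virt-ssbd" policy="require"/>||'
```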

Hi Christian,

I'd say the CPUs aren't perfectly uniform in terms of capabilities and microcode patches. As far as I know, "ssbd" is Speculative Store Bypass Disable; if your host doesn't have the microcode patches installed but your cluster definition requires them (based typically on the machine used to install the hosted engine), then you either need to lower the CPU baseline in the hosted-engine VM (and restart it), or patch the host so it delivers the mitigation.

All this Spectre stuff creates quite a bit of extra work, and I try to keep the mitigations out of my clusters, because I have no potential for hostile workloads on them (nor data worth exploiting). But it's clear that production environments with compliance requirements need to manage this carefully.
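The uniformity point is quick to check: each core reports its microcode revision in /proc/cpuinfo, and mixed revisions would hint at a partially applied update. A sketch over sample text (on a real host, read /proc/cpuinfo instead; the revision value here is made up):

```shell
# A healthy host reports exactly one distinct microcode revision across
# all cores. The here-string stands in for real /proc/cpuinfo content.
cpuinfo='microcode : 0x8301025
microcode : 0x8301025'

# Count distinct revision values; anything above 1 means mixed microcode.
n=$(printf '%s\n' "$cpuinfo" | grep '^microcode' | awk '{print $3}' | sort -u | wc -l)
[ "$n" -eq 1 ] && echo uniform || echo "mixed ($n revisions)"
```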

After re-reading...

The primary host determines the CPU base requirements. But in this case the baseline may be newer than what the canned hosted-engine image supports initially (before you update it).

So by deactivating the mitigations temporarily via a boot flag on the host, you can keep those features off the requirements list, allowing the installation to go through. Once the OS and patches on host and VM are in alignment, you can re-activate the mitigations and the baseline on the cluster and reboot the hosted engine to align everything (or just keep the cluster baseline low, if you don't care about the latest features and patches, or want several generations of hardware to work alongside each other).
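If you go the boot-flag route, the kernel itself will report whether the mitigation is actually off after the reboot; a quick check (a sketch — the sysfs file exists on kernels with the SSB reporting support, including backported EL kernels, and the fallback message covers older ones):

```shell
# The kernel's current handling of Speculative Store Bypass; after
# booting with the mitigation disabled this should read "Vulnerable".
f=/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
if [ -r "$f" ]; then
  cat "$f"
else
  echo "no SSB status file (kernel too old?)"
fi
```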

Hey,

sounds wild; I'm using the oVirt node installer, so microcode updates... /shrug. The grub flag seems promising. Anything you could help me with to avoid all that? I would be happy to see the oVirt engine today... somehow.

Cheers!
-Chris.

On 15/11/2019 12:47, thomas@hoberg.net wrote:

I have a somewhat similar issue, because I use J5005-based Atom boxes for oVirt in the home lab; these fail strangely during installation and are hair-tearingly slow while installing. So I move to a Kaby Lake desktop for the installation and then need to downgrade the baseline all the way to Nehalem (no IBRS, SSBD, MDS, etc.) to enable live migration to Gemini Lake. I then shut down the installation node, move the SSD to the first Atom, reboot and voilà, it all works...

...but only as long as they don't push the KVM and oVirt baseline beyond Nehalem. Now with AMD, that platform is evolving rapidly, so all layers in this oVirt stack need to be aligned, which could take a while. There is a definite operational advantage to using older hardware in this space.

Since there is no guarantee that the oVirt node image and the hosted-engine image are aligned, I'd recommend disabling all mitigations during the host's boot (I only have a list of the Intel flags, sorry: not rich enough for EPYC) and seeing if that sails through. And if you have no mitigation risk issues, keep the base CPU definition as low as you can stand (your VMs' applications could miss out on some nice instruction extensions or other features if you go rock-bottom).

Most of the KVM config is generated at run time by lots of Python deep inside oVirt, so apart from working with the boot flags (or another temporary host) I see no alternative.

BTW, I also had to fiddle with net.ifnames=0 to re-enable ethX Ethernet naming, because otherwise the overlay network encodes the "new device" names into the config, which derails the hardware swap after the initial setup.

I run with a CentOS base, because most of the workloads are actually Docker/podman containers and oVirt is more of a side show for now. And while I update frequently, I disable all mitigations for lack of exposure, and so as not to slow these poor Atoms any further. I use them for 24x7 functional testing, not for crunching numbers. With 32 GB of RAM and a 1 TB SSD they are just big enough for that, at 10 W per unit with passive cooling. The corporate lab has kick-ass Xeon-SPs and Nvidia V100s, still mostly Docker, because GPUs in KVM and oVirt are tricks I still need to master. Looking forward to the integrated container/VM future RH is planning there.

Good luck!

On 15/11/2019 13:30, thomas@hoberg.net wrote:
Hey,

Ugh, I am at a loss. I added

--- 8< ---
GRUB_CMDLINE_LINUX='crashkernel=auto rd.lvm.lv=onn/ovirt-node-ng-4.3.6-0.20190926.0+1 rd.lvm.lv=onn/swap mitigations=off rhgb quiet'
--- 8< ---

to /etc/default/grub, created a new grub.cfg and rebooted:

--- 8< ---
[root@node01 ~]# cat /proc/cmdline
BOOT_IMAGE=/ovirt-node-ng-4.3.6-0.20190926.0+1/vmlinuz-3.10.0-1062.1.1.el7.x86_64 root=/dev/onn/ovirt-node-ng-4.3.6-0.20190926.0+1 ro crashkernel=auto rd.lvm.lv=onn/swap mitigations=off rhgb quiet rd.lvm.lv=onn/ovirt-node-ng-4.3.6-0.20190926.0+1 img.bootid=ovirt-node-ng-4.3.6-0.20190926.0+1
--- 8< ---

Even after clearing the cache and restarting libvirt the issue is still there:

--- 8< ---
[root@node01 ~]# cat /var/cache/libvirt/qemu/capabilities/3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml | grep ssb
<property name='ssbd' type='boolean' value='false'/>
<property name='virt-ssbd' type='boolean' value='false'/>
<property name='ssbd' type='boolean' value='false'/>
<property name='virt-ssbd' type='boolean' value='false'/>
--- 8< ---

and the flags are still set (duh):

--- 8< ---
[root@node01 ~]# grep ssbd /proc/cpuinfo | tail -n1
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip overflow_recov succor smca
--- 8< ---

Deploying the oVirt hosted engine still works up to the final point, when it stops with the usual

--- 8< ---
2019-11-15 14:43:54,758+0100 INFO (jsonrpc/6) [api.virt] FINISH getStats return={'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Down', 'exitMessage': 'the CPU is incompatible with host CPU: Host CPU does not provide required features: virt-ssbd', 'statusTime': '4294738860', 'vmId': 'd116b296-9ae7-4ff3-80b4-73dc228a7b64', 'exitReason': 1, 'exitCode': 1}]} from=::1,46514, vmId=d116b296-9ae7-4ff3-80b4-73dc228a7b64 (api:54)
--- 8< ---

I can see that during the final stages (up to this point the engine VM is up and running) there is a (super long) line in vdsm.log:

--- 8< ---
2019-11-15 13:36:10,248+0100 INFO (jsonrpc/4) [api.virt] FINISH create return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status': 'WaitForLaunch', 'maxMemSize': 65536, 'acpiEnable': 'true', 'emulatedMachine': 'pc-i440fx-rhel7.6.0', 'numOfIoThreads': '1', 'vmId': 'd116b296-9ae7-4ff3-80b4-73dc228a7b64', 'memGuaranteedSize': 1024, 'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'EPYC', 'guestDiskMapping': {}, 'arch': 'x86_64', 'smp': '4', 'guestNumaNodes': [{'nodeIndex': 0, 'cpus': '0,1,2,3', 'memory': '16384'}], u'xml': u'<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n<domain xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0"
xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm"><name>HostedEngine</name><uuid>d116b296-9ae7-4ff3-80b4-73dc228a7b64</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothreads>1</iothreads><maxMemory slots="16">67108864</maxMemory><vcpu current="4">64</vcpu><sysinfo type="smbios"><system><entry name="manufacturer">oVirt</entry><entry name="product">OS-NAME:</entry><entry name="version">OS-VERSION:</entry><entry name="serial">HOST-SERIAL:</entry><entry name="uuid">d116b296-9ae7-4ff3-80b4-73dc228a7b64</entry></system></sysinfo><clock offset="variable" adjustment="0"><timer name="rtc" tickpolicy="catchup"/><timer name="pit" tickpolicy="delay"/><timer name="hpet" present="no"/></clock><features><acpi/></features><cpu match="exact"><model>EPYC</model><feature name="ibpb" policy="require"/><feature name="virt-ssbd" policy="require"/><topology cores="4" threads="1" sockets="16"/><numa><cell id="0" cpus="0,1,2,3" memory="16777216"/></numa></cpu><cputune/><devices><input type="mouse" bus="ps2"/><channel type="unix"><target type="virtio" name="ovirt-guest-agent.0"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/d116b296-9ae7-4ff3-80b4-73dc228a7b64.ovirt-guest-agent.0"/></channel><channel type="unix"><target type="virtio" name="org.qemu.guest_agent.0"/><source mode="bind" path="/var/lib/libvirt/qemu/channels/d116b296-9ae7-4ff3-80b4-73dc228a7b64.org.qemu.guest_agent.0"/></channel><sound model="ich6"><alias name="ua-05ce597b-8e43-4360-81ac-2ca13cb4f9d5"/></sound><graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen type="network" network="vdsm-ovirtmgmt"/></graphics><controller type="scsi" model="virtio-scsi" index="0"><driver iothread="1"/><alias name="ua-30edc108-3218-43dc-ad43-129ce392930e"/></controller><video><model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/><alias name="ua-31187f25-275b-490d-922b-15712b6fabb6"/></video><console type="unix"><source 
path="/var/run/ovirt-vmconsole-console/d116b296-9ae7-4ff3-80b4-73dc228a7b64.sock" mode="bind"/><target type="serial" port="0"/><alias name="ua-7a643a8e-6871-4d63-a38e-632f03566e63"/></console><graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1"><channel name="main" mode="secure"/><channel name="inputs" mode="secure"/><channel name="cursor" mode="secure"/><channel name="playback" mode="secure"/><channel name="record" mode="secure"/><channel name="display" mode="secure"/><channel name="smartcard" mode="secure"/><channel name="usbredir" mode="secure"/><listen type="network" network="vdsm-ovirtmgmt"/></graphics><controller type="virtio-serial" index="0" ports="16"><alias name="ua-833fb61c-213a-4871-b99a-3863958ce070"/></controller><rng model="virtio"><backend model="random">/dev/urandom</backend><alias name="ua-91c1c22f-5d21-458b-b1a1-a700ea8b5e5c"/></rng><memballoon model="virtio"><stats period="5"/><alias name="ua-943f5866-0165-40bf-a4b6-658072a1d7f5"/></memballoon><controller type="usb" model="piix3-uhci" index="0"/><serial type="unix"><source path="/var/run/ovirt-vmconsole-console/d116b296-9ae7-4ff3-80b4-73dc228a7b64.sock" mode="bind"/><target port="0"/></serial><channel type="spicevmc"><target type="virtio" name="com.redhat.spice.0"/></channel><interface type="bridge"><model type="virtio"/><link state="up"/><source bridge="ovirtmgmt"/><driver queues="4" name="vhost"/><alias name="ua-03c1177f-98be-4bed-8dd0-1f1895a0a0c6"/><mac address="00:16:3e:3b:5d:da"/><mtu size="1500"/><filterref filter="vdsm-no-mac-spoofing"/><bandwidth/></interface><disk type="file" device="cdrom" snapshot="no"><driver name="qemu" type="raw" error_policy="report"/><source file="" startupPolicy="optional"><seclabel model="dac" type="none" relabel="no"/></source><target dev="hdc" bus="ide"/><readonly/><alias name="ua-acc9e0f3-ab5b-4637-876a-96242a52a470"/></disk><disk snapshot="no" type="file" device="disk"><target dev="vda" 
bus="virtio"/><source file="/rhev/data-center/00000000-0000-0000-0000-000000000000/b77c80b7-a2a5-4627-a48e-8b8a49583c5d/images/e070502c-780b-45a5-98d1-6f6db9a48967/e4066d0d-2a83-4802-8976-09f2a18baf23"><seclabel model="dac" type="none" relabel="no"/></source><driver name="qemu" iothread="1" io="threads" type="raw" error_policy="stop" cache="none"/><alias name="ua-e070502c-780b-45a5-98d1-6f6db9a48967"/><serial>e070502c-780b-45a5-98d1-6f6db9a48967</serial></disk><lease><key>e4066d0d-2a83-4802-8976-09f2a18baf23</key><lockspace>b77c80b7-a2a5-4627-a48e-8b8a49583c5d</lockspace><target offset="LEASE-OFFSET:e4066d0d-2a83-4802-8976-09f2a18baf23:b77c80b7-a2a5-4627-a48e-8b8a49583c5d" path="LEASE-PATH:e4066d0d-2a83-4802-8976-09f2a18baf23:b77c80b7-a2a5-4627-a48e-8b8a49583c5d"/></lease></devices><pm><suspend-to-disk enabled="no"/><suspend-to-mem enabled="no"/></pm><os><type arch="x86_64" machine="pc-i440fx-rhel7.6.0">hvm</type><smbios mode="sysinfo"/><bios useserial="yes"/></os><metadata><ovirt-tune:qos/><ovirt-vm:vm><ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb><ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion><ovirt-vm:custom/><ovirt-vm:device mac_address="00:16:3e:3b:5d:da"><ovirt-vm:custom/></ovirt-vm:device><ovirt-vm:device devtype="disk" name="vda"><ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID><ovirt-vm:volumeID>e4066d0d-2a83-4802-8976-09f2a18baf23</ovirt-vm:volumeID><ovirt-vm:shared>exclusive</ovirt-vm:shared><ovirt-vm:imageID>e070502c-780b-45a5-98d1-6f6db9a48967</ovirt-vm:imageID><ovirt-vm:domainID>b77c80b7-a2a5-4627-a48e-8b8a49583c5d</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt-vm:launchPaused><ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior></ovirt-vm:vm></metadata></domain>', 'smpCoresPerSocket': '4', 'kvmEnable': 'true', 'bootMenuEnable': 'false', 'devices': [], 'custom': {}, 'maxVCpus': '64', 'statusTime': '4357530330', 'vmName': 'HostedEngine', 'maxMemSlots': 
16}} from=::1,40284, vmId=d116b296-9ae7-4ff3-80b4-73dc228a7b64 (api:54)
[...]
<feature name="virt-ssbd" policy="require"/>
--- 8< ---

In there it sets "<feature name="virt-ssbd" policy="require"/>", which is added to the generated XML; and as soon as that happens, the VM breaks. Ayee.
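One way to narrow down which layer drops the flag is to ask the QEMU binary directly what CPU features it knows, independent of libvirt's capabilities cache (a sketch: the binary path below is the usual EL7 location and may differ on your node; on a machine without qemu-kvm the probe just says so):

```shell
# Probe QEMU for its known CPU models/flags; if virt-ssbd never shows up
# here, no amount of libvirt cache clearing will surface it.
qemu=/usr/libexec/qemu-kvm   # assumed EL7 path; adjust if different
if [ -x "$qemu" ]; then
  "$qemu" -cpu help | grep -i -E 'epyc|ssbd'
else
  echo "qemu-kvm not found at $qemu"
fi
```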

At least you're not alone: https://bugzilla.redhat.com/show_bug.cgi?id=1745181

Now, since that EPYC is useless: do you want to swap it for a J5005? (sorry... I couldn't stop myself again)

All that code is Python somewhere, so you can find who adds that tag and suppress it until they fix it; it just depends on how desperate you are, I guess. In any case you may want to put a comment on the bug, so perhaps it gets more attention.
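A starting point for that search could be a plain grep over the vdsm tree (a sketch: the path below is where an oVirt 4.3 node typically keeps it, an assumption — adjust to your install, and the setup packages under /usr/share are worth searching too):

```shell
# Search the (assumed) vdsm Python tree for whoever emits virt-ssbd.
root="${VDSM_ROOT:-/usr/lib/python2.7/site-packages/vdsm}"
grep -rn "virt-ssbd" "$root" 2>/dev/null || echo "no match under $root"
```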
participants (2):
- Christian Reiss
- thomas@hoberg.net