I could not find whether that CPU supports SSBD or not.
Log into one of your nodes via console, run "cat /proc/cpuinfo", and check
the "flags" section to see if ssbd is listed. If not, look at your cluster
config under the "General" section and see what it has for "Cluster CPU
Type"; make sure it hasn't chosen a CPU type which it thinks has SSBD
available. I have a Xeon X5670, which does support SSBD, and there is a
specific CPU type selected named "Intel Westmere IBRS SSBD Family".
Hope this helps.
________________________________________
From: Christian Reiss <email(a)christian-reiss.de>
Sent: Wednesday, December 11, 2019 7:55 AM
To: users(a)ovirt.org
Subject: [ovirt-users] HCL: 4.3.7: Hosted engine fails
Hey all,
Using a freshly created, homogeneous ovirt-node-ng-4.3.7-0.20191121.0
cluster (installed via the node installer), I am unable to deploy the
hosted engine. Everything else worked.
In vdsm.log there is this line, logged just after the attempt to start
the engine:
libvirtError: the CPU is incompatible with host CPU: Host CPU does not
provide required features: virt-ssbd
I am using AMD EPYC 7282 16-Core Processors.
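For comparison, libvirt can also report which CPU features it believes it
can hand to guests (a hedged sketch; virsh domcapabilities needs a
reasonably recent libvirt, and the grep is just illustrative):

    # Guest-side view: features libvirt considers usable for domains
    virsh -r domcapabilities | grep -i -B2 -A2 ssbd

On an EPYC host, virt-ssbd would need to show up here for an
SSBD-flavoured cluster CPU type to be usable.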
I have attached
- vdsm.log (during and failing the start)
- messages (for bootup / libvirt messages)
- dmesg (grub / boot config)
- deploy.log (browser output during deployment)
- virt-capabilities (virsh -r capabilities)
I can't think of (or don't know of) any other log files of interest here,
but I am more than happy to oblige.
nodectl check tells me:
Status: OK
Bootloader ... OK
  Layer boot entries ... OK
  Valid boot entries ... OK
Mount points ... OK
  Separate /var ... OK
  Discard is used ... OK
Basic storage ... OK
  Initialized VG ... OK
  Initialized Thin Pool ... OK
  Initialized LVs ... OK
Thin storage ... OK
  Checking available space in thinpool ... OK
  Checking thinpool auto-extend ... OK
vdsmd ... OK
nodectl info gives:
layers:
  ovirt-node-ng-4.3.7-0.20191121.0:
    ovirt-node-ng-4.3.7-0.20191121.0+1
bootloader:
  default: ovirt-node-ng-4.3.7-0.20191121.0 (3.10.0-1062.4.3.el7.x86_64)
  entries:
    ovirt-node-ng-4.3.7-0.20191121.0 (3.10.0-1062.4.3.el7.x86_64):
      index: 0
      title: ovirt-node-ng-4.3.7-0.20191121.0 (3.10.0-1062.4.3.el7.x86_64)
      kernel: /boot/ovirt-node-ng-4.3.7-0.20191121.0+1/vmlinuz-3.10.0-1062.4.3.el7.x86_64
      args: "ro crashkernel=auto rd.lvm.lv=onn_node01/swap rd.lvm.lv=onn_node01/ovirt-node-ng-4.3.7-0.20191121.0+1 rhgb quiet LANG=en_GB.UTF-8 img.bootid=ovirt-node-ng-4.3.7-0.20191121.0+1"
      initrd: /boot/ovirt-node-ng-4.3.7-0.20191121.0+1/initramfs-3.10.0-1062.4.3.el7.x86_64.img
      root: /dev/onn_node01/ovirt-node-ng-4.3.7-0.20191121.0+1
current_layer: ovirt-node-ng-4.3.7-0.20191121.0+1
The odd thing is that the hosted engine VM does get started during the
initial configuration and works. Only when the Ansible stage is done and
the VM is moved over to HA storage do the CPU quirks start.
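My working assumption (not verified against the docs) is that the
bootstrap VM runs from a locally generated definition, while the final VM
is started by the HA agent from its own vm.conf, so the CPU definition
can differ between the two. The runtime copy can be inspected like this
(path and key name as commonly seen on 4.3 nodes; treat both as
assumptions):

    # CPU type the HA agent will start the engine VM with
    grep -i cputype /var/run/ovirt-hosted-engine-ha/vm.conf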
So far I have learned that ssbd is a mitigation feature, but the required
flag is not on my CPU. Well, ssbd is; virt-ssbd is not.
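The difference is visible from QEMU itself (a sketch; the qemu-kvm path
is the EL7 default, and virt-ssbd/amd-ssbd only exist in newer QEMU
builds):

    # CPUID feature names QEMU knows; virt-ssbd is a separate,
    # virtualisation-only bit, distinct from the host's plain ssbd
    /usr/libexec/qemu-kvm -cpu help | grep -i ssbd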
I am *starting* with oVirt. I would really, really welcome it if
recommendations would include clues on how to make it happen.
I do RTFM, but I was unable to find anything (or any solution) anywhere.
Not after 80 hours of working on this.
Thank you all.
-Chris.
--
Christian Reiss - email(a)christian-reiss.de      /"\   ASCII Ribbon
                  support(a)alpha-labs.net        \ /     Campaign
                                                  X    against HTML
WEB alpha-labs.net                               / \    in eMails

GPG Retrieval https://gpg.christian-reiss.de
GPG ID ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5

"It's better to reign in hell than to serve in heaven.",
    John Milton, Paradise Lost.