I have a somewhat similar issue because I use J5005-based Atom boxes for oVirt in the
home lab, but these fail in strange ways during installation and are just hair-tearingly
slow while installing.
So I move to a Kaby Lake desktop for the installation and then need to downgrade the
cluster CPU type all the way to Nehalem (no IBRS, SSBD, MDS, etc.) to enable live
migration to Gemini Lake. I then shut down the installation node, move the SSD to the
first Atom, reboot, and voilà, it all works...
...but only as long as they don't push the KVM and oVirt baseline up beyond Nehalem.
Now with AMD, that platform is rapidly evolving so all layers in this oVirt stack need to
be aligned, which could take a while. There is a definite operational advantage to using
older hardware in this space.
Since there is no guarantee that the oVirt node image and the hosted-engine image are
aligned, I'd recommend disabling all mitigations during the host's boot (I only have
a list of the Intel flags, sorry: not rich enough for EPYC) and seeing if that sails
through.
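Disabling them at boot means editing the kernel command line; a sketch for an EL-family host with GRUB2. The `mitigations=off` shorthand needs a reasonably recent kernel (upstream 5.2+, backported to EL kernels); the individual Intel flags listed in the comment are what I mean by "the Intel flags" above, and the "..." stands for whatever arguments you already have:

```shell
# /etc/default/grub -- append to the existing kernel command line ("..." below
# stands for your current arguments), then rebuild grub.cfg and reboot.
GRUB_CMDLINE_LINUX="... mitigations=off"

# On kernels without the mitigations= shorthand, the per-vulnerability
# Intel flags look roughly like:
#   noibrs nopti nospectre_v1 nospectre_v2 spec_store_bypass_disable=off \
#   l1tf=off mds=off tsx_async_abort=off

# Rebuild the grub config (BIOS path shown; adjust for EFI):
grub2-mkconfig -o /boot/grub2/grub.cfg

# Verify after reboot -- entries should read "Vulnerable" instead of "Mitigation:":
grep . /sys/devices/system/cpu/vulnerabilities/*
```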
And if the mitigations pose no risk in your environment, keep the base CPU definition
as low as you can stand (your VMs' applications could miss out on some nice instruction
extensions or other features if you go rock-bottom).
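To gauge what going rock-bottom costs, one rough check is to diff the host's `/proc/cpuinfo` flags against the capped guest model's feature set. The flag lists in this sketch are illustrative samples, not qemu's authoritative model definitions (check `qemu-kvm -cpu help` on your host for the real ones):

```python
# Sketch: estimate which host CPU features a guest loses when the cluster
# CPU type is capped at an older model such as Nehalem.

# Illustrative subset of a Gemini-Lake host's /proc/cpuinfo flags:
HOST_FLAGS = {
    "sse4_1", "sse4_2", "popcnt", "aes", "rdrand", "rdseed",
    "movbe", "sha_ni", "clflushopt",
}

# Illustrative subset of what a Nehalem guest model exposes:
NEHALEM_FLAGS = {"sse4_1", "sse4_2", "popcnt"}

def lost_features(host_flags, model_flags):
    """Features the host offers but the capped guest model hides."""
    return sorted(host_flags - model_flags)

if __name__ == "__main__":
    print(lost_features(HOST_FLAGS, NEHALEM_FLAGS))
```

On a real host you would feed in the `flags:` line from `/proc/cpuinfo` instead of the sample set; the point is just to see concretely which extensions (AES-NI, SHA, etc.) your workloads would give up.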
Most of the KVM config is generated at run-time with lots of Python stuff deep inside
oVirt, so really apart from working with the boot flags (or another temporary host) I see
no alternative.
BTW, I also had to fiddle with net.ifnames=0 to re-enable the classic ethX Ethernet
naming, because otherwise the overlay network encodes the "new" predictable device
names into the config, which derails the hardware swap after the initial setup.
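That is another kernel command-line tweak on the host; a sketch for GRUB2 (the `biosdevname=0` companion flag is my addition, it matters on hardware shipping the biosdevname udev helper, and the "..." stands for your existing arguments):

```shell
# /etc/default/grub -- disable predictable interface names so the config
# keeps referring to eth0/eth1 after the SSD moves to different hardware.
GRUB_CMDLINE_LINUX="... net.ifnames=0 biosdevname=0"

# Rebuild the grub config (BIOS path shown; adjust for EFI):
grub2-mkconfig -o /boot/grub2/grub.cfg
```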
I run with a CentOS base because most of the workloads are actually Docker/podman
containers and oVirt is more of a sideshow for now. And while I update frequently, I
disable all mitigations, both for lack of exposure and to not slow these poor Atoms
down any further. I use them for 24x7 functional testing, not for crunching numbers.
With 32 GB of RAM and a 1 TB SSD they are just big enough for that, at 10 watts per
unit with passive cooling.
The corporate lab has kick-ass Xeon-SP machines and Nvidia V100s, but those also run
mostly Docker, because GPUs in KVM and oVirt are tricks I still need to master. Looking
forward to the integrated container/VM future RH is planning there.
Good luck!