Hi
On Sat, 2 Nov 2019 at 08:51, Strahil Nikolov
<hunter86_bg(a)yahoo.com> wrote:
> Have you tried with another ISO?
This is quite weird. It's still better than my initial testing, but
here it goes. I've slimmed my attempts down to Q35 and UEFI (both with
and without SecureBoot):
- Server 2019 ISO boots into the installer with or without SecureBoot
enabled (I clearly remember it not working in any combination I tried
earlier, but now it does, anyhow). The OS installs and boots with
SecureBoot enabled
- Server 2016 ISO fails to boot right at the Windows bootloader*.
However, I don't care that much about Server 2016 anymore at this
point, and it could well be that the ISO image is an older revision
(MS does release updated images from time to time)
- Debian 10 boots off the CD, but it doesn't seem like NVRAM changes
are saved in oVirt / RHV guests yet. So at boot the OS doesn't start,
and you have to boot once from file (EFI disk -> EFI -> debian ->
shimx64.efi).
After that boot you have to copy everything (or only shimx64.efi?) to
/boot/efi/EFI/BOOT as BOOTX64.EFI, like Windows does. Debian 10 ships
a Microsoft-signed shim loader, so SecureBoot actually works.
- Ubuntu 18.04 boots off the disc, installs and boots, since its
installer puts a copy of shimx64.efi into /boot/efi/EFI/BOOT, which is
where OVMF looks for a loader
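For the Debian case, the manual fix can be sketched roughly as below. This is a hedged sketch, assuming the ESP is mounted at /boot/efi (the Debian default); EFI/BOOT/BOOTX64.EFI is the removable-media fallback path that firmware tries when it has no persistent NVRAM boot entries:

```shell
#!/bin/sh
# Sketch: install Debian's Microsoft-signed shim into the fallback
# path so OVMF finds it without a persistent NVRAM boot entry.
# ESP defaults to /boot/efi (Debian's default mount point); it can be
# overridden for testing.
ESP="${ESP:-/boot/efi}"
if [ -f "$ESP/EFI/debian/shimx64.efi" ]; then
    mkdir -p "$ESP/EFI/BOOT"
    # BOOTX64.EFI is the fallback loader name on x86_64.
    cp "$ESP/EFI/debian/shimx64.efi" "$ESP/EFI/BOOT/BOOTX64.EFI"
    # shim chain-loads grubx64.efi from its own directory, so copy
    # that alongside it.
    cp "$ESP/EFI/debian/grubx64.efi" "$ESP/EFI/BOOT/grubx64.efi"
    echo "installed fallback loader into $ESP/EFI/BOOT"
else
    echo "no shim found under $ESP/EFI/debian; nothing to do"
fi
```

SecureBoot keeps working with this because the copied shim is still the Microsoft-signed binary; only its location changes.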
Am I correct in guessing, based on the "VM devices" list per VM, that
oVirt/RHV doesn't yet provide a way to give guests a persistent NVRAM
image?
So for the time being, on Linux systems, we're actually stuck with
having an EFI/BOOT/BOOTX64.EFI present? Is this the issue you wrote
about?
(I've encountered this very same issue on plain KVM and on Proxmox; in
both cases a small disk image per VM is required to make the NVRAM
content persistent across VM reboots)
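On plain KVM, that per-VM disk image is typically a writable copy of the OVMF variable store, attached as a second pflash drive next to the read-only firmware code. A rough sketch, assuming the common edk2-ovmf firmware paths and a made-up VM name "myvm" (adjust both for your distro):

```shell
# One-time: give the VM its own writable copy of the variable store
# (paths are common edk2-ovmf defaults; yours may differ).
cp /usr/share/OVMF/OVMF_VARS.fd /var/lib/libvirt/qemu/nvram/myvm_VARS.fd

# At launch: code is read-only and shared, vars are per-VM and writable,
# so EFI boot entries survive reboots.
qemu-system-x86_64 \
  -machine q35 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/var/lib/libvirt/qemu/nvram/myvm_VARS.fd
  # ... plus the usual disk, network and display options
```

With libvirt the same split is expressed via the <loader> and <nvram> elements in the domain XML, and libvirt creates the per-VM VARS copy itself.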
Regards
Mathieu
* See screenshot uploaded here:
https://imgur.com/a/ZsnbCOM