
Hi,

this seems similar to bug https://bugzilla.redhat.com/show_bug.cgi?id=1924972, which was triggered by a change in seabios. Sadly, the final comments on the bug are marked private. At the moment it seems the seabios change will be reverted in EL8 (on the hypervisor), but it is not clear in which version the fix will land.

I think somebody once told me it should be possible to specify multiple boot disks somehow -- I guess through the API? Could you give that a test? But don't take my word for it. At worst, I think you could come up with a hook that fixes the domain XML by adding a bootindex to all disks.

Hope this helps,

Tomas

On Fri, Jan 07, 2022 at 01:44:08PM +0000, Strahil Nikolov via Users wrote:
Hi Vojta
My LVM version on the hypervisor is:

udisks2-lvm2-2.9.0-7.el8.x86_64
llvm-compat-libs-12.0.1-4.module_el8.6.0+1041+0c503ac4.x86_64
lvm2-libs-2.03.14-2.el8.x86_64
lvm2-2.03.14-2.el8.x86_64
libblockdev-lvm-2.24-8.el8.x86_64
LVM on the VM:

[root@nextcloud ~]# rpm -qa | grep lvm
lvm2-libs-2.03.12-10.el8.x86_64
lvm2-2.03.12-10.el8.x86_64
VM disk layout is:
[root@nextcloud ~]# lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                          8:0    0    5G  0 disk
├─sda1                       8:1    0  500M  0 part
│ └─md127                    9:127  0    2G  0 raid0 /boot
└─sda2                       8:2    0  4,5G  0 part
  └─nextcloud--system-root 253:0    0   10G  0 lvm   /
sdb                          8:16   0    5G  0 disk
├─sdb1                       8:17   0  500M  0 part
│ └─md127                    9:127  0    2G  0 raid0 /boot
└─sdb2                       8:18   0  4,5G  0 part
  └─nextcloud--system-root 253:0    0   10G  0 lvm   /
sdc                          8:32   0    5G  0 disk
├─sdc1                       8:33   0  500M  0 part
│ └─md127                    9:127  0    2G  0 raid0 /boot
└─sdc2                       8:34   0  4,5G  0 part
  └─nextcloud--system-root 253:0    0   10G  0 lvm   /
sdd                          8:48   0    5G  0 disk
├─sdd1                       8:49   0  500M  0 part
│ └─md127                    9:127  0    2G  0 raid0 /boot
└─sdd2                       8:50   0  4,5G  0 part
  └─nextcloud--system-root 253:0    0   10G  0 lvm   /
sde                          8:64   0    1G  0 disk
└─nextcloud--db-db         253:2    0    4G  0 lvm   /var/lib/mysql
sdf                          8:80   0    1G  0 disk
└─nextcloud--db-db         253:2    0    4G  0 lvm   /var/lib/mysql
sdg                          8:96   0    1G  0 disk
└─nextcloud--db-db         253:2    0    4G  0 lvm   /var/lib/mysql
sdh                          8:112  0    1G  0 disk
└─nextcloud--db-db         253:2    0    4G  0 lvm   /var/lib/mysql
sdi                          8:128  0  300G  0 disk
└─data-slow                253:1    0  600G  0 lvm   /var/www/html/nextcloud/data
sdj                          8:144  0  300G  0 disk
└─data-slow                253:1    0  600G  0 lvm   /var/www/html/nextcloud/data
sr0
I have managed to start my VM as follows:

1. Start the VM
2. Dump the VM XML
3. Destroy the VM
4. Add <boot order='X'/> to every disk entry in the dump file (a scripted version is sketched below)
5. virsh define VM.xml
6. virsh start VM
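These steps could be automated with a VDSM hook that rewrites the domain XML before every start. A minimal sketch, assuming VDSM hands hooks the XML path in the _hook_domxml environment variable and that xmlstarlet is installed on the host -- both assumptions worth verifying against your VDSM version, and the file name is made up:

    #!/bin/bash
    # Hypothetical hook: /usr/libexec/vdsm/hooks/before_vm_start/50_boot_all_disks
    # Sketch only. Note: libvirt rejects mixing <os><boot> with per-device
    # boot order, so any existing boot elements would need handling first.
    set -eu
    domxml="${_hook_domxml:?no domain XML passed by VDSM}"

    # count real disks (device="disk" skips cdrom entries)
    n=$(xmlstarlet sel -t -v 'count(/domain/devices/disk[@device="disk"])' "$domxml")

    for i in $(seq 1 "$n"); do
        # give the i-th disk a <boot order="i"/> element unless it already has one
        xmlstarlet ed -L \
            -s "/domain/devices/disk[@device='disk'][$i][not(boot)]" -t elem -n boot -v '' \
            -i "/domain/devices/disk[@device='disk'][$i]/boot[not(@order)]" -t attr -n order -v "$i" \
            "$domxml"
    done

The same xmlstarlet invocation also works once against a dumped VM.xml, if you would rather keep doing it by hand.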
Sadly I can't set more than 1 disk as bootable in oVirt.
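The v4 REST API does expose a bootable flag on each disk attachment, so it may be worth testing whether the engine accepts it on more than one disk even though the UI refuses. A hypothetical call (engine host, VM and disk-attachment IDs, and credentials are all placeholders):

    curl -k -u 'admin@internal:password' -X PUT \
         -H 'Content-Type: application/xml' \
         -d '<disk_attachment><bootable>true</bootable></disk_attachment>' \
         'https://engine.example.com/ovirt-engine/api/vms/VM_ID/diskattachments/DISK_ID'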
Best Regards,
Strahil Nikolov

On Friday, January 7, 2022, 13:38:59 GMT+2, Vojtech Juranek <vjuranek@redhat.com> wrote:
Hi,
Hi All,

I recently migrated from 4.3.10 to 4.4.9 and it seems that booting from software raid0 (I have multiple gluster volumes) is not possible with Cluster compatibility 4.6. I've tested creating a fresh VM and it also suffers from the problem. Changing various options (virtio-scsi to virtio, chipset, VM type) did not help. Booting from rescue media shows that the data is still there, but grub always drops to rescue. Any hints are welcome.

Host: CentOS Stream 8 with qemu-6.0.0
oVirt: 4.4.9 (latest)
VM OS: RHEL7.9/RHEL8.5

Best Regards,
Strahil Nikolov
What is the lvm version? There was recently an issue [1] with a specific lvm version (lvm2-2.03.14-1.el8.x86_64) which could cause boot failures.
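You can check quickly on both the hypervisor and inside the guest, e.g.:

    # run on the host and in the VM
    rpm -q lvm2
    # the build referenced in [1] is lvm2-2.03.14-1.el8.x86_64;
    # "dnf downgrade lvm2" would be one way to move off it if it shows up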
Vojta
[1] https://bugzilla.redhat.com/show_bug.cgi?id=2026370
-- 
Tomáš Golembiovský <tgolembi@redhat.com>