On Fri, May 20, 2022 at 02:07:02AM -0000, msjang(a)kisti.re.kr wrote:
Hi, all.
I currently operate several VMs on cockpit-machines, and I want to move those VMs to
oVirt.
When I tried oVirt 4.5 last month, the installation failed, so I tried oVirt 4.3.10, which
worked. I moved several qcow2 disks onto the hyper-converged Gluster storage and attached
them to VMs; some disks boot and operate well, but some cannot boot due to XFS issues -
https://access.redhat.com/solutions/4582401.
So I installed the newest ovirt-engine on CentOS 8 Stream and installed oVirt Node via
ovirt-node-ng-installer-4.5.0-2022051313.el8.iso. I applied the vdsm gluster CLI patch
manually on the oVirt nodes, since it is not included in the oVirt Node ISO:
https://github.com/oVirt/vdsm/commit/2da10debcea0b9f2e235dc18c2d567e6aa42....
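Roughly, the manual application on each node looked like the sketch below (the install
path, the patch strip level, and the service restarts are assumptions, not a verified
procedure; <commit> is a placeholder for the full hash from the truncated link above):

    # fetch the commit from GitHub as a patch file
    curl -LO https://github.com/oVirt/vdsm/commit/<commit>.patch
    # apply it onto the installed vdsm tree (strips a/lib/ from the repo paths)
    patch -p2 -d /usr/lib/python3.6/site-packages < <commit>.patch
    # restart vdsm so the change takes effect
    systemctl restart supervdsmd vdsmd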
Then, I encountered the following boot error when I launched several VMs that worked on
cockpit-machines and oVirt 4.3.10.
BdsDxe: failed to load Boot0001 "UEFI Misc Device" from PciRoot(0x0)/Pci(0x2,0x3)/Pci(0x0,0x0): Not Found
BdsDxe: No bootable option or device was found
BdsDxe: Press any key to enter the Boot Manager Menu.
If your VMs are BIOS, you may want to change the chipset/firmware configuration of your
cluster (or at the VM level) to q35+bios, as it is q35+uefi right now.
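You can change the BIOS type in the Admin Portal (Edit Cluster, or Edit VM -> System) or
through the REST API. A minimal sketch with curl, where the engine URL, credentials and
VM id are placeholders (q35_sea_bios should be the legacy-BIOS value, but please check it
against your API version), looks like:

    # switch one (powered-off) VM to Q35 chipset with legacy SeaBIOS firmware
    curl -k -u 'admin@internal:password' \
      -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
      -X PUT \
      -d '<vm><bios><type>q35_sea_bios</type></bios></vm>' \
      'https://engine.example.com/ovirt-engine/api/vms/<vm-id>'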
Tomas
Is there any patch that I should apply?
Best Regards
Minseok Jang.
----
PS.
FYI, I am leaving my current status here.
I am currently running 4 machines:
engine.ovirt1.int - VM
n1~n3.ovirt1.int - HP DL380 Gen9, with hyper-converged Gluster
engine.ovirt1.int # dnf info ovirt-engine.noarch
Installed Packages
Name : ovirt-engine
Version : 4.5.0.8
n1.ovirt1.int # dnf info ovirt-release-host-node
Installed Packages
Name : ovirt-release-host-node
Version : 4.5.0.2
n1.ovirt1.int # lscpu
Architecture: x86_64
CPU(s): 24
On-line CPU(s) list: 0-23
Model name: Intel(R) Xeon(R) CPU E5-2643 v4 @ 3.40GHz
n1.ovirt1.int # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.6G 0 disk
...
└─sda3 8:3 0 222G 0 part
├─onn-pool00_tmeta 253:0 0 1G 0 lvm
│ └─onn-pool00-tpool 253:2 0 173G 0 lvm
...
├─onn-pool00_tdata 253:1 0 173G 0 lvm
│ └─onn-pool00-tpool 253:2 0 173G 0 lvm
...
└─onn-swap 253:4 0 4G 0 lvm [SWAP]
sdb 8:16 0 3.5T 0 disk
├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tmeta 253:5 0 15.9G 0 lvm
│ └─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:7 0 3.5T 0 lvm
│ ├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb 253:8 0 3.5T 1 lvm
│ └─gluster_vg_sdb-gluster_lv_vmstore1 253:10 0 3.2T 0 lvm /gluster_bricks/vmstore1
└─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb_tdata 253:6 0 3.5T 0 lvm
└─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb-tpool 253:7 0 3.5T 0 lvm
├─gluster_vg_sdb-gluster_thinpool_gluster_vg_sdb 253:8 0 3.5T 1 lvm
└─gluster_vg_sdb-gluster_lv_vmstore1 253:10 0 3.2T 0 lvm /gluster_bricks/vmstore1
# gluster volume info vmstore1
Volume Name: vmstore1
Type: Replicate
Status: Started
Number of Bricks: 1 x 3 = 3
Bricks:
Brick1: n1.ovirt1.int:/gluster_bricks/vmstore1/vmstore1
Brick2: n2.ovirt1.int:/gluster_bricks/vmstore1/vmstore1
Brick3: n3.ovirt1.int:/gluster_bricks/vmstore1/vmstore1
...
--
Tomáš Golembiovský <tgolembi(a)redhat.com>