
Hello folks

I realise this probably isn't the place for this, but someone might be interested or have some knowledge.

I deployed the KVM version of HPE OneView 8.8 to oVirt 4.5 (OLVM 4.5). It came as a single QCOW2 disk image. The VM I created needed a couple of tweaks to get it to boot (Chipset: i440FX with BIOS for the IDE disk type, OS: RHEL7 x86_64). The VM boots and the graphical environment starts, but it sticks not long after (left over the weekend, no change). I booted into the rescue environment to look at the logs: all the logical volumes mount fine, so no issue there, but I can see applications repeatedly trying to start with no obvious reason. I saw a very similar outcome when I tried the previous OneView 8.7 version.

We also run a VMware estate, so the sad thing is that I've fallen back to deploying the ESXi version of OneView 8.8 - it works fine. That's a shame, as I'm trying to persuade our internal teams to start using oVirt/OLVM instead of VMware!

Any feedback gladly received.

Thanks
Angus
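A rescue-shell log check along these lines may help narrow down which application keeps restarting. The /mnt/sysimage mount point is only an assumption (adjust it to wherever your rescue environment mounts the appliance root), and the journalctl variant only finds anything if the appliance has a persistent journal configured, which RHEL 7 does not by default:

# grep -iE 'error|fail' /mnt/sysimage/var/log/messages | tail -n 50
# journalctl -D /mnt/sysimage/var/log/journal -p err --no-pager | tail -n 50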

On Wed, Apr 10, 2024 at 11:47 AM Angus Clarke <angus@ajct.uk> wrote:
Hello folks
I realise this probably isn't the place for this but someone might be interested or have some knowledge.
I deployed the KVM version of HPE OneView 8.8 to oVirt 4.5 (OLVM 4.5). It came as a single QCOW2 disk image.
Is the image download publicly available? Or does it need any form of subscription?

Gianluca

Hi Gianluca

The software is free from HPE but requires a login; I've shared a link separately.

Thanks for taking an interest.

Regards
Angus

On Wed, Apr 10, 2024 at 12:29 PM Angus Clarke <angus@ajct.uk> wrote:
Hi Gianluca
The software is free from HPE but requires a login, I've shared a link separately.
Thanks for taking an interest
Regards Angus
Apart from other considerations we are privately sharing: in my env, which is based on a Cascade Lake CPU on the host with a local storage domain on a filesystem, the appliance is able to boot and complete the initial configuration phase using your settings (Chipset i440FX with BIOS for the IDE disk type, OS: RHEL7 x86_64). In my env graphics protocol=VNC, video type=VGA.

The constraint forcing your tweaks comes from the appliance's operating system: all the virtio drivers are compiled as modules and are not included in the initramfs, so the system doesn't find the boot disk if you set it as virtio or virtio-scsi. The layout is BIOS type, with one partition for /boot and the other filesystems, / included, on LVM.

To modify the qcow2 image you can use some tools out there, or use manual steps this way:

. connect the disk to an existing RHEL 7 / CentOS 7 helper VM where you have the lvm2 package installed
In my case my VM has one disk named /dev/sda, and the HPE qcow2 disk, once added, is seen as /dev/sdb with its partitions as /dev/sdb1, ...
IMPORTANT: change the disk names below to match how the appliance disk appears in your env, otherwise you risk compromising your existing data!!!
IMPORTANT: inside the appliance disk there is a volume group named vg01. Verify there is no vg01 volume group already defined in your helper VM, otherwise you will get into trouble.

. connect to the helper VM as the root user

. the LVM structure of the added disk (PV/VG/LV) should be automatically detected
run the command "vgs" and you should see the vg01 volume group listed
run the command "lvs vg01" and you should see some logical volumes listed

. mount the root filesystem of the appliance disk on a directory in your helper VM (the /media directory in my case)
# mount /dev/vg01/lv_root /media/

. mount the /boot filesystem of the appliance disk under /media/boot
# mount /dev/sdb1 /media/boot/

. mount the /var filesystem of the appliance disk under /media/var
# mount /dev/vg01/lv_var /media/var/

. chroot into the appliance disk env
# chroot /media

. create a file with the new kernel driver modules you want to include in the new initramfs
# vi /etc/dracut.conf.d/virtio.conf
its contents have to be the one line below (similar to the already present platform.conf):
# cat /etc/dracut.conf.d/virtio.conf
add_drivers+="virtio virtio_blk virtio_scsi"

. back up the original initramfs
# cp -p /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.bak

. replace the initramfs
# dracut -fv /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img 3.10.0-1062.1.2.el7.x86_64
...
*** Creating image file done ***
*** Creating initramfs image file '/boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img' done ***
#

. verify the new contents include the virtio modules
# lsinitrd /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img | grep virtio
-rw-r--r--  1 root root  7876 Sep 30  2019 usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/block/virtio_blk.ko.xz
-rw-r--r--  1 root root 12972 Sep 30  2019 usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/char/virtio_console.ko.xz
-rw-r--r--  1 root root 14304 Sep 30  2019 usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/net/virtio_net.ko.xz
-rw-r--r--  1 root root  8188 Sep 30  2019 usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/scsi/virtio_scsi.ko.xz
drwxr-xr-x  2 root root     0 Apr 10 21:14 usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio
-rw-r--r--  1 root root  4552 Sep 30  2019 usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio/virtio.ko.xz
-rw-r--r--  1 root root  9904 Sep 30  2019 usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio/virtio_pci.ko.xz
-rw-r--r--  1 root root  8332 Sep 30  2019 usr/lib/modules/3.10.0-1062.1.2.el7.x86_64/kernel/drivers/virtio/virtio_ring.ko.xz

. exit the chroot environment
# exit

. now that you have exited the chroot env, umount the appliance disk filesystems
# umount /media/var /media/boot
# umount /media

. disconnect the disk from the helper VM

. create a Red Hat 7.x VM in your oVirt/OLVM env as a Q35 / BIOS VM, with the appliance disk configured as a virtio or virtio-scsi disk

. boot the VM; it should work, apart from the current display problem in your env

If it boots OK and works in the end, push HPE to add the virtio modules, which are pretty much the standard for disks in QEMU/KVM-based environments. The virtio network device already comes up fine because its module is loaded after boot; it is not needed in the initrd phase, only afterwards.

Gianluca
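As a possible shortcut to the helper-VM procedure above, the same change can likely be made with libguestfs (virt-customize), which edits the qcow2 directly. The image filename below is a placeholder and this variant is untested against the OneView appliance, so treat it as a sketch only; the kernel version is the one seen inside the appliance:

# virt-customize -a hpe-oneview-8.8.qcow2 \
    --write '/etc/dracut.conf.d/virtio.conf:add_drivers+="virtio virtio_blk virtio_scsi"' \
    --run-command 'dracut -f /boot/initramfs-3.10.0-1062.1.2.el7.x86_64.img 3.10.0-1062.1.2.el7.x86_64'
# virt-cat -a hpe-oneview-8.8.qcow2 /etc/dracut.conf.d/virtio.conf

The virt-cat call only confirms the dracut config file landed in the image; the rebuilt initramfs itself can still be checked with the lsinitrd command shown above.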

Hi Gianluca

Thank you for the detailed instructions - these were excellent. I wasn't aware of the "lsinitrd" command before now - thanks!

My VM still sticks at the same point when booting with the virtio-scsi configuration. Meh! I'm encouraged that the image booted OK in your environment => points to something specific to my environment.

I've raised a case with Oracle as we are using OLVM. I don't think they'll take an interest; let's see. If I get anywhere I'll report back here for the record.

Thanks again
Angus
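When a guest wedges at the same point like this, the per-domain QEMU log that libvirt keeps on the KVM host sometimes shows whether it is the guest or the emulator that is stuck. The path is the libvirt default on oVirt/OLVM hosts and the VM name below is a placeholder:

# less /var/log/libvirt/qemu/HPE-OneView.log
# top -p "$(pgrep -f 'qemu-kvm.*HPE-OneView' | head -n1)"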

Hi Angus,

We could try to do our best, even if this one is an appliance coming from HPE. It could also help if you share access to the appliance to me, as well as on the SR opened. And, please, share the SR number you created.

Thanks
Simon

Done, thanks Simon 👍

Hello again,

I uploaded the QCOW2 again and created a new VM from scratch with the expected parameters -> it boots fine.

The failure scenario comes about when the original VM was set up with Q35/UEFI and a Virtio-SCSI disk type (which completely fails to boot - expected) and then changed to i440FX with BIOS and the disk type changed to IDE. I'm not sure which change (or both) triggers the issue. This failure scenario persists if I upload a replacement QCOW2 image and attach it to the modified VM, including images from previous versions of HPE OneView. This explains why I experienced repeated failures.

Overall this is probably not very interesting, as it seems to revolve around IDE disk types - I'll feed back to HPE the Virtio-SCSI notes that Gianluca and Simon have mentioned.

Thanks a lot
Angus
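For anyone wanting to pin down which of the two changes matters, comparing the domain XML the engine generates for the fresh VM and for the modified one (while each is running on a KVM host) should show exactly what differs at the QEMU level. The VM names here are placeholders:

# virsh -r dumpxml OneView-fresh > /tmp/oneview-fresh.xml
# virsh -r dumpxml OneView-modified > /tmp/oneview-modified.xml
# diff -u /tmp/oneview-fresh.xml /tmp/oneview-modified.xml

Differences in the <os> machine type and in the <disk> bus/controller elements would be the likely suspects.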
participants (3)
- Angus Clarke
- Gianluca Cecchi
- Simon Coter