oVirt Host Crashed
by Darin Schmidt
I am running this as an all-in-one system for a test bed at home. The system crashed, which led me to reinstall the OS (CentOS 8). I imported the data stores, but I cannot find any way to import the VMs that were in the DATA store. I haven't had a chance to back up/export the VMs, and I haven't been able to find anything in the documentation on how to import them. Any suggestions or links to what I'm looking for?
I had to create a new DATA store, as the option for importing a local data store wasn't available; I assume that's because the host was down. Then I imported the old data store.
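Is registering the unregistered VMs from the imported domain the right direction? In the Admin Portal that would apparently be Storage > Domains > (the imported data domain) > VM Import > Import, or via the REST API something like the sketch below (untested; the engine host, credentials, SD_ID and VM_ID are all placeholders):
# list the VMs still sitting unregistered on the imported domain
curl -k -u admin@internal:PASSWORD \
  "https://engine.example.com/ovirt-engine/api/storagedomains/SD_ID/vms?unregistered=true"
# register one of them into a cluster
curl -k -u admin@internal:PASSWORD -X POST -H "Content-Type: application/xml" \
  -d "<action><cluster><name>Default</name></cluster></action>" \
  "https://engine.example.com/ovirt-engine/api/storagedomains/SD_ID/vms/VM_ID/register"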
Mellanox OFED with oVirt
by Vinícius Ferrão
Hello,
Has anyone had success using Mellanox OFED with oVirt? I've already learned some things:
1. I can’t use oVirt Node.
2. Mellanox OFED cannot be installed with mlnx-ofed-all since it breaks dnf. We need to rely on the upstream RDMA implementation.
3. The way to go is running: dnf install mlnx-ofed-dpdk-upstream-libs
But after the installation I ended up with broken dnf:
[root@c4140 ~]# dnf update
Updating Subscription Management repositories.
Last metadata expiration check: 0:03:54 ago on Tue 01 Sep 2020 11:52:41 PM -03.
Error:
Problem: both package mlnx-ofed-all-user-only-5.1-0.6.6.0.rhel8.2.noarch and mlnx-ofed-all-5.1-0.6.6.0.rhel8.2.noarch obsolete glusterfs-rdma
- cannot install the best update candidate for package glusterfs-rdma-6.0-37.el8.x86_64
- package ovirt-host-4.4.1-4.el8ev.x86_64 requires glusterfs-rdma, but none of the providers can be installed
- package mlnx-ofed-all-5.1-0.6.6.0.rhel8.2.noarch obsoletes glusterfs-rdma provided by glusterfs-rdma-6.0-37.el8.x86_64
- package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 3.12.2-40.2.el8, but none of the providers can be installed
- package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 6.0-15.el8, but none of the providers can be installed
- package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 6.0-20.el8, but none of the providers can be installed
- cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and glusterfs-6.0-37.el8.x86_64
- cannot install both glusterfs-6.0-15.el8.x86_64 and glusterfs-6.0-37.el8.x86_64
- cannot install both glusterfs-6.0-20.el8.x86_64 and glusterfs-6.0-37.el8.x86_64
- cannot install the best update candidate for package ovirt-host-4.4.1-4.el8ev.x86_64
- cannot install the best update candidate for package glusterfs-6.0-37.el8.x86_64
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
These are the packages installed:
[root@c4140 ~]# rpm -qa *mlnx*
mlnx-dpdk-19.11.0-1.51066.x86_64
mlnx-ofa_kernel-devel-5.1-OFED.5.1.0.6.6.1.rhel8u2.x86_64
mlnx-ethtool-5.4-1.51066.x86_64
mlnx-dpdk-devel-19.11.0-1.51066.x86_64
mlnx-ofa_kernel-5.1-OFED.5.1.0.6.6.1.rhel8u2.x86_64
mlnx-dpdk-doc-19.11.0-1.51066.noarch
mlnx-dpdk-tools-19.11.0-1.51066.x86_64
mlnx-ofed-dpdk-upstream-libs-5.1-0.6.6.0.rhel8.2.noarch
kmod-mlnx-ofa_kernel-5.1-OFED.5.1.0.6.6.1.rhel8u2.x86_64
mlnx-iproute2-5.6.0-1.51066.x86_64
And finally this is the repo that I’m using:
[root@c4140 ~]# cat /etc/yum.repos.d/mellanox_mlnx_ofed.repo
#
# Mellanox Technologies Ltd. public repository configuration file.
# For more information, refer to http://linux.mellanox.com
#
[mlnx_ofed_latest_base]
name=Mellanox Technologies rhel8.2-$basearch mlnx_ofed latest
baseurl=http://linux.mellanox.com/public/repo/mlnx_ofed/latest/rhel8.2/$b...
enabled=1
gpgkey=http://www.mellanox.com/downloads/ofed/RPM-GPG-KEY-Mellanox
gpgcheck=1
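One workaround I'm considering, assuming the conflict really only comes from the metapackages that obsolete glusterfs-rdma, is to hide them from dnf (untested):
# skip the metapackages that obsolete glusterfs-rdma, for this run only
dnf update -x 'mlnx-ofed-all*'
# or permanently, by adding this line to the [mlnx_ofed_latest_base] section:
# exclude=mlnx-ofed-all*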
So, has anyone had success with this?
How can Gluster be an HCI default, when it's hardly ever working?
by thomas@hoberg.net
I've just tried to verify what you said here.
As a baseline, I started with the 1nHCI Gluster setup. Of four VMs (two legacy, two Q35) on the single-node Gluster, one survived the import, one failed silently with an empty disk, and two failed somewhere in the middle of qemu-img trying to write the image to the Gluster storage. For each of those two it always happened at the same block number, a unique one per machine, not in random places, as if qemu-img reading and writing the very same image could not agree. That's two types of error and a 75% failure rate.
I created another domain, basically an NFS automount export from one of the HCI nodes (a 4.3 node serving as 4.4 storage), and imported the very same VMs (source all 4.3), transported to 4.4 via a re-attached export domain. Three of the four imports worked fine, with no qemu-img error writing to NFS. All VMs had full disk images and launched, which at least verified that there is nothing wrong with the exports.
But there was still one, that failed with the same qemu-img error.
I then tried to move the disks from NFS to Gluster, which internally is also done via qemu-img, and I had those fail every time.
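To take oVirt out of the equation, my next step is to reproduce the copy by hand with qemu-img on a host (the paths below are placeholders for the real image paths under /rhev/data-center/mnt/):
# copy the disk image to the Gluster mount roughly the way oVirt would
qemu-img convert -p -f qcow2 -O qcow2 \
  /rhev/data-center/mnt/NFS_EXPORT/.../disk.qcow2 \
  /rhev/data-center/mnt/glusterSD/HOST:_vmstore/.../disk.qcow2
# then check that source and destination really carry the same data
qemu-img compare \
  /rhev/data-center/mnt/NFS_EXPORT/.../disk.qcow2 \
  /rhev/data-center/mnt/glusterSD/HOST:_vmstore/.../disk.qcow2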
Gluster or HCI seems a bit like Russian roulette for migrations, and I am wondering how much better it is for normal operations.
I'm still going to try moving via a backup domain (on NFS) and moving between that and Gluster, to see if it makes any difference.
I really haven't done a lot of stress testing yet with oVirt, but this experience doesn't build confidence.
Problem installing Windows VM on 4.4.1
by fgarat@gmail.com
Hi,
I'm having problems with Windows machines after I upgraded to 4.4.1.
The installation sees no disk. Even an IDE disk doesn't get detected, and the installation won't move forward no matter what driver I use for the disk.
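One check I still want to do is confirm from the host that the disk is really attached to the guest (read-only libvirt query; the VM name is a placeholder):
# dump the running VM's device list and look at its <disk> entries
virsh -r dumpxml WindowsVM | grep -A 6 '<disk'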
Anyone else having this issue?
Regards,
Facundo
How can you avoid breaking 4.3.11 legacy VMs imported in 4.4.1 during a migration?
by thomas@hoberg.net
Testing the 4.3 to 4.4 migration... what I describe here as facts is mostly observations and conjecture, and could be wrong; it just makes the writing easier...
While 4.3 seems to maintain a default emulated machine type (pc-i440fx-rhel7.6.0), it doesn't actually allow setting it in the cluster settings: it could be built in, or inherited from the default template... Most of my VMs were created with the default on 4.3.
oVirt 4.4 presets that to pc-q35-rhel8.1.0 and that has implications:
1. Any VM imported from an export on a 4.3 farm will get upgraded to Q35, which unfortunately breaks things, e.g. network adapters getting renamed, the first issue I stumbled on with some Debian machines.
2. If you try to compensate by lowering the cluster default from Q35 to pc-i440fx, the hosted-engine will fail, because it was either built as or shipped as Q35 and can no longer find critical devices: it evidently doesn't use the VM configuration data it had at the last shutdown, but seems to regenerate it according to some obscure logic, which fails here.
I've tried to create a bit of backward compatibility by making another template based on pc-i440fx, but at import time I cannot switch the template.
If I try to downgrade the cluster, the hosted-engine will fail to start, and I can't change the hosted-engine's template to something Q35-based.
Currently this leaves me unable to separate the move of VMs from 4.3 to 4.4 from the upgrade of their virtual hardware, which is a different mess for every OS in the mix of VMs.
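One thing I haven't tried yet is pinning the emulated machine per VM instead of lowering the cluster default; the REST API exposes a custom_emulated_machine field. A sketch only, with the engine host, credentials and VM_ID as placeholders:
# force a single imported VM back to i440fx without touching the cluster
curl -k -u admin@internal:PASSWORD -X PUT -H "Content-Type: application/xml" \
  -d "<vm><custom_emulated_machine>pc-i440fx-rhel7.6.0</custom_emulated_machine></vm>" \
  "https://engine.example.com/ovirt-engine/api/vms/VM_ID"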
Recommendations, tips anyone?
P.S. A hypervisor that reconstructs the virtual hardware from anywhere but storage at every launch is difficult to trust, IMHO.
Upgrade Doc typo.
by carl langlois
Hi,
Not sure if this is relevant, but in the oVirt doc for the 4.2 to 4.3 upgrade, the repo RPM is specified as the 4.4 RPM.
Thanks
Carl