After failed upgrade from 4.5.1 to 4.5.3, upgrades do not show up anymore
by Gianluca Amato
Hi all,
I recently tried to upgrade an oVirt node from 4.5.1 to 4.5.3. The upgrade failed (I have no idea why... how can I access the installation logs?). The node still works fine on the old 4.5.1 release, but the oVirt web console now says it is up to date and will not let me retry the upgrade. However, I am still on version 4.5.1, as shown by the output of "nodectl info":
----
bootloader:
  default: ovirt-node-ng-4.5.1-0.20220623.0 (4.18.0-394.el8.x86_64)
  entries:
    ovirt-node-ng-4.5.1-0.20220623.0 (4.18.0-394.el8.x86_64):
      index: 0
      kernel: /boot//ovirt-node-ng-4.5.1-0.20220623.0+1/vmlinuz-4.18.0-394.el8.x86_64
      args: crashkernel=auto resume=/dev/mapper/onn_ovirt--clai1-swap rd.lvm.lv=onn_ovirt-clai1/ovirt-node-ng-4.5.1-0.20220623.0+1 rd.lvm.lv=onn_ovirt-clai1/swap rhgb quiet boot=UUID=9d44cf2a-38bb-477d-b542-4bfc30463d1f rootflags=discard img.bootid=ovirt-node-ng-4.5.1-0.20220623.0+1 intel_iommu=on modprobe.blacklist=nouveau transparent_hugepage=never hugepagesz=1G hugepages=256 default_hugepagesz=1G
      root: /dev/onn_ovirt-clai1/ovirt-node-ng-4.5.1-0.20220623.0+1
      initrd: /boot//ovirt-node-ng-4.5.1-0.20220623.0+1/initramfs-4.18.0-394.el8.x86_64.img
      title: ovirt-node-ng-4.5.1-0.20220623.0 (4.18.0-394.el8.x86_64)
      blsid: ovirt-node-ng-4.5.1-0.20220623.0+1-4.18.0-394.el8.x86_64
layers:
  ovirt-node-ng-4.5.1-0.20220623.0:
    ovirt-node-ng-4.5.1-0.20220623.0+1
current_layer: ovirt-node-ng-4.5.1-0.20220623.0+1
----
Comparing the situation with other hosts in the same data center, it seems the problem is that the package ovirt-node-ng-image-update-placeholder is no longer installed, so "dnf upgrade" has nothing left to do. My idea is to manually download and install the 4.5.1 version of ovirt-node-ng-image-update-placeholder and then attempt the upgrade again, roughly as sketched below.
Is this the correct way to proceed?
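For reference, this is roughly what I am planning to run (an untested sketch; the log file locations are my assumption based on where imgbased and dnf normally write):
~~~
# check whether the placeholder package is still installed
rpm -q ovirt-node-ng-image-update-placeholder

# upgrade logs I have found so far (locations assumed)
less /var/log/imgbased.log
less /var/log/dnf.log

# reinstall the placeholder so the engine offers the 4.5.3 upgrade again
dnf install ovirt-node-ng-image-update-placeholder
~~~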
Thanks for any help.
--gianluca amato
el9 official use?
by Nathanaël Blanchet
Hello,
Until 4.5.4, the el9 ovirt-node was for testing only. According to the 4.5.4 release notes it now seems to be officially supported, but there is no information about el9 support for the engine.
What about it?
Max network performance on w2019 guest
by Gianluca Cecchi
One customer sees 2.5 Gbit/s on 10 Gbit/s adapters for a Windows Server 2019 VM with virtio, measured with iperf3.
Oracle Linux 8 VMs on the same infrastructure instead reach 9 Gbit/s.
What is the expected maximum on Windows with virtio, based on your experience?
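For reference, roughly the kind of iperf3 runs involved (the exact options are my assumption, the customer's actual invocation may differ):
~~~
# single TCP stream
iperf3 -c <iperf-server> -t 30

# several parallel streams, which usually shows better what virtio-net can sustain on Windows
iperf3 -c <iperf-server> -t 30 -P 4
~~~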
Thanks
Gianluca
State sync
by KSNull Zero
Hello!
Is there any way to "sync" the current host/datastore/VM state after the Engine is restored from a backup?
For example, suppose we restore a backup that is 3 hours old, and in those 3 hours we made some changes to VMs (configuration, power state, snapshots) or to hosts (entering maintenance mode, network changes, and so on).
After restoring the backup we do not see those changes, because all of that information lives in the engine database.
So the question: is there any way to get the restored engine "synced" with the actual current infrastructure/VM state?
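For context, the backup/restore I mean is the standard engine-backup flow, roughly like this (options from memory, please check the documentation for your version):
~~~
# on the engine, taking the backup
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log

# later, restoring it on a freshly deployed engine
engine-backup --mode=restore --file=engine-backup.tar.gz --log=engine-restore.log \
    --provision-all-databases --restore-permissions
~~~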
Thank you.
oVirt Update Errors
by Matthew J Black
Hi Guys,
I'm attempting to do a Cluster update via the oVirt GUI and I'm getting the following errors (taken from the logs), which I've confirmed by running a straight `dnf update`:
Problem 1: package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch conflicts with ansible-core >= 2.13 provided by ansible-core-2.13.3-1.el8.x86_64
- cannot install the best update candidate for package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch
- cannot install the best update candidate for package ansible-core-2.12.7-1.el8.x86_64
Problem 2: problem with installed package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch
- package ovirt-hosted-engine-setup-2.6.6-1.el8.noarch conflicts with ansible-core >= 2.13 provided by ansible-core-2.13.3-1.el8.x86_64
- package ovirt-ansible-collection-3.0.0-1.el8.noarch requires ansible-core >= 2.13.0, but none of the providers can be installed
- cannot install the best update candidate for package ovirt-ansible-collection-2.3.0-1.el8.noarch
Is it OK to do a `dnf update --nobest` or a `dnf update --allowerasing` on each host, or is there some other solution that I'm missing?
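For reference, the workarounds I was considering, plus how I've been looking at what pulls in which ansible-core (just a sketch):
~~~
# see which packages require ansible-core
dnf repoquery --whatrequires ansible-core

# the two workarounds I'm unsure about
dnf update --nobest
dnf update --allowerasing
~~~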
Cheers
Dulux-Oz
Nvidia A10 vGPU support on oVirt 4.5.2
by Don Dupuis
Hello
I can run an Nvidia GRID T4 on oVirt 4.5.2 with no issue, but I have a new GRID A10 that doesn't seem to work in oVirt 4.5.2. This new card seems to use SR-IOV instead of plain mediated devices: I only get the mdev_supported_types directory structure after I run the /usr/lib/nvidia/sriov-manage command. Has anyone got this card working on oVirt, or do the developers working on oVirt/RHV know about this?
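For reference, this is roughly what I'm doing on the host (the PCI addresses below are just examples from my setup):
~~~
# enable the virtual functions on the A10 (physical function address is an example)
/usr/lib/nvidia/sriov-manage -e 0000:3b:00.0

# the mdev types then only appear under the virtual functions, not the physical function
ls /sys/bus/pci/devices/0000:3b:00.4/mdev_supported_types
~~~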
Thanks
Don
oVirt/Ceph iSCSI Issues
by Matthew J Black
Hi All,
I've got some issues with connecting my oVirt Cluster to my Ceph Cluster via iSCSI. There are two issues, and I don't know if one is causing the other, if they are related at all, or if they are two separate, unrelated issues. Let me explain.
The Situation
-------------
- I have a working three node Ceph Cluster (Ceph Quincy on Rocky Linux 8.6)
- The Ceph Cluster has four Storage Pools of between 4 and 8 TB each
- The Ceph Cluster has three iSCSI Gateways
- There is a single iSCSI Target on the Ceph Cluster
- The iSCSI Target has all three iSCSI Gateways attached
- The iSCSI Target has all four Storage Pools attached
- The four Storage Pools have been assigned LUNs 0-3
- I have set up (Discovery) CHAP Authorisation on the iSCSI Target
- I have a working three node self-hosted oVirt Cluster (oVirt v4.5.3 on Rocky Linux 8.6)
- The oVirt Cluster has (in addition to the hosted_storage Storage Domain) three GlusterFS Storage Domains
- I can ping all three Ceph Cluster Nodes to/from all three oVirt Hosts
- The iSCSI Target on the Ceph Cluster has all three oVirt Hosts Initiators attached
- Each Initiator has all four Ceph Storage Pools attached
- I have set up CHAP Authorisation on the iSCSI Target's Initiators
- The Ceph Cluster Admin Portal reports that all three Initiators are "logged_in"
- I have previously connected Ceph iSCSI LUNs to the oVirt Cluster successfully (as an experiment), but had to remove and re-instate them for the "final" version(?).
- The oVirt Admin Portal (ie HostedEngine) reports that Initiators 1 & 2 (ie oVirt Hosts 1 & 2) are "logged_in" to all three iSCSI Gateways
- The oVirt Admin Portal reports that Initiator 3 (ie oVirt Host 3) is "logged_in" to iSCSI Gateways 1 & 2
- I can "force" Initiator 3 to become "logged_in" to iSCSI Gateway 3, but when I do this it is *not* persistent
- oVirt Hosts 1 & 2 can/have discovered all three iSCSI Gateways
- oVirt Hosts 1 & 2 can/have discovered all four LUNs/Targets on all three iSCSI Gateways
- oVirt Host 3 can only discover 2 of the iSCSI Gateways
- For Target/LUN 0 oVirt Host 3 can only "see" the LUN provided by iSCSI Gateway 1
- For Targets/LUNs 1-3 oVirt Host 3 can only "see" the LUNs provided by iSCSI Gateways 1 & 2
- oVirt Host 3 can *not* "see" any of the Targets/LUNs provided by iSCSI Gateway 3
- When I create a new oVirt Storage Domain for any of the four LUNs:
- I am presented with a message saying "The following LUNs are already in use..."
- I am asked to "Approve operation" via a checkbox, which I do
- As I watch the oVirt Admin Portal I can see the new iSCSI Storage Domain appear in the Storage Domain list, and then after a few minutes it is removed
- After those few minutes I am presented with this failure message: "Error while executing action New SAN Storage Domain: Network error during communication with the Host."
- I have looked in the engine.log and all I could find that was relevant (as far as I know) was this:
~~~
2022-11-28 19:59:20,506+11 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-1) [77b0c12d] Command 'CreateStorageDomainVDSCommand(HostName = ovirt_node_1.mynet.local, CreateStorageDomainVDSCommandParameters:{hostId='967301de-be9f-472a-8e66-03c24f01fa71', storageDomain='StorageDomainStatic:{name='data', id='2a14e4bd-c273-40a0-9791-6d683d145558'}', args='s0OGKR-80PH-KVPX-Fi1q-M3e4-Jsh7-gv337P'})' execution failed: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues
2022-11-28 19:59:20,507+11 ERROR [org.ovirt.engine.core.bll.storage.domain.AddSANStorageDomainCommand] (default task-1) [77b0c12d] Command 'org.ovirt.engine.core.bll.storage.domain.AddSANStorageDomainCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues (Failed with error VDS_NETWORK_ERROR and code 5022)
~~~
I cannot see/detect any "communication issue" - but then again I'm not 100% sure what I should be looking for.
I have looked online for an answer, and apart from not being able to get past Red Hat's "wall" to see the solutions they have, all I could find that was relevant was this: https://lists.ovirt.org/archives/list/devel@ovirt.org/thread/AVLORQNOLJHR... . If this *is* relevant then there is not enough context for me to proceed (eg *where* - on which host or VM - should that command be run?).
I also found (for a previous version of oVirt) notes about manually modifying the Postgres DB to resolve a similar issue. While I am more than comfortable doing this (I've been an SQL DBA for well over 20 years), it seems like asking for trouble - at least until I hear back from the oVirt devs that it is OK to do - and of course I'd need the relevant commands / locations / authorisations to get into the DB.
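For completeness, this is roughly how I "forced" the login for Initiator 3 and how I've been checking the sessions from the hosts (standard iscsiadm; the portal address and target IQN below are examples, CHAP details redacted):
~~~
# what Host 3 currently sees
iscsiadm -m session

# discovery against the third gateway (this is what comes back empty/missing on Host 3)
iscsiadm -m discovery -t sendtargets -p 192.168.1.13:3260

# the manual, non-persistent login
iscsiadm -m node -T iqn.2001-07.com.ceph:example-target -p 192.168.1.13:3260 --login
~~~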
Questions
---------
- Are the two issues (oVirt Host 3 not having a full picture of the Ceph iSCSI environment and the oVirt iSCSI Storage Domain creation failure) related?
- Do I need to "refresh" the iSCSI info on the oVirt Hosts, and if so, how do I do this?
- Do I need to "flush" the old LUNs from the oVirt Cluster, and if so, how do I do this?
- Where else should I be looking for info in the logs (& which logs)?
- Does *anyone* have any other ideas on how to resolve the situation - especially when using the Ceph iSCSI Gateways?
Thanks in advance
Cheers
Dulux-Oz
Import VM via KVM. Can't see vm's.
by piotret@wp.pl
Hi
I am trying to migrate VMs from oVirt 4.3.10.4-1.el7 to oVirt 4.5.3.2-1.el8.
I am using the KVM (via libvirt) provider.
My problem is that I can't see the VMs from the old oVirt when they are shut down.
When they are running I can see them, but I can't import them because "All chosen VMs are running in the external system and therefore have been filtered. Please see log for details."
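For reference, this is roughly how I'm checking what libvirt on the old host actually exposes to the provider (the connection URI is just an example of the kind I use):
~~~
# running VMs are listed; VMs that oVirt has shut down are not defined in libvirt at all
virsh -r -c 'qemu+ssh://root@old-ovirt-host/system' list --all
~~~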
Thank you for help.
Regards
How does ovirt handle disks across multiple iscsi LUNs
by peterd@mdg-it.com
A possibly obvious question I can't find the answer to anywhere: how does oVirt allocate VM disk images when a storage domain has multiple LUNs? Are they allocated one per LUN, so that if, e.g., a LUN runs out of space, only the disks on that LUN will be unable to write? Or are they distributed across LUNs, so that if a LUN fails due to a storage failure etc. the entire storage domain can be affected?
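In case it helps whoever answers: as far as I can tell, a block storage domain shows up on the hosts as an LVM volume group (each LUN seems to be a physical volume, each disk image a logical volume), so this is roughly how I've been trying to inspect the layout from a host (just an inspection sketch, the column lists are my own choice):
~~~
# each LUN in the domain is a physical volume in the domain's volume group
pvs -o pv_name,vg_name,pv_size,pv_free

# each disk image is a logical volume; the devices column shows which PV(s)/LUN(s) back it
lvs -o lv_name,vg_name,lv_size,devices
~~~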
Many thanks in advance, Peter