PKIX path validation failed
by Ali Gusainov
Hello experts.
Environment:
oVirt: Software Version: 4.4.10.7-1.el8
OS: CentOS Linux release 8.5.2111
Symptoms:
1. At the login prompt I see this:
"PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed"
which was successfully resolved by running "engine-setup --offline".
2. Now the host is in 'Unassigned' status and all VMs are marked with a '?' symbol.
In vdsm.log I found this message:
ERROR (Reactor thread) [ProtocolDetector.SSLHandshakeDispatcher] ssl handshake: socket error, address: ::ffff:..... (sslutils:272)
In engine.log I found these messages:
ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-2) [] Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException: VDSNetworkException: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed
...
2024-06-10 17:54:13,576+05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-8) [] Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException: VDSNetworkException: PKIX path validation failed: java.security.cert.CertPathValidatorException: validity check failed
Cause:
Certificate expired.
Questions:
1. How do I bring the host back 'Online'?
2. How do I properly renew the SSL certificates?
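For reference, a minimal sketch of how the expiry can be checked and the certificates renewed, assuming the default oVirt PKI paths (adjust paths and procedure to your deployment):
———
# On the engine: check when the CA and engine certificates expire
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/ca.pem
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/certs/engine.cer

# On each host: check the VDSM certificate used for the engine<->host TLS link
openssl x509 -noout -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem

# engine-setup (as already run above) renews the engine-side certificates;
# expired host certificates are typically re-issued by putting the host into
# Maintenance and using "Enroll Certificate" (or Reinstall) in the
# Administration Portal, which should also bring the host back up.
engine-setup --offline
———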
7 months
Can't access (ping) other 10G interface on host from Hosted Engine
by Patrick Lomakin
Hello community. I am using the edge version of the oVirt installation with hosts running Rocky 9. One of my tasks was to partition the networks in the cluster. To separate traffic more cleanly in oVirt, I divided the network into VLANs: VLAN 2 for the “ovirtmgmt” network, VLAN 3 for “gluster-net”, and VLAN 4 for “migration”. I configured routing between the VLANs. Each host has a 10G NIC for the gluster network and two bonded 1G NICs for the management network. In the cluster settings I switched the gluster network to the configured VLAN 3 in oVirt. Before that I configured a gluster network for each host via oVirt, attached it to the 10G interface, assigned static IPs, and synchronized the networks. Then I tried to connect bricks and create a volume through the oVirt panel, but I got an error accessing an IP from VLAN 3. I pinged HOST1 <---> HOST2 <---> HOST3 and the gluster network pings perfectly in either direction from any host. But the problem is that the Hosted Engine can't ping the gluster IP on the 10G interface of any host. Consequently, the Hosted Engine does not see the IP configured on the second 10G interface of the host. Only the first interface, in the VLAN 2 subnet, responds to ping. What could be the problem? In my eyes this is a pretty common setup that is used everywhere. Thanks for any help.
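A minimal sketch of checks that may help narrow this down; the addresses below (192.168.3.x for VLAN 3) are hypothetical examples, and the assumption is that the Hosted Engine VM only has a vNIC on ovirtmgmt (VLAN 2), so reaching VLAN 3 depends entirely on the inter-VLAN routing:
———
# On the Hosted Engine VM: which route and source address are used to reach
# a host's gluster-net IP, and does anything answer?
ip route get 192.168.3.11
ping -c 3 192.168.3.11
traceroute -n 192.168.3.11

# On the host: confirm the VLAN 3 address is really bound to the 10G interface
ip -br addr show
———
If "ip route get" on the engine VM resolves via the default gateway, the next thing to check is whether that router actually forwards between VLAN 2 and VLAN 3, and whether the hosts have a return route to the engine's VLAN 2 subnet.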
7 months, 1 week
Unable to access the oVirt Manager console and are also unable to connect via SSH
by Sachendra Shukla
HI Team,
We are currently unable to access the oVirt Manager console and are also
unable to connect via SSH. The error message we are receiving is: "The
server is temporarily unable to service your request due to maintenance
downtime or capacity problems. Please try again later."
Please provide a resolution if you have one.
Note: We are able to ping the oVirt Manager IP, and the VMs are also working.
Below are the snapshots for your reference.
[Screenshots attached: image.png ×3]
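If it helps, a minimal sketch of checks that can be run from one of the hypervisors when the engine web UI and SSH are unresponsive but the VM still answers ping (assuming this is a hosted-engine deployment):
———
# How the HA agents see the engine VM
hosted-engine --vm-status

# Open a serial console to the engine VM and log in locally
hosted-engine --console

# Once logged in on the engine VM, the usual suspects are a full disk or a
# stopped service:
df -h
systemctl status ovirt-engine httpd
———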
--
Regards,
*Sachendra Shukla*
IT Administrator
Yagna iQ, Inc. and subsidiaries
Email: Sachendra.shukla(a)yagnaiq.com
Website: https://yagnaiq.com
7 months, 1 week
Re: HCI Gluster Hosted Engine unexpected behavior
by Patryk Lamakin
And when using a 2 + 1 arbiter replica, if I understand correctly, you can
only disable 1 replica host other than the arbiter? What happens in this
case if you disable only the host with the arbiter and leave the other 2
replicas running?
7 months, 1 week
HCI Gluster Hosted Engine unexpected behavior
by Patrick Lomakin
Hey, everybody. I have 3 hosts on which a Gluster replica 3 volume called “engine” is deployed. When I try to put 2 of the 3 hosts into maintenance mode, my deployment crashes. I originally expected that with replica 3 I could shut down 2 of the hosts and everything would keep working. However, I saw that Gluster's default is server quorum, which does not allow more than one host to be disabled. But even after disabling quorum and verifying that the Gluster volume is available with only one host enabled, Hosted Engine still does not access the storage. Can anyone explain the point of using replica 3 if I can't disable 2 hosts, and is there any way to fix this behavior?
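For what it's worth, a minimal sketch of the quorum settings involved, using the volume name “engine” from your setup; treat the "set" lines as an illustration only, since relaxing quorum on a replica 3 volume trades availability for split-brain risk:
———
# Show the current client- and server-side quorum settings
gluster volume get engine cluster.quorum-type
gluster volume get engine cluster.server-quorum-type

# With the default client quorum ("auto"), a replica 3 volume stays writable
# only while at least 2 of the 3 bricks are up, so stopping 2 hosts makes the
# volume read-only and the hosted-engine storage on top of it unavailable.

# Relaxing client quorum (illustration only, generally not recommended):
gluster volume set engine cluster.quorum-type fixed
gluster volume set engine cluster.quorum-count 1
———
Note also that the hosted-engine storage domain is typically mounted via the server configured in /etc/ovirt-hosted-engine/hosted-engine.conf (storage=, plus any backup-volfile-servers in mnt_options), so if the one surviving host is not among those servers the mount can still fail.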
7 months, 1 week
Re: Problems with RHEL 9.4 hosts
by Devin A. Bougie
Unfortunately I’m not exactly sure what the problem was, but I was able to get the fully-updated EL9.4 host back in the cluster after manually deleting all of the iSCSI nodes.
Some of the iscsiadm commands printed in the log worked fine:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m iface
bond1 tcp,<empty>,<empty>,bond1,<empty>
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=new
New iSCSI node [tcp:[hw=,ip=,net_if=bond1,iscsi_if=bond1] 192.168.56.54,3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012] added
[root@lnxvirt06 ~]# iscsiadm -m node
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.112
———
But others didn’t, even though the only difference is the portal:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=new
iscsiadm: Error while adding record: invalid parameter
———
Likewise, I could delete some nodes using iscsiadm but not others:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=delete
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=delete
iscsiadm: Could not execute operation on all records: invalid parameter
[root@lnxvirt06 ~]# iscsiadm -m node -p 192.168.56.50 -o delete
iscsiadm: Could not execute operation on all records: invalid parameter
———
At this point I wiped out /var/lib/iscsi/, rebooted, and everything just worked.
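In case it helps anyone else who lands in the same state, a rough sketch of that cleanup (only with the host in Maintenance; it wipes the local open-iscsi database of nodes, send_targets and ifaces, which vdsm re-creates when the host is activated):
———
systemctl stop iscsid iscsid.socket
rm -rf /var/lib/iscsi/*
reboot
———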
Thanks so much for your time and help!
Sincerely,
Devin
> On Jun 7, 2024, at 10:26 AM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>
> 2024-06-07 09:59:16,720-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,751-0400 INFO (jsonrpc/0) [storage.iscsi] Adding iscsi node for target 192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012 iface bond1 (iscsi:192)
> 2024-06-07 09:59:16,751-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.012', '-I', 'bond1', '-p', '192.168.56.55:3260,1', '--op=new'] (iscsiadm:104)
> 2024-06-07 09:59:16,785-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,825-0400 ERROR (jsonrpc/0) [storage.storageServer] Could not configure connection to 192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012 and iface <IscsiInterface name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: Error while adding record: invalid parameter\n') (storageServer:580)
> Can you try to run those commands manually on the host?
> And see what it gives :)
> On 7/06/2024 16:13, Devin A. Bougie wrote:
>> Thank you! I added a warning at the line you indicated, which produces the following output:
>>
>> ———
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,452-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,493-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,532-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,565-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,595-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,636-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,670-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,720-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,751-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.012', '-I', 'bond1', '-p', '192.168.56.55:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,785-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,825-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8073743.001', '-I', 'bond1', '-p', '192.168.56.54:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,856-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,889-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8073743.101', '-I', 'bond1', '-p', '192.168.56.56:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,924-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,957-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.112', '-I', 'bond1', '-p', '192.168.56.57:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,987-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,018-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.012', '-I', 'bond1', '-p', '192.168.56.51:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,051-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,079-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.001', '-I', 'bond1', '-p', '192.168.56.50:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,112-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,142-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.112', '-I', 'bond1', '-p', '192.168.56.53:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,174-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,204-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.101', '-I', 'bond1', '-p', '192.168.56.52:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,237-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,186-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,234-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,268-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,310-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,343-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,370-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,408-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,442-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> ———
>>
>> The full vdsm.log is below.
>>
>> Thanks again,
>> Devin
>>
>>
>>
>>
>> > On Jun 7, 2024, at 8:14 AM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>> >
>> > Weird, I have the same 6.2.1.9-1 version, and here it works.
>> > You can try to add some print here: https://github.com/oVirt/vdsm/blob/4d11cae0b1b7318b282d9f90788748c0ef3cc9...
>> >
>> > This should print all executed iscsiadm commands.
>> >
>> >
>> > On 6/06/2024 20:50, Devin A. Bougie wrote:
>> >> Awesome, thanks again. Yes, the host is fixed by just downgrading the iscsi-initiator-utils and iscsi-initiator-utils-iscsiuio packages from:
>> >> 6.2.1.9-1.gita65a472.el9.x86_64
>> >> to:
>> >> 6.2.1.4-3.git2a8f9d8.el9.x86_64
>> >>
>> >> Any additional pointers of where to look or how to debug the iscsiadm calls would be greatly appreciated.
>> >>
>> >> Many thanks!
>> >> Devin
>> >>
>> >>> On Jun 6, 2024, at 2:04 PM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>> >>>
>> >>> 2024-06-06 13:28:10,478-0400 ERROR (jsonrpc/5) [storage.storageServer] Could not configure connection to 192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112 and iface <IscsiInterface name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: Error while adding record: invalid parameter\n') (storageServer:580)
>> >>>
>> >>> Seems like some issue with iscsiadm calls.
>> >>> Might want to debug which calls it does or what version change there is for iscsiadm.
>> >>>
>> >>>
>> >>>
>> >>> "Devin A. Bougie" <devin.bougie(a)cornell.edu> schreef op 6 juni 2024 19:32:29 CEST:
>> >>> Thanks so much! Yes, that patch fixed the “out of sync network” issue. However, we’re still unable to join a fully updated 9.4 host to the cluster - now with "Failed to connect Host to Storage Servers”. Downgrading all of the updated packages fixes the issue.
>> >>>
>> >>> Please see the attached vdsm.log and supervdsm.log from the host after updating it to EL 9.4 and then trying to activate it. Any more suggestions would be greatly appreciated.
>> >>>
>> >>> Thanks again,
>> >>> Devin
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>> On Jun 5, 2024, at 2:35 AM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>> >>>>
>> >>>> You most likely need the following patch:
>> >>>> https://github.com/oVirt/vdsm/commit/49eaf70c5a14eb00e85eac5f91ac36f010a9...
>> >>>>
>> >>>> Test with that, guess it's fixed then :)
>> >>>>
>> >>>> On 4/06/2024 22:33, Devin A. Bougie wrote:
>> >>>>> Are there any known incompatibilities with RHEL 9.4 (and derivatives)?
>> >>>>>
>> >>>>> We are running a 7-node ovirt 4.5.5-1.el8 self hosted engine cluster, with all of the hosts running AlmaLinux 9. After upgrading from 9.3 to 9.4, every node started flapping between “Up” and “NonOperational,” with VMs in turn migrating between hosts.
>> >>>>>
>> >>>>> I believe the underlying issue (or at least the point I got stuck at) was with two of our logical networks being stuck “out of sync” on all hosts. I was unable to synchronize networks or setup the networks using the UI. A reinstall of a host succeeded but then the host immediately reverted to the same state with the same networks being out of sync.
>> >>>>>
>> >>>>> I eventually found that if I downgraded the host from 9.4 to 9.3, it immediately became stable and back online.
>> >>>>>
>> >>>>> Are there any known incompatibilities with RHEL 9.4 (and derivatives)? If not, I’m happy to upgrade a single node to test. Please just let me know what log files and details would be most helpful in debugging what goes wrong.
>> >>>>>
>> >>>>> (And yes, I know we need to upgrade the hosted engine VM itself now that CentOS Stream 8 is now EOL).
>> >>>>>
>> >>>>> Many thanks,
>> >>>>> Devin
>> >>>>>
7 months, 1 week
Problems with RHEL 9.4 hosts
by Devin A. Bougie
Are there any known incompatibilities with RHEL 9.4 (and derivatives)?
We are running a 7-node oVirt 4.5.5-1.el8 self-hosted-engine cluster, with all of the hosts running AlmaLinux 9. After upgrading from 9.3 to 9.4, every node started flapping between “Up” and “NonOperational,” with VMs in turn migrating between hosts.
I believe the underlying issue (or at least the point where I got stuck) was with two of our logical networks being stuck “out of sync” on all hosts. I was unable to synchronize the networks or set them up using the UI. A reinstall of a host succeeded, but then the host immediately reverted to the same state with the same networks out of sync.
I eventually found that if I downgraded the host from 9.4 to 9.3, it immediately became stable and came back online.
Are there any known incompatibilities with RHEL 9.4 (and derivatives)? If not, I’m happy to upgrade a single node to test. Please just let me know what log files and details would be most helpful in debugging what goes wrong.
(And yes, I know we need to upgrade the hosted engine VM itself now that CentOS Stream 8 is now EOL).
Many thanks,
Devin
7 months, 1 week
No valid authentication mechanism found
by Mathieu Valois
Hi everyone,
We've configured oVirt 4.5.5 with Keycloak OIDC using an LDAP (FreeIPA)
backend.
When we try to log in to the portal, i.e.
https://example.org/ovirt-engine, using the "Administration Portal" link or
directly with the button in the upper-right corner, it works (we
get redirected to Keycloak and so on).
However, if we first try to log in by clicking on "VM Portal", it
fails. We get redirected back to the portal with a "No valid
authentication mechanism found" error, as seen on the following screenshot.
Then we are stuck until we clear the browser's cache and cookies,
even when clicking on "Admin portal".
Backend logs only show this :
2024-06-04 11:56:24,319-04 ERROR
[org.ovirt.engine.core.sso.service.SsoService] (default task-86) []
Aucun mécanisme d'authentification valide trouvé.
which is French for "No valid authentication mechanism found", i.e. the message seen on the screenshot.
Do you have any leads on how to resolve such an issue?
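In case it is useful, a minimal sketch of where to look next; the grep patterns are just examples:
———
# Engine-side logs around the failed login attempt
grep -i "authentication mechanism" /var/log/ovirt-engine/engine.log
grep -i sso /var/log/ovirt-engine/server.log
———
Comparing the browser's network tab for a working "Administration Portal" login against the failing "VM Portal" login (which redirect or callback differs) can also help narrow down where the flow breaks.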
Thanks for your kind support,
Mathieu Valois
Security, systems and networks engineer
7 months, 1 week
Connection to ovirt-imageio service has failed
by Денис Панкратьев
Hello!
I have ovirt hypervisor hosts (4.5.5, CentOS Stream 8) and hosted-engine VM (4.5.6-1.el8).
I want to upload .ISO to storage domain and get the following error:
"Connection to ovirt-imageio service has failed. Ensure that ovirt-engine certificate is registered as a valid CA in the browser."
But:
1. Valid CA installed in browser
2. On hosted-engine VM service ovirt-imageio is running (no prolbem)
What could be the problem, maybe someone has encountered it?
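For completeness, a minimal sketch of how this is usually narrowed down; replace engine.example.com with your engine FQDN:
———
# Fetch the engine CA certificate (this is the CA the browser must trust)
curl -o ovirt-ca.pem \
  'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'

# On the hosted-engine VM: confirm ovirt-imageio is listening on its proxy
# port (54323) and that the service is healthy
ss -tlnp | grep 54323
systemctl status ovirt-imageio
———
After importing ovirt-ca.pem into the browser's trusted authorities, the "Test Connection" button in the upload dialog should succeed before the actual upload is started.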
7 months, 1 week
Re: Any plans on oVirt Major release with new feature?
by Alessandro
I don’t understand why Oracle isn’t taking the lead seriously. OLVM and oVirt could really be the “new vSphere” for on-prem IaaS. Customers are still using VMs, thousands of VMs.
I am struggling inside my company to try oVirt/OLVM alongside VMware, but how can we be sure that oVirt/OLVM will not disappear in 1 or 2 years?
I saw an interview a month ago with Storware and Oracle, and although I understood that Oracle Linux KVM will still be here for many years, I didn’t feel the same about the management part, i.e. the engine, which is OLVM.
So the question is for Oracle (which also contributes to oVirt): for how long (how many years) will they support and develop OLVM?
I hope to receive an answer.
Regards,
Alessandro
7 months, 2 weeks