Re: HCI Gluster Hosted Engine unexpected behavior
by Patryk Lamakin
And when using a 2 + 1 arbiter replica, if I understand correctly, you can
only disable 1 replica host other than the arbiter? What happens in this
case if you disable only the host with the arbiter and leave the other 2
replicas running?
5 months, 3 weeks
HCI Gluster Hosted Engine unexpected behavior
by Patrick Lomakin
Hey, everybody. I have 3 hosts on which a Gluster replica 3 volume called “engine” is deployed. When I try to put 2 of the 3 hosts into maintenance mode, my deployment crashes. I originally expected that with replica 3 I could shut down 2 of the hosts and everything would keep working. However, I saw that Gluster's default server quorum does not allow more than one host to be disabled. But even after disabling the quorum and verifying that the Gluster volume is available with only one host enabled, the Hosted Engine still does not access the storage. Can someone then explain to me the point of using replica 3 if I can't disable 2 hosts, and is there any way to fix this behavior?
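For reference, the quorum behavior described above is controlled per volume by two standard Gluster options; below is a minimal sketch of inspecting and relaxing them on the “engine” volume (whether relaxing quorum is advisable for a hosted-engine volume is exactly the trade-off in question here):
———
# Show the current quorum-related settings on the "engine" volume
gluster volume get engine cluster.quorum-type
gluster volume get engine cluster.server-quorum-type

# Relax both server and client quorum (what "disabling the quorum" above
# amounts to); this keeps the volume writable with a single brick up,
# at the cost of split-brain protection.
gluster volume set engine cluster.server-quorum-type none
gluster volume set engine cluster.quorum-type none
———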
5 months, 3 weeks
Re: Problems with RHEL 9.4 hosts
by Devin A. Bougie
Unfortunately I’m not exactly sure what the problem was, but I was able to get the fully-updated EL9.4 host back in the cluster after manually deleting all of the iSCSI nodes.
Some of the iscsiadm commands printed in the log worked fine when run manually:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m iface
bond1 tcp,<empty>,<empty>,bond1,<empty>
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=new
New iSCSI node [tcp:[hw=,ip=,net_if=bond1,iscsi_if=bond1] 192.168.56.54,3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012] added
[root@lnxvirt06 ~]# iscsiadm -m node
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.54:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.001
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.56:3260,1 iqn.2002-10.com.infortrend:raid.sn8073743.101
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58204.112
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.50:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.001
192.168.56.51:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.012
192.168.56.52:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.101
192.168.56.53:3260,1 iqn.2002-10.com.infortrend:raid.uid58207.112
———
But others didn’t, even though the only difference is the portal:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=new
iscsiadm: Error while adding record: invalid parameter
———
Likewise, I could delete some nodes using iscsiadm but not others:
———
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.54:3260,1 --op=delete
[root@lnxvirt06 ~]# /sbin/iscsiadm -m node -T iqn.2002-10.com.infortrend:raid.sn8087428.012 -I bond1 -p 192.168.56.55:3260,1 --op=delete
iscsiadm: Could not execute operation on all records: invalid parameter
[root@lnxvirt06 ~]# iscsiadm -m node -p 192.168.56.50 -o delete
iscsiadm: Could not execute operation on all records: invalid parameter
———
At this point I wiped out /var/lib/iscsi/, rebooted, and everything just worked.
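A rough sketch of that kind of cleanup, assuming the standard open-iscsi database layout (the node and discovery records that iscsiadm refused to delete live under /var/lib/iscsi/nodes and /var/lib/iscsi/send_targets):
———
# With the host in maintenance, log out of any remaining sessions, stop the
# daemon, remove the on-disk iSCSI databases, and reboot so the records are
# recreated cleanly on the next connection.
iscsiadm -m session -u
systemctl stop iscsid iscsid.socket
rm -rf /var/lib/iscsi/nodes/* /var/lib/iscsi/send_targets/*
reboot
———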
Thanks so much for your time and help!
Sincerely,
Devin
> On Jun 7, 2024, at 10:26 AM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>
> 2024-06-07 09:59:16,720-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,751-0400 INFO (jsonrpc/0) [storage.iscsi] Adding iscsi node for target 192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012 iface bond1 (iscsi:192)
> 2024-06-07 09:59:16,751-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.012', '-I', 'bond1', '-p', '192.168.56.55:3260,1', '--op=new'] (iscsiadm:104)
> 2024-06-07 09:59:16,785-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
> 2024-06-07 09:59:16,825-0400 ERROR (jsonrpc/0) [storage.storageServer] Could not configure connection to 192.168.56.55:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.012 and iface <IscsiInterface name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: Error while adding record: invalid parameter\n') (storageServer:580)
> Can you try to run those commands manually on the host?
> And see what it gives :)
> On 7/06/2024 16:13, Devin A. Bougie wrote:
>> Thank you! I added a warning at the line you indicated, which produces the following output:
>>
>> ———
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,452-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,493-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,532-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,565-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,595-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,636-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,670-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,720-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,751-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.012', '-I', 'bond1', '-p', '192.168.56.55:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,785-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,825-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8073743.001', '-I', 'bond1', '-p', '192.168.56.54:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,856-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,889-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8073743.101', '-I', 'bond1', '-p', '192.168.56.56:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,924-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,957-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.sn8087428.112', '-I', 'bond1', '-p', '192.168.56.57:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:16,987-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,018-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.012', '-I', 'bond1', '-p', '192.168.56.51:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,051-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,079-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.001', '-I', 'bond1', '-p', '192.168.56.50:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,112-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,142-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.112', '-I', 'bond1', '-p', '192.168.56.53:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,174-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,204-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'node', '-T', 'iqn.2002-10.com.infortrend:raid.uid58204.101', '-I', 'bond1', '-p', '192.168.56.52:3260,1', '--op=new'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:17,237-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,186-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,234-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,268-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,310-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,343-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,370-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,408-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> /var/log/vdsm/vdsm.log:2024-06-07 09:59:44,442-0400 WARN (jsonrpc/0) [storage.iscsiadm] iscsiadm executing: ['/sbin/iscsiadm', '-m', 'iface'] (iscsiadm:104)
>> ———
>>
>> The full vdsm.log is below.
>>
>> Thanks again,
>> Devin
>>
>>
>>
>>
>> > On Jun 7, 2024, at 8:14 AM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>> >
>> > Weird, I have the same 6.2.1.9-1 version, and here it works.
>> > You can try to add some print here: https://github.com/oVirt/vdsm/blob/4d11cae0b1b7318b282d9f90788748c0ef3cc9...
>> >
>> > This should print all executed iscsiadm commands.
>> >
>> >
>> > On 6/06/2024 20:50, Devin A. Bougie wrote:
>> >> Awesome, thanks again. Yes, the host is fixed by just downgrading the iscsi-initiator-utils and iscsi-initiator-utils-iscsiuio packages from:
>> >> 6.2.1.9-1.gita65a472.el9.x86_64
>> >> to:
>> >> 6.2.1.4-3.git2a8f9d8.el9.x86_64
>> >>
>> >> Any additional pointers of where to look or how to debug the iscsiadm calls would be greatly appreciated.
>> >>
>> >> Many thanks!
>> >> Devin
>> >>
>> >>> On Jun 6, 2024, at 2:04 PM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>> >>>
>> >>> 2024-06-06 13:28:10,478-0400 ERROR (jsonrpc/5) [storage.storageServer] Could not configure connection to 192.168.56.57:3260,1 iqn.2002-10.com.infortrend:raid.sn8087428.112 and iface <IscsiInterface name='bond1' transport='tcp' netIfaceName='bond1'>: (7, b'', b'iscsiadm: Error while adding record: invalid parameter\n') (storageServer:580)
>> >>>
>> >>> Seems like some issue with iscsiadm calls.
>> >>> Might want to debug which calls it does or what version change there is for iscsiadm.
>> >>>
>> >>>
>> >>>
>> >>> "Devin A. Bougie" <devin.bougie(a)cornell.edu> schreef op 6 juni 2024 19:32:29 CEST:
>> >>> Thanks so much! Yes, that patch fixed the “out of sync network” issue. However, we’re still unable to join a fully updated 9.4 host to the cluster - now with "Failed to connect Host to Storage Servers”. Downgrading all of the updated packages fixes the issue.
>> >>>
>> >>> Please see the attached vdsm.log and supervdsm.log from the host after updating it to EL 9.4 and then trying to activate it. Any more suggestions would be greatly appreciated.
>> >>>
>> >>> Thanks again,
>> >>> Devin
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>> On Jun 5, 2024, at 2:35 AM, Jean-Louis Dupond <jean-louis(a)dupond.be> wrote:
>> >>>>
>> >>>> You most likely need the following patch:
>> >>>> https://github.com/oVirt/vdsm/commit/49eaf70c5a14eb00e85eac5f91ac36f010a9...
>> >>>>
>> >>>> Test with that, guess it's fixed then :)
>> >>>>
>> >>>> On 4/06/2024 22:33, Devin A. Bougie wrote:
>> >>>>> Are there any known incompatibilities with RHEL 9.4 (and derivatives)?
>> >>>>>
>> >>>>> We are running a 7-node ovirt 4.5.5-1.el8 self hosted engine cluster, with all of the hosts running AlmaLinux 9. After upgrading from 9.3 to 9.4, every node started flapping between “Up” and “NonOperational,” with VMs in turn migrating between hosts.
>> >>>>>
>> >>>>> I believe the underlying issue (or at least the point I got stuck at) was with two of our logical networks being stuck “out of sync” on all hosts. I was unable to synchronize networks or setup the networks using the UI. A reinstall of a host succeeded but then the host immediately reverted to the same state with the same networks being out of sync.
>> >>>>>
>> >>>>> I eventually found that if I downgraded the host from 9.4 to 9.3, it immediately became stable and back online.
>> >>>>>
>> >>>>> Are there any known incompatibilities with RHEL 9.4 (and derivatives)? If not, I’m happy to upgrade a single node to test. Please just let me know what log files and details would be most helpful in debugging what goes wrong.
>> >>>>>
>> >>>>> (And yes, I know we need to upgrade the hosted engine VM itself now that CentOS Stream 8 is now EOL).
>> >>>>>
>> >>>>> Many thanks,
>> >>>>> Devin
>> >>>>>
5 months, 4 weeks
Problems with RHEL 9.4 hosts
by Devin A. Bougie
Are there any known incompatibilities with RHEL 9.4 (and derivatives)?
We are running a 7-node oVirt 4.5.5-1.el8 self-hosted engine cluster, with all of the hosts running AlmaLinux 9. After upgrading from 9.3 to 9.4, every node started flapping between “Up” and “NonOperational,” with VMs in turn migrating between hosts.
I believe the underlying issue (or at least the point I got stuck at) was that two of our logical networks were stuck “out of sync” on all hosts. I was unable to synchronize the networks or set them up using the UI. A reinstall of a host succeeded, but the host immediately reverted to the same state with the same networks out of sync.
I eventually found that if I downgraded the host from 9.4 to 9.3, it immediately became stable and came back online.
Are there any known incompatibilities with RHEL 9.4 (and derivatives)? If not, I’m happy to upgrade a single node to test. Please just let me know what log files and details would be most helpful in debugging what goes wrong.
(And yes, I know we need to upgrade the hosted engine VM itself now that CentOS Stream 8 is EOL.)
Many thanks,
Devin
5 months, 4 weeks
No valid authentication mechanism found
by Mathieu Valois
Hi everyone,
we've configured oVirt 4.5.5 with Keycloak OIDC using an LDAP (FreeIPA) backend.
When we wish to log in on the portal, i.e. https://example.org/ovirt-engine, either via the "Administration Portal" link or directly with the login button in the upper-right corner, it works (we get redirected to Keycloak and so on).
However, if we first try to log in by clicking on the "VM Portal", it fails. We get redirected back to the portal with a "No valid authentication mechanism found" error, as seen on the screenshot. Then we're stuck until we clear the browser's cache and cookies, even when clicking on "Admin portal".
The backend logs only show this:
2024-06-04 11:56:24,319-04 ERROR
[org.ovirt.engine.core.sso.service.SsoService] (default task-86) []
Aucun mécanisme d'authentification valide trouvé.
which translates to the message seen on the screenshot.
Do you have any leads on how to resolve such an issue?
Thanks for your kind support,
Mathieu Valois
Security, systems and networks engineer
6 months
Connection to ovirt-imageio service has failed
by Денис Панкратьев
Hello!
I have oVirt hypervisor hosts (4.5.5, CentOS Stream 8) and a hosted-engine VM (4.5.6-1.el8).
I want to upload an .ISO to a storage domain and get the following error:
"Connection to ovirt-imageio service has failed. Ensure that ovirt-engine certificate is registered as a valid CA in the browser."
But:
1. Valid CA installed in browser
2. On the hosted-engine VM, the ovirt-imageio service is running (no problem)
What could be the problem? Maybe someone has encountered it before?
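One way to check the certificate path outside the browser is to fetch the engine CA from the standard oVirt PKI resource URL and test it against the imageio endpoint. A sketch, where engine.example.org is a placeholder for the engine FQDN and 54323 is assumed to be the imageio port on the engine (adjust if the setup differs):
———
# Download the engine CA certificate
curl -o ca.pem 'https://engine.example.org/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'

# A TLS handshake against the imageio port that completes without
# certificate errors is what matters here; the HTTP status itself
# can be ignored.
curl -v --cacert ca.pem https://engine.example.org:54323/
———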
6 months
Re: Any plans on oVirt Major release with new feature?
by Alessandro
I don’t understand why Oracle isn’t taking the lead seriously. OLVM and oVirt could really be the “new vSphere” for on-prem IaaS. Customers are still using VMs, thousands of them.
I am struggling inside my company to try oVirt/OLVM alongside VMware, but how can we be sure that oVirt/OLVM will not disappear in 1 or 2 years?
I saw an interview a month ago with Storware and Oracle, and although I understood that Oracle Linux KVM will still be here for many years, I didn’t feel the same about the management part, that is, the engine, which is OLVM.
So, the question for Oracle (which also contributes to oVirt) is: for how long (how many years) will they support and develop OLVM?
I hope to receive an answer.
Regards,
Alessandro
6 months
VM Snapshot tab error Code 500
by doboscsongor@gmail.com
Hi everybody,
I'm new to the oVirt environment, so I started with a fresh install on two physical servers using the hosted-engine scenario. Currently, I have version 4.5.6-1.el8 installed. I am using FC to access our storage.
I have created a test VM and installed MS Windows Server 2022 on it. On the VM, I installed the guest tools, and everything is working fine except for the Snapshots tab. If I don't have any snapshots for the VM, I can view the Snapshots tab, but as soon as I create a snapshot, the tab shows an error:
"Error while executing action: A request to the server failed with the following status code: 500."
Does anyone have any idea why I am getting this error?
Thanks in advance!
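A 500 from the portal usually has a matching stack trace on the engine side; a minimal way to capture it while reproducing the error, assuming the default log locations on the hosted-engine VM:
———
# Watch the engine and UI logs while opening the Snapshots tab
tail -f /var/log/ovirt-engine/engine.log /var/log/ovirt-engine/ui.log
———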
6 months