Hi,
We switched from Gluster to NFS provided by a SAN array: maybe it was a matter of a
combination of factors (configuration/version/whatever), but Gluster was unstable for us.
SPICE/QXL in RHEL 9: yeah, I understand that it is important for some people (I saw that
someone is maintaining forks or similar).
I think that oVirt 4.5 (nightly build __) might be OK for some time, but I think the
alternatives are:
- OpenStack for larger setups (but be careful with the distribution - as I remember, Red
Hat is abandoning TripleO and introducing OpenShift-based tooling for installing
OpenStack)
- Proxmox and CloudStack for all sizes.
- Maybe XCP-ng + (paid?) Xen Orchestra, but I trust KVM/QEMU more than Xen __
- OpenShift Virtualization/OKD Virtualization - I don't know...
Actually, it might be good if someone who has moved from oVirt to OpenShift
Virtualization/OKD Virtualization could comment specifically. Not sure if the statement
below (https://news.ycombinator.com/item?id=32832999) is still correct, and what exactly
the consequences are of "OpenShift Virtualization is just to give a path/time to migrate
to containers":
"The whole purpose behind OpenShift Virtualization is to aid in organization
modernization as a way to consolidate workloads onto a single platform while giving app
dev time to migrate their work to containers and microservice based deployments."
BR,
Konstantin
On 13.07.23, 09:10, "Alex McWhirter" <alex@triadic.us> wrote:
We still have a few oVirt and RHV installs kicking around, but between
this and some core features we use being removed from el8/9 (gluster,
spice / qxl, and probably others soon at this rate) we've heavily been
shifting gears away from both Red Hat and oVirt. Not to mention the
recent drama...
In the past we toyed around with the idea of helping maintain oVirt, but
with the list of things we'd need to support growing beyond oVirt and
into other bits as well, we aren't equipped to fight on multiple fronts
so to speak.
For the moment we've found a home with SUSE / Apache CloudStack, and
when el7 EOLs, that's likely going to be our entire stack moving
forward.
On 2023-07-13 02:21, eshwayri@gmail.com wrote:
I am beginning to have very similar thoughts. It's working fine for
me now, but at some point something big is going to break. I already
have VMware running, and in fact my two ESXi nodes have the exact
same hardware as my two KVM nodes. It would be simple to do, but I
really don't want to go just yet. At the same time, I don't want to
be the last person turning off the lights. Difficult times.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/EJFIRAT6TNC...